[jira] [Commented] (HADOOP-16158) DistCp to support checksum validation when copy blocks in parallel

2019-04-12 Thread Kai Xie (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16816848#comment-16816848
 ] 

Kai Xie commented on HADOOP-16158:
--

Thanks Steve for the review comments and backporting tips!

I addressed the review comments and submitted patch 005 for trunk. Let's see if 
Jenkins is happy.

 

> DistCp to support checksum validation when copy blocks in parallel
> --
>
> Key: HADOOP-16158
> URL: https://issues.apache.org/jira/browse/HADOOP-16158
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 3.2.0, 2.9.2, 3.0.3, 3.1.2
>Reporter: Kai Xie
>Assignee: Kai Xie
>Priority: Major
> Attachments: HADOOP-16158-001.patch, HADOOP-16158-002.patch, 
> HADOOP-16158-003.patch, HADOOP-16158-004.patch, HADOOP-16158-005.patch
>
>
> Copying blocks in parallel (enabled when blocks per chunk > 0) is a great 
> DistCp improvement that can hugely speed up copying big files. 
> But its checksum validation is skipped, e.g. in 
> `RetriableFileCopyCommand.java`
>  
> {code:java}
> if (!source.isSplit()) {
>   compareCheckSums(sourceFS, source.getPath(), sourceChecksum,
>   targetFS, targetPath);
> }
> {code}
> and this could result in checksum/data mismatch without notifying 
> developers/users (e.g. HADOOP-16049).
> I'd like to provide a patch to add the checksum validation.
>  
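For illustration, a minimal sketch of what unconditional validation could look like. `ChecksumValidationSketch` and its byte-array comparison are hypothetical stand-ins for DistCp's `compareCheckSums` helper, not the actual patch; the real helper compares `FileChecksum` objects from the source and target filesystems.

```java
import java.io.IOException;
import java.util.Arrays;

// Hypothetical sketch: validate the target file's checksum even when the copy
// was split into chunks, instead of skipping validation entirely.
public class ChecksumValidationSketch {

    public static void compareCheckSums(byte[] sourceChecksum,
                                        byte[] targetChecksum) throws IOException {
        // A null checksum means the filesystem cannot provide one (e.g. some
        // object stores); in that case validation is skipped, as DistCp does.
        if (sourceChecksum == null || targetChecksum == null) {
            return;
        }
        if (!Arrays.equals(sourceChecksum, targetChecksum)) {
            throw new IOException("Checksum mismatch between source and target");
        }
    }

    public static void main(String[] args) throws IOException {
        compareCheckSums(new byte[]{1, 2, 3}, new byte[]{1, 2, 3}); // passes
        try {
            compareCheckSums(new byte[]{1, 2, 3}, new byte[]{9, 9, 9});
            throw new AssertionError("expected a checksum mismatch");
        } catch (IOException expected) {
            System.out.println("mismatch detected");
        }
    }
}
```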



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16158) DistCp to support checksum validation when copy blocks in parallel

2019-04-12 Thread Kai Xie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Xie updated HADOOP-16158:
-
Attachment: HADOOP-16158-005.patch




[jira] [Updated] (HADOOP-16158) DistCp to support checksum validation when copy blocks in parallel

2019-04-12 Thread Kai Xie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Xie updated HADOOP-16158:
-
Status: Patch Available  (was: Open)




[jira] [Updated] (HADOOP-16158) DistCp to support checksum validation when copy blocks in parallel

2019-04-12 Thread Kai Xie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Xie updated HADOOP-16158:
-
Status: Open  (was: Patch Available)




[jira] [Updated] (HADOOP-15218) Make Hadoop compatible with Guava 22.0+

2019-04-12 Thread Igor Dvorzhak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Dvorzhak updated HADOOP-15218:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Make Hadoop compatible with Guava 22.0+
> ---
>
> Key: HADOOP-15218
> URL: https://issues.apache.org/jira/browse/HADOOP-15218
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Igor Dvorzhak
>Assignee: Igor Dvorzhak
>Priority: Major
> Attachments: HADOOP-15218-001.patch
>
>
> The deprecated HostAndPort#getHostText method was deleted in Guava 22.0, and the 
> new HostAndPort#getHost method is not available before Guava 20.0.
> This patch implements a getHost(HostAndPort) method that extracts the host from 
> the HostAndPort#toString value.
> This is a little hacky, which is why I'm not sure the patch is worth merging, 
> but it would be nice if Hadoop were Guava-neutral.
> With this patch, Hadoop can be built against the latest Guava, v24.0.
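A minimal sketch of the toString-based extraction described above; `HostAndPortCompat` and its parsing details are illustrative assumptions, not the actual HADOOP-15218 patch. It relies on Guava's HostAndPort#toString bracketing IPv6 literals:

```java
// Hypothetical sketch of a Guava-version-neutral getHost: parse the host out
// of HostAndPort#toString output ("host", "host:port", or "[ipv6]:port")
// rather than calling getHostText() (removed in Guava 22.0) or getHost()
// (only added in Guava 20.0).
public class HostAndPortCompat {

    public static String getHost(String hostAndPortString) {
        // Bracketed form is an IPv6 literal, e.g. "[2001:db8::1]:8020"
        if (hostAndPortString.startsWith("[")) {
            return hostAndPortString.substring(1, hostAndPortString.indexOf(']'));
        }
        // Otherwise at most one colon separates host and port
        int colon = hostAndPortString.indexOf(':');
        return colon < 0 ? hostAndPortString : hostAndPortString.substring(0, colon);
    }

    public static void main(String[] args) {
        System.out.println(getHost("example.com:8020"));   // example.com
        System.out.println(getHost("example.com"));        // example.com
        System.out.println(getHost("[2001:db8::1]:8020")); // 2001:db8::1
    }
}
```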






[GitHub] [hadoop] arp7 commented on issue #737: HDDS-1198. Rename chill mode to safe mode. Contributed by Siddharth Wagle.

2019-04-12 Thread GitBox
arp7 commented on issue #737: HDDS-1198. Rename chill mode to safe mode. 
Contributed by Siddharth Wagle.
URL: https://github.com/apache/hadoop/pull/737#issuecomment-482777008
 
 
   /retest


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[GitHub] [hadoop] swagle commented on issue #737: HDDS-1198. Rename chill mode to safe mode. Contributed by Siddharth Wagle.

2019-04-12 Thread GitBox
swagle commented on issue #737: HDDS-1198. Rename chill mode to safe mode. 
Contributed by Siddharth Wagle.
URL: https://github.com/apache/hadoop/pull/737#issuecomment-482776760
 
 
   @arp7: The 3 tests failures seem to be due to OOM.





[GitHub] [hadoop] hadoop-yetus commented on issue #737: HDDS-1198. Rename chill mode to safe mode. Contributed by Siddharth Wagle.

2019-04-12 Thread GitBox
hadoop-yetus commented on issue #737: HDDS-1198. Rename chill mode to safe 
mode. Contributed by Siddharth Wagle.
URL: https://github.com/apache/hadoop/pull/737#issuecomment-482776580
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   | :--: | --: | :-- | :-- |
   | 0 | reexec | 93 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 26 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 40 | Maven dependency ordering for branch |
   | -1 | mvninstall | 39 | root in trunk failed. |
   | -1 | compile | 55 | root in trunk failed. |
   | +1 | checkstyle | 234 | trunk passed |
   | -1 | mvnsite | 42 | integration-test in trunk failed. |
   | +1 | shadedclient | 1320 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 307 | trunk passed |
   | +1 | javadoc | 175 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for patch |
   | -1 | mvninstall | 19 | client in the patch failed. |
   | +1 | compile | 1288 | the patch passed |
   | -1 | cc | 1288 | root generated 9 new + 0 unchanged - 0 fixed = 9 total 
(was 0) |
   | -1 | javac | 1288 | root generated 1409 new + 87 unchanged - 0 fixed = 
1496 total (was 87) |
   | +1 | checkstyle | 203 | the patch passed |
   | +1 | mvnsite | 296 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 669 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 423 | the patch passed |
   | +1 | javadoc | 243 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 32 | client in the patch passed. |
   | +1 | unit | 117 | common in the patch passed. |
   | +1 | unit | 140 | server-scm in the patch passed. |
   | +1 | unit | 30 | tools in the patch passed. |
   | +1 | unit | 45 | common in the patch passed. |
   | -1 | unit | 937 | integration-test in the patch failed. |
   | +1 | unit | 80 | ozone-manager in the patch passed. |
   | +1 | unit | 180 | tools in the patch passed. |
   | +1 | asflicense | 55 | The patch does not generate ASF License warnings. |
   | | | 7313 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.pipeline.TestNodeFailure |
   |   | hadoop.hdds.scm.pipeline.TestNode2PipelineMap |
   |   | hadoop.hdds.scm.safemode.TestSCMSafeModeWithPipelineRules |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-737/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/737 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  cc  xml  |
   | uname | Linux a6d78e13341a 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1943db5 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-737/2/artifact/out/branch-mvninstall-root.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-737/2/artifact/out/branch-compile-root.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-737/2/artifact/out/branch-mvnsite-hadoop-ozone_integration-test.txt
 |
   | findbugs | v3.1.0-RC1 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-737/2/artifact/out/patch-mvninstall-hadoop-hdds_client.txt
 |
   | cc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-737/2/artifact/out/diff-compile-cc-root.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-737/2/artifact/out/diff-compile-javac-root.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-737/2/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-737/2/testReport/ |
   | Max. process+thread count | 3078 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/client hadoop-hdds/common hadoop-hdds/server-scm 
hadoop-hdds/tools hadoop-ozone/common hadoop-ozone/integration-test 
hadoop-ozone/ozone-manager hadoop-ozone/tools U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-737/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |

[jira] [Updated] (HADOOP-16158) DistCp to support checksum validation when copy blocks in parallel

2019-04-12 Thread Kai Xie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Xie updated HADOOP-16158:
-
Summary: DistCp to support checksum validation when copy blocks in parallel 
 (was: DistCp to supports checksum validation when copy blocks in parallel)




[jira] [Updated] (HADOOP-15124) Slow FileSystem.Statistics counters implementation

2019-04-12 Thread Igor Dvorzhak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Dvorzhak updated HADOOP-15124:
---
Attachment: HADOOP-15124.001.patch
Status: Patch Available  (was: Open)

> Slow FileSystem.Statistics counters implementation
> --
>
> Key: HADOOP-15124
> URL: https://issues.apache.org/jira/browse/HADOOP-15124
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 3.1.0, 3.0.0, 2.7.5, 2.8.3, 2.9.0
>Reporter: Igor Dvorzhak
>Assignee: Igor Dvorzhak
>Priority: Major
>  Labels: common, filesystem, fs, statistics
> Attachments: HADOOP-15124.001.patch
>
>
> While profiling a 1TB TeraGen job on a Hadoop 2.8.2 cluster (Google Dataproc, 2 
> workers, GCS connector) I saw that FileSystem.Statistics code paths accounted 
> for 5.58% of wall time and 26.5% of CPU time of the total execution time.
> After switching the FileSystem.Statistics implementation to LongAdder, the 
> consumed wall time decreased to 0.006% and CPU time to 0.104% of total 
> execution time.
> Total job runtime decreased from 66 mins to 61 mins.
> These results are not conclusive, because I didn't run the benchmark multiple 
> times to average the results, but regardless of the performance gains, 
> switching to LongAdder simplifies the code and reduces its complexity.
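The LongAdder approach can be sketched as follows; `StatisticsCounterSketch` and its method names are a hypothetical illustration, not the actual Hadoop counter class. LongAdder keeps per-thread cells and sums them on read, so contended increments from many task threads stay cheap at the cost of a slightly weaker read, which is fine for statistics:

```java
import java.util.concurrent.atomic.LongAdder;

// Illustrative sketch: a statistics counter backed by LongAdder instead of a
// synchronized or volatile long field.
public class StatisticsCounterSketch {
    private final LongAdder bytesRead = new LongAdder();

    public void incrementBytesRead(long newBytes) {
        bytesRead.add(newBytes); // no lock, no CAS retry storm under contention
    }

    public long getBytesRead() {
        return bytesRead.sum(); // sums per-thread cells; exact once writers quiesce
    }

    public static void main(String[] args) throws InterruptedException {
        StatisticsCounterSketch stats = new StatisticsCounterSketch();
        Thread[] threads = new Thread[8];
        for (int t = 0; t < threads.length; t++) {
            threads[t] = new Thread(() -> {
                for (int i = 0; i < 100_000; i++) {
                    stats.incrementBytesRead(1);
                }
            });
            threads[t].start();
        }
        for (Thread thread : threads) {
            thread.join();
        }
        System.out.println(stats.getBytesRead()); // 800000
    }
}
```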






[jira] [Updated] (HADOOP-15124) Slow FileSystem.Statistics counters implementation

2019-04-12 Thread Igor Dvorzhak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Dvorzhak updated HADOOP-15124:
---
Attachment: (was: HADOOP-15124.001.patch)




[jira] [Updated] (HADOOP-15124) Slow FileSystem.Statistics counters implementation

2019-04-12 Thread Igor Dvorzhak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Dvorzhak updated HADOOP-15124:
---
Status: Open  (was: Patch Available)




[GitHub] [hadoop] hadoop-yetus commented on issue #703: HDDS-1371. Download RocksDB checkpoint from OM Leader to Follower.

2019-04-12 Thread GitBox
hadoop-yetus commented on issue #703: HDDS-1371. Download RocksDB checkpoint 
from OM Leader to Follower.
URL: https://github.com/apache/hadoop/pull/703#issuecomment-482771265
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   | :--: | --: | :-- | :-- |
   | 0 | reexec | 46 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 60 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1138 | trunk passed |
   | +1 | compile | 1383 | trunk passed |
   | +1 | checkstyle | 268 | trunk passed |
   | +1 | mvnsite | 366 | trunk passed |
   | +1 | shadedclient | 1560 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 352 | trunk passed |
   | +1 | javadoc | 258 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for patch |
   | -1 | mvninstall | 27 | integration-test in the patch failed. |
   | +1 | compile | 1408 | the patch passed |
   | +1 | javac | 1408 | the patch passed |
   | +1 | checkstyle | 237 | the patch passed |
   | +1 | mvnsite | 291 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 810 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 372 | the patch passed |
   | +1 | javadoc | 243 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 112 | common in the patch passed. |
   | +1 | unit | 44 | client in the patch passed. |
   | +1 | unit | 55 | common in the patch passed. |
   | -1 | unit | 1167 | integration-test in the patch failed. |
   | +1 | unit | 82 | ozone-manager in the patch passed. |
   | +1 | unit | 81 | ozone-recon in the patch passed. |
   | +1 | asflicense | 84 | The patch does not generate ASF License warnings. |
   | | | 10358 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.web.client.TestKeys |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.TestOzoneConfigurationFields |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-703/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/703 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
   | uname | Linux eedce58a368f 4.4.0-141-generic #167~14.04.1-Ubuntu SMP Mon 
Dec 10 13:20:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 5379d85 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-703/2/artifact/out/patch-mvninstall-hadoop-ozone_integration-test.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-703/2/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-703/2/testReport/ |
   | Max. process+thread count | 3938 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/client hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager 
hadoop-ozone/ozone-recon U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-703/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] arp7 commented on issue #737: HDDS-1198. Rename chill mode to safe mode. Contributed by Siddharth Wagle.

2019-04-12 Thread GitBox
arp7 commented on issue #737: HDDS-1198. Rename chill mode to safe mode. 
Contributed by Siddharth Wagle.
URL: https://github.com/apache/hadoop/pull/737#issuecomment-482770606
 
 
   Looks like the anzix CI run did not get triggered. Reapplied label to 
trigger it.





[GitHub] [hadoop] swagle commented on issue #737: HDDS-1198. Rename chill mode to safe mode. Contributed by Siddharth Wagle.

2019-04-12 Thread GitBox
swagle commented on issue #737: HDDS-1198. Rename chill mode to safe mode. 
Contributed by Siddharth Wagle.
URL: https://github.com/apache/hadoop/pull/737#issuecomment-482768610
 
 
   Manually ran and verified all of the above tests; the following test is an 
actual issue, and I'm updating the patch with a fix:
   hadoop.hdds.scm.server.TestSCMClientProtocolServer





[GitHub] [hadoop] hadoop-yetus commented on issue #737: HDDS-1198. Rename chill mode to safe mode. Contributed by Siddharth Wagle.

2019-04-12 Thread GitBox
hadoop-yetus commented on issue #737: HDDS-1198. Rename chill mode to safe 
mode. Contributed by Siddharth Wagle.
URL: https://github.com/apache/hadoop/pull/737#issuecomment-482767473
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   | :--: | --: | :-- | :-- |
   | 0 | reexec | 21 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 26 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 76 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1183 | trunk passed |
   | +1 | compile | 2225 | trunk passed |
   | +1 | checkstyle | 419 | trunk passed |
   | -1 | mvnsite | 83 | integration-test in trunk failed. |
   | +1 | shadedclient | 2211 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 843 | trunk passed |
   | +1 | javadoc | 1193 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 106 | Maven dependency ordering for patch |
   | -1 | mvninstall | 62 | client in the patch failed. |
   | +1 | compile | 1545 | the patch passed |
   | +1 | cc | 1545 | the patch passed |
   | +1 | javac | 1545 | the patch passed |
   | +1 | checkstyle | 210 | the patch passed |
   | +1 | mvnsite | 300 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 726 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 426 | the patch passed |
   | +1 | javadoc | 252 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 31 | client in the patch passed. |
   | +1 | unit | 90 | common in the patch passed. |
   | -1 | unit | 100 | server-scm in the patch failed. |
   | +1 | unit | 30 | tools in the patch passed. |
   | +1 | unit | 45 | common in the patch passed. |
   | -1 | unit | 846 | integration-test in the patch failed. |
   | +1 | unit | 53 | ozone-manager in the patch passed. |
   | +1 | unit | 88 | tools in the patch passed. |
   | +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
   | | | 13802 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.server.TestSCMClientProtocolServer |
   |   | hadoop.ozone.web.client.TestKeysRatis |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-737/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/737 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  cc  xml  |
   | uname | Linux 8112f4ef17e6 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed 
Feb 13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 62f4808 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-737/1/artifact/out/branch-mvnsite-hadoop-ozone_integration-test.txt
 |
   | findbugs | v3.1.0-RC1 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-737/1/artifact/out/patch-mvninstall-hadoop-hdds_client.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-737/1/artifact/out/patch-unit-hadoop-hdds_server-scm.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-737/1/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-737/1/testReport/ |
   | Max. process+thread count | 4328 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/client hadoop-hdds/common hadoop-hdds/server-scm 
hadoop-hdds/tools hadoop-ozone/common hadoop-ozone/integration-test 
hadoop-ozone/ozone-manager hadoop-ozone/tools U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-737/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

[jira] [Commented] (HADOOP-16237) Fix new findbugs issues after update guava to 27.0-jre in hadoop-project trunk

2019-04-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16816791#comment-16816791
 ] 

Hudson commented on HADOOP-16237:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16402 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16402/])
HADOOP-16237. Fix new findbugs issues after updating guava to 27.0-jre. 
(stevel: rev 1943db557124439f9f41c18a618455ccf4c3e6cc)
* (edit) hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-documentstore/src/main/java/org/apache/hadoop/yarn/server/timelineservice/documentstore/writer/cosmosdb/CosmosDBDocumentStoreWriter.java
* (edit) hadoop-common-project/hadoop-kms/dev-support/findbugsExcludeFile.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-documentstore/src/main/java/org/apache/hadoop/yarn/server/timelineservice/documentstore/reader/cosmosdb/CosmosDBDocumentStoreReader.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-documentstore/src/main/java/org/apache/hadoop/yarn/server/timelineservice/documentstore/collection/document/flowrun/FlowRunDocument.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-documentstore/src/main/java/org/apache/hadoop/yarn/server/timelineservice/documentstore/collection/document/entity/TimelineEntityDocument.java


> Fix new findbugs issues after update guava to 27.0-jre in hadoop-project trunk
> --
>
> Key: HADOOP-16237
> URL: https://issues.apache.org/jira/browse/HADOOP-16237
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Fix For: 3.3.0
>
> Attachments: HADOOP-16237.001.patch, HADOOP-16237.002.patch, 
> HADOOP-16237.003.patch, 
> branch-findbugs-hadoop-common-project_hadoop-kms-warnings.html, 
> branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html,
>  
> branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-warnings.html,
>  
> branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-documentstore-warnings.html
>
>
> There are a bunch of new findbugs issues in the build after committing the 
> guava update.
> Most of them are in YARN, but we have to check and handle all of them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14747) S3AInputStream to implement CanUnbuffer

2019-04-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16816783#comment-16816783
 ] 

Hudson commented on HADOOP-14747:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16401 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16401/])
HADOOP-14747. S3AInputStream to implement CanUnbuffer. (stevel: rev 
2382f63fc0bb4108f3f3c542b4be7c04fbedd7c4)
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/contract/hdfs/TestHDFSContractUnbuffer.java
* (add) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AUnbuffer.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractOptions.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/resources/contract/hdfs.xml
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java
* (add) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AUnbuffer.java
* (edit) 
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
* (add) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractUnbufferTest.java
* (edit) hadoop-tools/hadoop-aws/src/test/resources/contract/s3a.xml
* (add) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractUnbuffer.java


> S3AInputStream to implement CanUnbuffer
> ---
>
> Key: HADOOP-14747
> URL: https://issues.apache.org/jira/browse/HADOOP-14747
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Assignee: Sahil Takiar
>Priority: Major
> Fix For: 3.3.0
>
>
> HBase relies on FileSystems implementing {{CanUnbuffer.unbuffer()}} to force 
> input streams to free up remote connections (HBASE-9393). This works for 
> HDFS, but not elsewhere.
> S3A input stream can implement {{CanUnbuffer.unbuffer()}} by closing the 
> input stream and relying on lazy seek to reopen it on demand.
> Needs
> * Contract specification of unbuffer. As in "who added a new feature to 
> filesystems but forgot to mention what it should do?"
> * Contract test for filesystems which declare their support. 
> * S3AInputStream to call {{closeStream()}} on a call to {{unbuffer()}}.
> * Test case
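
The close-and-lazily-reopen idea in the description above can be sketched in a few lines. This is a minimal illustrative model, not the real `S3AInputStream` (class and method names here are hypothetical); it only shows the pattern: `unbuffer()` drops the underlying stream to free the connection, and the next read reopens it at the saved position.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

/**
 * Sketch of the unbuffer pattern: release the wrapped stream on
 * unbuffer(), remember only the position, and lazily reopen on the
 * next read. A byte array stands in for the remote object.
 */
public class LazyReopenStream {
  private final byte[] source;   // stands in for the remote object
  private InputStream wrapped;   // null while "unbuffered"
  private int pos;

  public LazyReopenStream(byte[] source) {
    this.source = source;
  }

  /** Release the underlying "connection"; keep only the position. */
  public void unbuffer() throws IOException {
    if (wrapped != null) {
      wrapped.close();
      wrapped = null;
    }
  }

  /** Lazy seek: reopen at the saved offset only when data is needed. */
  public int read() throws IOException {
    if (wrapped == null) {
      wrapped = new ByteArrayInputStream(source);
      wrapped.skip(pos);
    }
    int b = wrapped.read();
    if (b >= 0) {
      pos++;
    }
    return b;
  }

  public boolean isUnbuffered() {
    return wrapped == null;
  }

  public static void main(String[] args) throws IOException {
    LazyReopenStream in = new LazyReopenStream(new byte[]{10, 20, 30});
    System.out.println(in.read());         // 10
    in.unbuffer();                         // frees the stream
    System.out.println(in.isUnbuffered()); // true
    System.out.println(in.read());         // 20 -- reopened at saved position
  }
}
```

Because the caller only ever sees `read()`, releasing the connection is invisible apart from the cost of the reopen, which is why HBase-style callers can unbuffer idle streams aggressively.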






[GitHub] [hadoop] steveloughran commented on issue #710: HADOOP-16237. Fix new findbugs issues after update guava to 27.0-jre …

2019-04-12 Thread GitBox
steveloughran commented on issue #710: HADOOP-16237. Fix new findbugs issues 
after update guava to 27.0-jre …
URL: https://github.com/apache/hadoop/pull/710#issuecomment-482765184
 
 
   committed to trunk





[jira] [Updated] (HADOOP-16237) Fix new findbugs issues after update guava to 27.0-jre in hadoop-project trunk

2019-04-12 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16237:

   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

+1, committed. Thanks

> Fix new findbugs issues after update guava to 27.0-jre in hadoop-project trunk
> --
>
> Key: HADOOP-16237
> URL: https://issues.apache.org/jira/browse/HADOOP-16237
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Fix For: 3.3.0
>
> Attachments: HADOOP-16237.001.patch, HADOOP-16237.002.patch, 
> HADOOP-16237.003.patch, 
> branch-findbugs-hadoop-common-project_hadoop-kms-warnings.html, 
> branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html,
>  
> branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-warnings.html,
>  
> branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-documentstore-warnings.html
>
>
> There are a bunch of new findbugs issues in the build after committing the 
> guava update.
> Most of them are in YARN, but we have to check and handle all of them.






[jira] [Commented] (HADOOP-16247) NPE in FsUrlConnection

2019-04-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16816780#comment-16816780
 ] 

Hadoop QA commented on HADOOP-16247:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 47s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 97m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestFsUrlConnectionPath |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-16247 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12965773/HADOOP-16247-001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4f12971237f1 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon Mar 
18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5379d85 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16149/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16149/testReport/ |
| Max. process+thread count | 1397 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16149/console |
| Powered by | Apache Yetus

[GitHub] [hadoop] steveloughran commented on issue #690: HADOOP-14747: S3AInputStream to implement CanUnbuffer

2019-04-12 Thread GitBox
steveloughran commented on issue #690: HADOOP-14747: S3AInputStream to 
implement CanUnbuffer
URL: https://github.com/apache/hadoop/pull/690#issuecomment-482764337
 
 
   merged. Thanks!





[GitHub] [hadoop] steveloughran closed pull request #690: HADOOP-14747: S3AInputStream to implement CanUnbuffer

2019-04-12 Thread GitBox
steveloughran closed pull request #690: HADOOP-14747: S3AInputStream to 
implement CanUnbuffer
URL: https://github.com/apache/hadoop/pull/690
 
 
   





[jira] [Updated] (HADOOP-14747) S3AInputStream to implement CanUnbuffer

2019-04-12 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14747:

   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Reviewed, no issues; liked the docs & new tests. +1, committed to trunk.

No reason why this can't be backported to branch-3.2.

> S3AInputStream to implement CanUnbuffer
> ---
>
> Key: HADOOP-14747
> URL: https://issues.apache.org/jira/browse/HADOOP-14747
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Assignee: Sahil Takiar
>Priority: Major
> Fix For: 3.3.0
>
>
> HBase relies on FileSystems implementing {{CanUnbuffer.unbuffer()}} to force 
> input streams to free up remote connections (HBASE-9393). This works for 
> HDFS, but not elsewhere.
> S3A input stream can implement {{CanUnbuffer.unbuffer()}} by closing the 
> input stream and relying on lazy seek to reopen it on demand.
> Needs
> * Contract specification of unbuffer. As in "who added a new feature to 
> filesystems but forgot to mention what it should do?"
> * Contract test for filesystems which declare their support. 
> * S3AInputStream to call {{closeStream()}} on a call to {{unbuffer()}}.
> * Test case






[GitHub] [hadoop] steveloughran commented on issue #690: HADOOP-14747: S3AInputStream to implement CanUnbuffer

2019-04-12 Thread GitBox
steveloughran commented on issue #690: HADOOP-14747: S3AInputStream to 
implement CanUnbuffer
URL: https://github.com/apache/hadoop/pull/690#issuecomment-482763776
 
 
   reviewed, no issues, liked the docs & new tests. +1,  committed





[GitHub] [hadoop] steveloughran commented on issue #716: HADOOP-16205 Backporting ABFS driver from trunk to branch 2.0

2019-04-12 Thread GitBox
steveloughran commented on issue #716: HADOOP-16205 Backporting ABFS driver 
from trunk to branch 2.0
URL: https://github.com/apache/hadoop/pull/716#issuecomment-482759642
 
 
   -1 as is
   
   I'm not worried about the abfs changes themselves: that's what the testing is 
for. I've mostly been reviewing the changes outside the hadoop-azure module, as 
that's where the risk of side effects is highest. 
   
   * There's some stuff on Erasure Coding (see Ls) which seems spurious and is 
mostly commented out anyway.
   * `AbstractContractAppendTest` needs HADOOP-15744 applied to it
   * The move away from `intercept()` in the tests means that they aren't 
actually stressing things (e.g. AbstractContractConcatTest)
   
   I can see why you've tried to move off `intercept()`, with Java 8 as the 
target language, but the method can be used with Callable<> operations, so it 
does work with Java 7. In fact, we started adding it for Java 7 in 
[HADOOP-13716](https://issues.apache.org/jira/browse/HADOOP-13716).
   If you move back to the intercept calls, you can use IntelliJ to convert the 
lambda expressions to Callables at the click of a mouse (one caveat: it needs to 
think the language is still Java 8, so do the conversion before changing the 
language version to Java 7)
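   
   The Callable-based intercept pattern referred to above can be sketched as 
follows. This is a hypothetical minimal helper for illustration, not Hadoop's 
actual `LambdaTestUtils` API; it shows why the idiom works under Java 7: the 
operation is an anonymous `Callable` rather than a lambda, and the helper fails 
the test if the expected exception is not raised.

```java
import java.util.concurrent.Callable;

/**
 * Minimal sketch of an intercept() helper: run an operation and
 * require that it throws the expected exception type.
 */
public class InterceptSketch {

  /** Returns the caught exception so the test can inspect it. */
  public static <E extends Throwable> E intercept(
      Class<E> expected, Callable<?> operation) throws Exception {
    Object result;
    try {
      result = operation.call();
    } catch (Throwable t) {
      if (expected.isInstance(t)) {
        return expected.cast(t);   // the failure the test wanted
      }
      throw new AssertionError("Wrong exception type: " + t, t);
    }
    // Getting here means the operation succeeded: that is the test failure.
    throw new AssertionError(
        "Expected " + expected.getSimpleName() + " but got: " + result);
  }

  public static void main(String[] args) throws Exception {
    // Java 7 style: an anonymous Callable rather than a lambda.
    IllegalStateException ex = intercept(IllegalStateException.class,
        new Callable<Object>() {
          @Override
          public Object call() {
            throw new IllegalStateException("boom");
          }
        });
    System.out.println(ex.getMessage());   // boom
  }
}
```

Unlike JUnit's `@Test(expected = ...)`, this keeps the expectation scoped to 
one call and returns the exception for further assertions, which is why the 
review prefers it.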
   





[GitHub] [hadoop] steveloughran commented on a change in pull request #716: HADOOP-16205 Backporting ABFS driver from trunk to branch 2.0

2019-04-12 Thread GitBox
steveloughran commented on a change in pull request #716: HADOOP-16205 
Backporting ABFS driver from trunk to branch 2.0
URL: https://github.com/apache/hadoop/pull/716#discussion_r275095875
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFileSerialization.java
 ##
 @@ -49,6 +49,7 @@ public void tearDown() throws Exception {
 fs.close();
   }
 
+  @SuppressWarnings("deprecation")
 
 Review comment:
   I'd leave this out (or make a PR for trunk covering it and 
TestSequenceFileSync on its own)





[GitHub] [hadoop] hanishakoneru commented on issue #703: HDDS-1371. Download RocksDB checkpoint from OM Leader to Follower.

2019-04-12 Thread GitBox
hanishakoneru commented on issue #703: HDDS-1371. Download RocksDB checkpoint 
from OM Leader to Follower.
URL: https://github.com/apache/hadoop/pull/703#issuecomment-482758497
 
 
   Thank you @arp7 for the review. Addressed your review comments and added a 
unit test.





[GitHub] [hadoop] steveloughran commented on a change in pull request #716: HADOOP-16205 Backporting ABFS driver from trunk to branch 2.0

2019-04-12 Thread GitBox
steveloughran commented on a change in pull request #716: HADOOP-16205 
Backporting ABFS driver from trunk to branch 2.0
URL: https://github.com/apache/hadoop/pull/716#discussion_r275093750
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsClient.java
 ##
 @@ -0,0 +1,83 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import java.util.UUID;
+
+import org.junit.Assert;
+import org.junit.Ignore;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.ExpectedException;
+
+import org.apache.hadoop.fs.FileSystem;
+import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsRestOperationException;
+import org.apache.hadoop.fs.azurebfs.services.AbfsClient;
+import org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation;
+
+import static 
org.apache.hadoop.fs.CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY;
+import static 
org.apache.hadoop.fs.azurebfs.constants.TestConfigurationKeys.FS_AZURE_ACCOUNT_KEY;
+/*
 
 Review comment:
   remove the commented-out import now that you've replaced its use in the test case





[GitHub] [hadoop] steveloughran commented on a change in pull request #716: HADOOP-16205 Backporting ABFS driver from trunk to branch 2.0

2019-04-12 Thread GitBox
steveloughran commented on a change in pull request #716: HADOOP-16205 
Backporting ABFS driver from trunk to branch 2.0
URL: https://github.com/apache/hadoop/pull/716#discussion_r275093149
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestLs.java
 ##
 @@ -84,6 +84,7 @@ public void processOptionsNone() throws IOException {
 assertFalse(ls.isOrderSize());
 assertFalse(ls.isOrderTime());
 assertFalse(ls.isUseAtime());
+assertFalse(ls.isDisplayECPolicy());
 
 Review comment:
   This is the EC policy patch again. I don't see why abfs needs it at all.





[GitHub] [hadoop] steveloughran commented on a change in pull request #716: HADOOP-16205 Backporting ABFS driver from trunk to branch 2.0

2019-04-12 Thread GitBox
steveloughran commented on a change in pull request #716: HADOOP-16205 
Backporting ABFS driver from trunk to branch 2.0
URL: https://github.com/apache/hadoop/pull/716#discussion_r275092795
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractGetFileStatusTest.java
 ##
 @@ -272,38 +275,22 @@ public void testListFilesNoDir() throws Throwable {
 }
   }
 
-  @Test
+  @Test (expected = FileNotFoundException.class)
   public void testLocatedStatusNoDir() throws Throwable {
 describe("test the LocatedStatus call on a path which is not present");
-try {
-  RemoteIterator iterator
-  = getFileSystem().listLocatedStatus(path("missing"));
-  fail("Expected an exception, got an iterator: " + iterator);
-} catch (FileNotFoundException expected) {
-  // expected
-}
+getFileSystem().listLocatedStatus(path("missing"));
 
 Review comment:
   I'd prefer retaining the intercept() code of trunk. I know Java 7 hates 
lambda expressions, but we've done the backport elsewhere: see 
org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase for some examples. 
IntelliJ IDEA will actually do the conversion from a lambda expression to a 
Callable for you if you ask it nicely





[GitHub] [hadoop] steveloughran commented on a change in pull request #716: HADOOP-16205 Backporting ABFS driver from trunk to branch 2.0

2019-04-12 Thread GitBox
steveloughran commented on a change in pull request #716: HADOOP-16205 
Backporting ABFS driver from trunk to branch 2.0
URL: https://github.com/apache/hadoop/pull/716#discussion_r275092799
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractGetFileStatusTest.java
 ##
 @@ -272,38 +275,22 @@ public void testListFilesNoDir() throws Throwable {
 }
   }
 
-  @Test
+  @Test (expected = FileNotFoundException.class)
   public void testLocatedStatusNoDir() throws Throwable {
 describe("test the LocatedStatus call on a path which is not present");
-try {
-  RemoteIterator iterator
-  = getFileSystem().listLocatedStatus(path("missing"));
-  fail("Expected an exception, got an iterator: " + iterator);
-} catch (FileNotFoundException expected) {
-  // expected
-}
+getFileSystem().listLocatedStatus(path("missing"));
 
 Review comment:
   I'd prefer retaining the intercept() code of trunk. I know Java 7 hates 
lambda expressions, but we've done the backport elsewhere: see 
org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase for some examples. 
IntelliJ IDEA will actually do the conversion from a lambda expression to a 
Callable for you if you ask it nicely






[GitHub] [hadoop] steveloughran commented on a change in pull request #716: HADOOP-16205 Backporting ABFS driver from trunk to branch 2.0

2019-04-12 Thread GitBox
steveloughran commented on a change in pull request #716: HADOOP-16205 
Backporting ABFS driver from trunk to branch 2.0
URL: https://github.com/apache/hadoop/pull/716#discussion_r275092059
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractAppendTest.java
 ##
 @@ -149,4 +150,31 @@ public void testRenameFileBeingAppended() throws 
Throwable {
  dataset.length);
 ContractTestUtils.compareByteArrays(dataset, bytes, dataset.length);
   }
+
 
 Review comment:
   HADOOP-15744 needs to be applied to this branch; else HDFS tests will break.





[GitHub] [hadoop] steveloughran commented on a change in pull request #716: HADOOP-16205 Backporting ABFS driver from trunk to branch 2.0

2019-04-12 Thread GitBox
steveloughran commented on a change in pull request #716: HADOOP-16205 
Backporting ABFS driver from trunk to branch 2.0
URL: https://github.com/apache/hadoop/pull/716#discussion_r275091519
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractAppendTest.java
 ##
 @@ -117,13 +122,9 @@ public void testBuilderAppendToExistingFile() throws 
Throwable {
   @Test
   public void testAppendMissingTarget() throws Throwable {
 try {
-  FSDataOutputStream out = getFileSystem().append(target);
-  //got here: trouble
-  out.close();
-  fail("expected a failure");
-} catch (Exception e) {
-  //expected
-  handleExpectedException(e);
+  getFileSystem().append(target).close();
+} catch (Exception ex) {
+  handleExpectedException(ex);
 
 Review comment:
   again, the test is no longer a test of the behavior of any filesystem.





[GitHub] [hadoop] steveloughran commented on a change in pull request #716: HADOOP-16205 Backporting ABFS driver from trunk to branch 2.0

2019-04-12 Thread GitBox
steveloughran commented on a change in pull request #716: HADOOP-16205 
Backporting ABFS driver from trunk to branch 2.0
URL: https://github.com/apache/hadoop/pull/716#discussion_r275091436
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractAppendTest.java
 ##
 @@ -75,14 +83,11 @@ public void testBuilderAppendToEmptyFile() throws 
Throwable {
 
   @Test
   public void testAppendNonexistentFile() throws Throwable {
+//expected
 try {
-  FSDataOutputStream out = getFileSystem().append(target);
-  //got here: trouble
-  out.close();
-  fail("expected a failure");
-} catch (Exception e) {
-  //expected
-  handleExpectedException(e);
+  getFileSystem().append(target).close();
+} catch (Exception ex) {
 
 Review comment:
   this removes the entire failure path of the test: the assertion that 
appending to a nonexistent file raises an error. Why did you make this change?
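
A generic sketch of the pattern being defended, with a hypothetical stand-in for the filesystem call; the explicit failure after the operation is what makes a silently successful append a test failure:

```java
import java.io.IOException;

public class FailurePathDemo {
    // Stand-in for getFileSystem().append(target): the operation under test,
    // which is expected to throw for a nonexistent target.
    static void appendToMissingTarget() throws IOException {
        throw new IOException("target does not exist");
    }

    // The pattern the review asks to keep: fail if no exception is raised.
    static boolean expectFailure() {
        try {
            appendToMissingTarget();
            // Without this line, a silently "successful" append would pass the test.
            throw new AssertionError("expected a failure");
        } catch (IOException expected) {
            return true; // the failure path the test exists to verify
        }
    }

    public static void main(String[] args) {
        System.out.println(expectFailure()); // prints "true"
    }
}
```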





[GitHub] [hadoop] steveloughran commented on a change in pull request #716: HADOOP-16205 Backporting ABFS driver from trunk to branch 2.0

2019-04-12 Thread GitBox
steveloughran commented on a change in pull request #716: HADOOP-16205 
Backporting ABFS driver from trunk to branch 2.0
URL: https://github.com/apache/hadoop/pull/716#discussion_r275091139
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Ls.java
 ##
 @@ -245,9 +263,27 @@ protected void processPath(PathData item) throws 
IOException {
   return;
 }
 FileStatus stat = item.stat;
+
 
 Review comment:
   this is cherry-picking in, and then commenting out, something which seems 
unrelated.





[GitHub] [hadoop] steveloughran commented on a change in pull request #716: HADOOP-16205 Backporting ABFS driver from trunk to branch 2.0

2019-04-12 Thread GitBox
steveloughran commented on a change in pull request #716: HADOOP-16205 
Backporting ABFS driver from trunk to branch 2.0
URL: https://github.com/apache/hadoop/pull/716#discussion_r275090911
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java
 ##
 @@ -201,6 +201,15 @@ public FsPermission getPermission() {
 return permission;
   }
 
+  /**
 
 Review comment:
   This comes in as part of HADOOP-14223. Is it needed? As this is such a core class, 
I'm nervous about making changes here unless they stay consistent with branch-3.





[GitHub] [hadoop] steveloughran commented on a change in pull request #716: HADOOP-16205 Backporting ABFS driver from trunk to branch 2.0

2019-04-12 Thread GitBox
steveloughran commented on a change in pull request #716: HADOOP-16205 
Backporting ABFS driver from trunk to branch 2.0
URL: https://github.com/apache/hadoop/pull/716#discussion_r275090486
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md
 ##
 @@ -415,6 +415,7 @@ Options:
 * -S: Sort output by file size.
 * -r: Reverse the sort order.
 * -u: Use access time rather than modification time for display and sorting.  
+* -e: Display the erasure coding policy of files and directories only.
 
 Review comment:
   this is from https://issues.apache.org/jira/browse/HDFS-11647 ; unless the 
whole patch is to be backported, it should be left out





[jira] [Updated] (HADOOP-16247) NPE in FsUrlConnection

2019-04-12 Thread Karthik Palanisamy (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy updated HADOOP-16247:

Attachment: HADOOP-16247-001.patch
Status: Patch Available  (was: Open)

> NPE in FsUrlConnection
> --
>
> Key: HADOOP-16247
> URL: https://issues.apache.org/jira/browse/HADOOP-16247
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.1.2
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Major
> Attachments: HADOOP-16247-001.patch
>
>
> FsUrlConnection doesn't handle relativePath correctly after the change 
> [HADOOP-15217|https://issues.apache.org/jira/browse/HADOOP-15217]
> {code}
> Exception in thread "main" java.lang.NullPointerException
>  at org.apache.hadoop.fs.Path.isUriPathAbsolute(Path.java:385)
>  at org.apache.hadoop.fs.Path.isAbsolute(Path.java:395)
>  at 
> org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:87)
>  at 
> org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:636)
>  at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:930)
>  at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631)
>  at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:454)
>  at 
> org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.(ChecksumFileSystem.java:146)
>  at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:347)
>  at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:899)
>  at org.apache.hadoop.fs.FsUrlConnection.connect(FsUrlConnection.java:62)
>  at 
> org.apache.hadoop.fs.FsUrlConnection.getInputStream(FsUrlConnection.java:71)
>  at java.net.URL.openStream(URL.java:1045)
>  at UrlProblem.testRelativePath(UrlProblem.java:33)
>  at UrlProblem.main(UrlProblem.java:19)
> {code}
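
The null path in the trace is consistent with how java.net.URI handles opaque `file:` URIs; a plain-JDK sketch of the underlying behavior (not the Hadoop code itself):

```java
import java.net.URI;

public class OpaqueUriDemo {
    // Returns the URI's path component, or "error" for bad syntax.
    static String pathOf(String uri) {
        try {
            return new URI(uri).getPath();
        } catch (Exception e) {
            return "error";
        }
    }

    public static void main(String[] args) {
        // A hierarchical URI has a path component.
        System.out.println(pathOf("file:///tmp/core-site.xml")); // /tmp/core-site.xml
        // "file:core-site.xml" is an opaque URI (no authority, and its
        // scheme-specific part does not start with '/'), so getPath() is null.
        System.out.println(pathOf("file:core-site.xml"));        // null
        // Any code that calls a method on that null path, as
        // Path.isUriPathAbsolute effectively does, hits an NPE.
    }
}
```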



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Commented] (HADOOP-16205) Backporting ABFS driver from trunk to branch 2.0

2019-04-12 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16816722#comment-16816722
 ] 

Gabor Bota commented on HADOOP-16205:
-

I've checked it with another account and it's working.

> Backporting ABFS driver from trunk to branch 2.0
> 
>
> Key: HADOOP-16205
> URL: https://issues.apache.org/jira/browse/HADOOP-16205
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.2
>Reporter: Esfandiar Manii
>Assignee: Yuan Gao
>Priority: Major
> Attachments: HADOOP-16205-branch-2-001.patch, 
> HADOOP-16205-branch-2-002.patch
>
>
> Back porting ABFS driver from trunk to 2.0






[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #737: HDDS-1198. Rename chill mode to safe mode. Contributed by Siddharth Wagle.

2019-04-12 Thread GitBox
bharatviswa504 commented on a change in pull request #737: HDDS-1198. Rename 
chill mode to safe mode. Contributed by Siddharth Wagle.
URL: https://github.com/apache/hadoop/pull/737#discussion_r275082943
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocolPB/StorageContainerLocationProtocolClientSideTranslatorPB.java
 ##
 @@ -23,12 +23,12 @@
 import 
org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.ListPipelineRequestProto;
 import 
org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.ListPipelineResponseProto;
 import 
org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.ClosePipelineRequestProto;
-import 
org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.ForceExitChillModeRequestProto;
-import 
org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.ForceExitChillModeResponseProto;
 
 Review comment:
   Line length greater than 80.





[GitHub] [hadoop] arp7 merged pull request #732: HDDS-1387. ConcurrentModificationException in TestMiniChaosOzoneCluster

2019-04-12 Thread GitBox
arp7 merged pull request #732: HDDS-1387. ConcurrentModificationException in 
TestMiniChaosOzoneCluster
URL: https://github.com/apache/hadoop/pull/732
 
 
   





[jira] [Updated] (HADOOP-16158) DistCp to supports checksum validation when copy blocks in parallel

2019-04-12 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16158:

Affects Version/s: 3.0.3
   3.1.2

> DistCp to supports checksum validation when copy blocks in parallel
> ---
>
> Key: HADOOP-16158
> URL: https://issues.apache.org/jira/browse/HADOOP-16158
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 3.2.0, 2.9.2, 3.0.3, 3.1.2
>Reporter: Kai Xie
>Assignee: Kai Xie
>Priority: Major
> Attachments: HADOOP-16158-001.patch, HADOOP-16158-002.patch, 
> HADOOP-16158-003.patch, HADOOP-16158-004.patch
>
>
> Copying blocks in parallel (enabled when blocks per chunk > 0) is a great 
> DistCp improvement that can hugely speed up copying big files. 
> But its checksum validation is skipped, e.g. in 
> `RetriableFileCopyCommand.java`
>  
> {code:java}
> if (!source.isSplit()) {
>   compareCheckSums(sourceFS, source.getPath(), sourceChecksum,
>   targetFS, targetPath);
> }
> {code}
> and this could result in checksum/data mismatch without notifying 
> developers/users (e.g. HADOOP-16049).
> I'd like to provide a patch to add the checksum validation.
>  






[jira] [Commented] (HADOOP-16158) DistCp to supports checksum validation when copy blocks in parallel

2019-04-12 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16816703#comment-16816703
 ] 

Steve Loughran commented on HADOOP-16158:
-

Production code looks good; no concerns there.


TestCopyCommitter.java


* can you add the new imports in the org.apache part of the import section? 
Normally the org.apache.* block should have an empty line between it and the 
others, but I see this test already breaks that; just add the new imports 
inside the hadoop ones so it doesn't get any more confused.

* L420: should just catch IOException
* L423: rethrow the exception instead of raising a failure but losing the root 
cause.

* L435: Nice bit of work in {{createSrcAndWorkFilesWithDifferentChecksum}}. Can 
you add some javadocs explaining how it does this for our successors to 
understand. It took me a moment to work out, and I know (vaguely) how checksums 
work.

Now that {{compareFileLengthsAndChecksums}} is isolated to a static method, can 
you add a test which checks what it does in isolation, again with 
{{createSrcAndWorkFilesWithDifferentChecksum}}? This will isolate regressions. 
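
The idea behind {{createSrcAndWorkFilesWithDifferentChecksum}} (two files that pass a length comparison but fail a checksum comparison) can be sketched with plain JDK CRC32; DistCp itself compares FileChecksum values obtained from the filesystems rather than raw CRC32:

```java
import java.util.zip.CRC32;

public class ChecksumMismatchDemo {
    // CRC32 checksum of a byte array.
    static long crc(byte[] data) {
        CRC32 c = new CRC32();
        c.update(data, 0, data.length);
        return c.getValue();
    }

    public static void main(String[] args) {
        byte[] src  = new byte[]{1, 2, 3, 4};
        byte[] work = new byte[]{1, 2, 4, 3}; // same length, different content
        System.out.println(src.length == work.length); // true: a length check alone passes
        System.out.println(crc(src) == crc(work));     // false: the checksum check catches it
    }
}
```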

FYI, for backporting: DistCp in Hadoop 3.1 and earlier uses commons-logging, so 
that log statement will also need to change for those versions, even before you 
worry about branch-2.

> DistCp to supports checksum validation when copy blocks in parallel
> ---
>
> Key: HADOOP-16158
> URL: https://issues.apache.org/jira/browse/HADOOP-16158
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 3.2.0, 2.9.2
>Reporter: Kai Xie
>Assignee: Kai Xie
>Priority: Major
> Attachments: HADOOP-16158-001.patch, HADOOP-16158-002.patch, 
> HADOOP-16158-003.patch, HADOOP-16158-004.patch
>
>
> Copying blocks in parallel (enabled when blocks per chunk > 0) is a great 
> DistCp improvement that can hugely speed up copying big files. 
> But its checksum validation is skipped, e.g. in 
> `RetriableFileCopyCommand.java`
>  
> {code:java}
> if (!source.isSplit()) {
>   compareCheckSums(sourceFS, source.getPath(), sourceChecksum,
>   targetFS, targetPath);
> }
> {code}
> and this could result in checksum/data mismatch without notifying 
> developers/users (e.g. HADOOP-16049).
> I'd like to provide a patch to add the checksum validation.
>  






[GitHub] [hadoop] arp7 opened a new pull request #737: HDDS-1198. Rename chill mode to safe mode. Contributed by Siddharth Wagle.

2019-04-12 Thread GitBox
arp7 opened a new pull request #737: HDDS-1198. Rename chill mode to safe mode. 
Contributed by Siddharth Wagle.
URL: https://github.com/apache/hadoop/pull/737
 
 
   





[GitHub] [hadoop] ben-roling commented on issue #606: HADOOP-16190. S3A copyFile operation to include source versionID or etag in the copy request

2019-04-12 Thread GitBox
ben-roling commented on issue #606: HADOOP-16190. S3A copyFile operation to 
include source versionID or etag in the copy request
URL: https://github.com/apache/hadoop/pull/606#issuecomment-482735699
 
 
   Yep, no arguments here.





[jira] [Updated] (HADOOP-16158) DistCp to supports checksum validation when copy blocks in parallel

2019-04-12 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16158:

Summary: DistCp to supports checksum validation when copy blocks in 
parallel  (was: DistCp supports checksum validation when copy blocks in 
parallel)

> DistCp to supports checksum validation when copy blocks in parallel
> ---
>
> Key: HADOOP-16158
> URL: https://issues.apache.org/jira/browse/HADOOP-16158
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 3.2.0, 2.9.2
>Reporter: Kai Xie
>Assignee: Kai Xie
>Priority: Major
> Attachments: HADOOP-16158-001.patch, HADOOP-16158-002.patch, 
> HADOOP-16158-003.patch, HADOOP-16158-004.patch
>
>
> Copying blocks in parallel (enabled when blocks per chunk > 0) is a great 
> DistCp improvement that can hugely speed up copying big files. 
> But its checksum validation is skipped, e.g. in 
> `RetriableFileCopyCommand.java`
>  
> {code:java}
> if (!source.isSplit()) {
>   compareCheckSums(sourceFS, source.getPath(), sourceChecksum,
>   targetFS, targetPath);
> }
> {code}
> and this could result in checksum/data mismatch without notifying 
> developers/users (e.g. HADOOP-16049).
> I'd like to provide a patch to add the checksum validation.
>  






[GitHub] [hadoop] steveloughran commented on issue #606: HADOOP-16190. S3A copyFile operation to include source versionID or etag in the copy request

2019-04-12 Thread GitBox
steveloughran commented on issue #606: HADOOP-16190. S3A copyFile operation to 
include source versionID or etag in the copy request
URL: https://github.com/apache/hadoop/pull/606#issuecomment-482727455
 
 
   bq. . There we only apply constraints indicated by the 
fs.s3a.change.detection configuration whereas here you apply constraints 
regardless of that config. You probably already realize this, but just wanted 
to be sure. Still, I'm not bothered enough by the inconsistency to say you 
shouldn't go ahead with it if you like.
   
   I couldn't see a way of not making this mandatory; and at the same time, why 
would anyone not want a single copy to be of a single file rather than a mixture 
of blocks? If I'd known that this problem existed and could be stopped, I'd have 
had this patch in a long time ago. I'll probably backport all the way to Hadoop 
2.8+ for that reason, which is not something I was planning for the input stream 
code. 





[jira] [Commented] (HADOOP-16245) Enabling SSL within LdapGroupsMapping can break system SSL configs

2019-04-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16816654#comment-16816654
 ] 

Hadoop QA commented on HADOOP-16245:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m  
8s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 53s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 21 unchanged - 0 fixed = 22 total (was 21) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 32s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
11s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}101m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-16245 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12965749/HADOOP-16245.000.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ef29bc40103c 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0c1fec3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16148/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16148/testReport/ |
| Max. process+thread count | 1347 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoo

[GitHub] [hadoop] hanishakoneru commented on issue #724: HDDS-1376. Datanode exits while executing client command when scmId is null

2019-04-12 Thread GitBox
hanishakoneru commented on issue #724: HDDS-1376. Datanode exits while 
executing client command when scmId is null
URL: https://github.com/apache/hadoop/pull/724#issuecomment-482720420
 
 
   Fixed checkstyle and one related unit test failure. The other unit test 
failures are not related and pass locally.





[jira] [Created] (HADOOP-16252) Use configurable dynamo table name prefix in S3Guard tests

2019-04-12 Thread Ben Roling (JIRA)
Ben Roling created HADOOP-16252:
---

 Summary: Use configurable dynamo table name prefix in S3Guard tests
 Key: HADOOP-16252
 URL: https://issues.apache.org/jira/browse/HADOOP-16252
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Reporter: Ben Roling


Table names are hardcoded into tests for S3Guard with DynamoDB.  This makes it 
awkward to set up a least-privilege type AWS IAM user or role that can 
successfully execute the full test suite.  You either have to know all the 
specific hardcoded table names and give the user Dynamo read/write access to 
those by name or just give blanket read/write access to all Dynamo tables in 
the account.

I propose the tests use a configuration property to specify a prefix for the 
table names used.  Then the full test suite can be run by a user that is given 
read/write access to all tables with names starting with the configured prefix.
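
One possible shape for this, with a hypothetical property key and prefix (the actual names would be decided in the patch):

```java
public class TestTableNames {
    // Hypothetical configuration key for the test table-name prefix.
    static final String PREFIX_KEY = "fs.s3a.s3guard.test.dynamo.table.prefix";

    // Build the test table name from a configurable prefix, so an IAM policy
    // can grant DynamoDB read/write on "prefix*" tables only.
    static String testTableName(String prefix, String base) {
        return prefix + base;
    }

    public static void main(String[] args) {
        String prefix = System.getProperty(PREFIX_KEY, "s3guard-test-");
        System.out.println(testTableName(prefix, "metastore"));
    }
}
```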






[GitHub] [hadoop] hadoop-yetus commented on issue #735: ContainerStateMap cannot find container while allocating blocks.

2019-04-12 Thread GitBox
hadoop-yetus commented on issue #735: ContainerStateMap cannot find container 
while allocating blocks.
URL: https://github.com/apache/hadoop/pull/735#issuecomment-482716700
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 27 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1058 | trunk passed |
   | +1 | compile | 40 | trunk passed |
   | +1 | checkstyle | 16 | trunk passed |
   | +1 | mvnsite | 28 | trunk passed |
   | +1 | shadedclient | 653 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 39 | trunk passed |
   | +1 | javadoc | 26 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 35 | the patch passed |
   | +1 | compile | 23 | the patch passed |
   | +1 | javac | 23 | the patch passed |
   | +1 | checkstyle | 13 | the patch passed |
   | +1 | mvnsite | 26 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 707 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 46 | the patch passed |
   | +1 | javadoc | 18 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 93 | server-scm in the patch passed. |
   | +1 | asflicense | 25 | The patch does not generate ASF License warnings. |
   | | | 2955 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-735/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/735 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 4412a9f24568 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0c1fec3 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-735/1/testReport/ |
   | Max. process+thread count | 468 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-735/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16205) Backporting ABFS driver from trunk to branch 2.0

2019-04-12 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16816633#comment-16816633
 ] 

Gabor Bota commented on HADOOP-16205:
-

Running the tests with {{mvn -Dparallel-tests-abfs -DtestsThreadCount=8 clean 
verify}} I got the following error:
{noformat}
[ERROR] Tests run: 4, Failures: 1, Errors: 0, Skipped: 1, Time elapsed: 6.397 s 
<<< FAILURE! - in org.apache.hadoop.fs.azurebfs.ITestGetNameSpaceEnabled
[ERROR] 
testNonXNSAccount(org.apache.hadoop.fs.azurebfs.ITestGetNameSpaceEnabled)  Time 
elapsed: 6.396 s  <<< FAILURE!
java.lang.AssertionError: Expecting getIsNamespaceEnabled() return false
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertFalse(Assert.java:64)
at 
org.apache.hadoop.fs.azurebfs.ITestGetNameSpaceEnabled.testNonXNSAccount(ITestGetNameSpaceEnabled.java:61)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{noformat}

> Backporting ABFS driver from trunk to branch 2.0
> 
>
> Key: HADOOP-16205
> URL: https://issues.apache.org/jira/browse/HADOOP-16205
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.2
>Reporter: Esfandiar Manii
>Assignee: Yuan Gao
>Priority: Major
> Attachments: HADOOP-16205-branch-2-001.patch, 
> HADOOP-16205-branch-2-002.patch
>
>
> Back porting ABFS driver from trunk to 2.0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #724: HDDS-1376. Datanode exits while executing client command when scmId is null

2019-04-12 Thread GitBox
bharatviswa504 commented on a change in pull request #724: HDDS-1376. Datanode 
exits while executing client command when scmId is null
URL: https://github.com/apache/hadoop/pull/724#discussion_r275044004
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/KeyValueContainerLocationUtil.java
 ##
 @@ -78,7 +78,6 @@ private static String getBaseContainerLocation(String 
hddsVolumeDir,
  String scmId,
  long containerId) {
 Preconditions.checkNotNull(hddsVolumeDir, "Base Directory cannot be null");
-Preconditions.checkNotNull(scmId, "scmUuid cannot be null");
 
 Review comment:
   Same as above.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #724: HDDS-1376. Datanode exits while executing client command when scmId is null

2019-04-12 Thread GitBox
bharatviswa504 commented on a change in pull request #724: HDDS-1376. Datanode 
exits while executing client command when scmId is null
URL: https://github.com/apache/hadoop/pull/724#discussion_r275043940
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainer.java
 ##
 @@ -107,7 +107,6 @@ public void create(VolumeSet volumeSet, 
VolumeChoosingPolicy
 Preconditions.checkNotNull(volumeChoosingPolicy, "VolumeChoosingPolicy " +
 "cannot be null");
 Preconditions.checkNotNull(volumeSet, "VolumeSet cannot be null");
-Preconditions.checkNotNull(scmId, "scmId cannot be null");
 
 Review comment:
   Why do we need to remove these checks?





[GitHub] [hadoop] hextriclosan opened a new pull request #736: YARN-9469. Fix typo in YarnConfiguration.

2019-04-12 Thread GitBox
hextriclosan opened a new pull request #736: YARN-9469. Fix typo in 
YarnConfiguration.
URL: https://github.com/apache/hadoop/pull/736
 
 
   





[GitHub] [hadoop] bharatviswa504 opened a new pull request #735: ContainerStateMap cannot find container while allocating blocks.

2019-04-12 Thread GitBox
bharatviswa504 opened a new pull request #735: ContainerStateMap cannot find 
container while allocating blocks.
URL: https://github.com/apache/hadoop/pull/735
 
 
   





[GitHub] [hadoop] hextriclosan closed pull request #734: YARN-9469. Fix typo in YarnConfiguration.

2019-04-12 Thread GitBox
hextriclosan closed pull request #734: YARN-9469. Fix typo in YarnConfiguration.
URL: https://github.com/apache/hadoop/pull/734
 
 
   





[GitHub] [hadoop] hadoop-yetus commented on issue #734: YARN-9469. Fix typo in YarnConfiguration.

2019-04-12 Thread GitBox
hadoop-yetus commented on issue #734: YARN-9469. Fix typo in YarnConfiguration.
URL: https://github.com/apache/hadoop/pull/734#issuecomment-482700354
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 7 | https://github.com/apache/hadoop/pull/734 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/734 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-734/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Updated] (HADOOP-16245) Enabling SSL within LdapGroupsMapping can break system SSL configs

2019-04-12 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-16245:
-
Status: Patch Available  (was: In Progress)

> Enabling SSL within LdapGroupsMapping can break system SSL configs
> --
>
> Key: HADOOP-16245
> URL: https://issues.apache.org/jira/browse/HADOOP-16245
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 3.0.3, 3.1.1, 2.7.6, 2.8.4, 2.9.1
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-16245.000.patch
>
>
> When debugging an issue where one of our server components was unable to 
> communicate with other components via SSL, we realized that LdapGroupsMapping 
> sets its SSL configurations globally, rather than scoping them to the HTTP 
> clients it creates.
> {code:title=LdapGroupsMapping}
>   DirContext getDirContext() throws NamingException {
> if (ctx == null) {
>   // Set up the initial environment for LDAP connectivity
>   Hashtable<String, String> env = new Hashtable<String, String>();
>   env.put(Context.INITIAL_CONTEXT_FACTORY,
>   com.sun.jndi.ldap.LdapCtxFactory.class.getName());
>   env.put(Context.PROVIDER_URL, ldapUrl);
>   env.put(Context.SECURITY_AUTHENTICATION, "simple");
>   // Set up SSL security, if necessary
>   if (useSsl) {
> env.put(Context.SECURITY_PROTOCOL, "ssl");
> if (!keystore.isEmpty()) {
>   System.setProperty("javax.net.ssl.keyStore", keystore);
> }
> if (!keystorePass.isEmpty()) {
>   System.setProperty("javax.net.ssl.keyStorePassword", keystorePass);
> }
> if (!truststore.isEmpty()) {
>   System.setProperty("javax.net.ssl.trustStore", truststore);
> }
> if (!truststorePass.isEmpty()) {
>   System.setProperty("javax.net.ssl.trustStorePassword",
>   truststorePass);
> }
>   }
>   env.put(Context.SECURITY_PRINCIPAL, bindUser);
>   env.put(Context.SECURITY_CREDENTIALS, bindPassword);
>   env.put("com.sun.jndi.ldap.connect.timeout", 
> conf.get(CONNECTION_TIMEOUT,
>   String.valueOf(CONNECTION_TIMEOUT_DEFAULT)));
>   env.put("com.sun.jndi.ldap.read.timeout", conf.get(READ_TIMEOUT,
>   String.valueOf(READ_TIMEOUT_DEFAULT)));
>   ctx = new InitialDirContext(env);
> }
> {code}
> Notice the {{System.setProperty()}} calls, which will change settings 
> JVM-wide. This causes issues for other SSL clients, which may rely on the 
> default JVM truststore being used. This behavior was initially introduced by 
> HADOOP-8121, and extended to include the truststore configurations in 
> HADOOP-12862.
> The correct approach is to use a mechanism which is scoped to the LDAP 
> requests only. The right approach appears to be to use the 
> {{java.naming.ldap.factory.socket}} parameter to set the socket factory to a 
> custom SSL socket factory which correctly sets the key and trust store 
> parameters. See an example [here|https://stackoverflow.com/a/4615497/4979203].
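A rough, hypothetical sketch of the scoped-factory direction proposed above: a `SocketFactory` subclass that JNDI can instantiate via the `java.naming.ldap.factory.socket` environment property, keeping SSL settings on the LDAP client instead of setting them JVM-wide. The class name is illustrative, and for simplicity the sketch delegates to the JVM default `SSLContext` rather than building one from the configured key/trust stores, which a real fix would do.

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.Socket;
import java.security.NoSuchAlgorithmException;
import javax.net.SocketFactory;
import javax.net.ssl.SSLContext;

// Hypothetical sketch: JNDI loads this class by name from the
// "java.naming.ldap.factory.socket" environment property, so SSL
// configuration stays scoped to LDAP connections instead of being
// applied to the whole JVM via System.setProperty().
public class LdapSslSocketFactory extends SocketFactory {
    private final SocketFactory delegate;

    private LdapSslSocketFactory() throws NoSuchAlgorithmException {
        // A real implementation would build its SSLContext from the
        // LdapGroupsMapping keystore/truststore settings; the JVM default
        // context is used here only to keep the sketch self-contained.
        this.delegate = SSLContext.getDefault().getSocketFactory();
    }

    // JNDI invokes this static method on the configured factory class.
    public static SocketFactory getDefault() {
        try {
            return new LdapSslSocketFactory();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(
                "cannot create LDAP SSL socket factory", e);
        }
    }

    @Override
    public Socket createSocket(String host, int port) throws IOException {
        return delegate.createSocket(host, port);
    }

    @Override
    public Socket createSocket(String host, int port, InetAddress localHost,
        int localPort) throws IOException {
        return delegate.createSocket(host, port, localHost, localPort);
    }

    @Override
    public Socket createSocket(InetAddress host, int port) throws IOException {
        return delegate.createSocket(host, port);
    }

    @Override
    public Socket createSocket(InetAddress address, int port,
        InetAddress localAddress, int localPort) throws IOException {
        return delegate.createSocket(address, port, localAddress, localPort);
    }
}
```

The factory would then be wired in through the LDAP environment rather than system properties, e.g. `env.put("java.naming.ldap.factory.socket", LdapSslSocketFactory.class.getName());`.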






[jira] [Commented] (HADOOP-16245) Enabling SSL within LdapGroupsMapping can break system SSL configs

2019-04-12 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16816588#comment-16816588
 ] 

Erik Krogen commented on HADOOP-16245:
--

Attaching a v000 patch for initial review while we start doing some testing 
internally to make sure we didn't break anything.

> Enabling SSL within LdapGroupsMapping can break system SSL configs
> --
>
> Key: HADOOP-16245
> URL: https://issues.apache.org/jira/browse/HADOOP-16245
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 2.9.1, 2.8.4, 2.7.6, 3.1.1, 3.0.3
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-16245.000.patch
>
>






[jira] [Updated] (HADOOP-16245) Enabling SSL within LdapGroupsMapping can break system SSL configs

2019-04-12 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-16245:
-
Attachment: HADOOP-16245.000.patch

> Enabling SSL within LdapGroupsMapping can break system SSL configs
> --
>
> Key: HADOOP-16245
> URL: https://issues.apache.org/jira/browse/HADOOP-16245
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 2.9.1, 2.8.4, 2.7.6, 3.1.1, 3.0.3
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-16245.000.patch
>
>






[jira] [Work started] (HADOOP-16245) Enabling SSL within LdapGroupsMapping can break system SSL configs

2019-04-12 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-16245 started by Erik Krogen.

> Enabling SSL within LdapGroupsMapping can break system SSL configs
> --
>
> Key: HADOOP-16245
> URL: https://issues.apache.org/jira/browse/HADOOP-16245
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 2.9.1, 2.8.4, 2.7.6, 3.1.1, 3.0.3
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
>






[GitHub] [hadoop] hadoop-yetus commented on issue #714: HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.

2019-04-12 Thread GitBox
hadoop-yetus commented on issue #714: HDDS-1406. Avoid usage of commonPool in 
RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#issuecomment-482682418
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 63 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1109 | trunk passed |
   | +1 | compile | 31 | trunk passed |
   | +1 | checkstyle | 21 | trunk passed |
   | +1 | mvnsite | 31 | trunk passed |
   | +1 | shadedclient | 777 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 40 | trunk passed |
   | +1 | javadoc | 24 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 33 | the patch passed |
   | +1 | compile | 25 | the patch passed |
   | +1 | javac | 25 | the patch passed |
   | +1 | checkstyle | 14 | the patch passed |
   | +1 | mvnsite | 26 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 809 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 45 | the patch passed |
   | +1 | javadoc | 19 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 182 | server-scm in the patch passed. |
   | +1 | asflicense | 26 | The patch does not generate ASF License warnings. |
   | | | 3380 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-714/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/714 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 4c374b190af4 4.4.0-141-generic #167~14.04.1-Ubuntu SMP Mon 
Dec 10 13:20:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0c1fec3 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-714/3/testReport/ |
   | Max. process+thread count | 394 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-714/3/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #714: HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.

2019-04-12 Thread GitBox
hadoop-yetus commented on issue #714: HDDS-1406. Avoid usage of commonPool in 
RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#issuecomment-482682282
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 25 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1066 | trunk passed |
   | +1 | compile | 49 | trunk passed |
   | +1 | checkstyle | 22 | trunk passed |
   | +1 | mvnsite | 32 | trunk passed |
   | +1 | shadedclient | 665 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 46 | trunk passed |
   | +1 | javadoc | 28 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 36 | the patch passed |
   | +1 | compile | 27 | the patch passed |
   | +1 | javac | 27 | the patch passed |
   | +1 | checkstyle | 16 | the patch passed |
   | +1 | mvnsite | 27 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 748 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 48 | the patch passed |
   | +1 | javadoc | 22 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 98 | server-scm in the patch passed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3077 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-714/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/714 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 98a0fe5d23aa 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0c1fec3 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-714/4/testReport/ |
   | Max. process+thread count | 411 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-714/4/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] bharatviswa504 commented on issue #714: HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.

2019-04-12 Thread GitBox
bharatviswa504 commented on issue #714: HDDS-1406. Avoid usage of commonPool in 
RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#issuecomment-482663923
 
 
   Thank You @lokeshj1703  for the review.
   Fixed review comments.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #714: HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.

2019-04-12 Thread GitBox
bharatviswa504 commented on a change in pull request #714: HDDS-1406. Avoid 
usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r275004441
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineUtils.java
 ##
 @@ -51,6 +52,15 @@
   private static final Logger LOG =
   LoggerFactory.getLogger(RatisPipelineUtils.class);
 
+  // Set parallelism at 3, as now in Ratis we create 1 and 3 node pipelines.
+  private static int parallelismForPool =
+  (Runtime.getRuntime().availableProcessors() > 3) ? 3 :
+  Runtime.getRuntime().availableProcessors();
+
+  private static ForkJoinPool pool = new ForkJoinPool(parallelismForPool);
 
 Review comment:
   Done
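The bounded-pool idiom in the diff above can be sketched in isolation as follows (class and method names are illustrative, not from the patch):

```java
import java.util.concurrent.ForkJoinPool;

public class BoundedPoolDemo {
    // Cap the pool's parallelism at 3 -- the largest Ratis pipeline size --
    // while never requesting more threads than the host has processors.
    static ForkJoinPool newPipelinePool() {
        int parallelism = Math.min(3, Runtime.getRuntime().availableProcessors());
        return new ForkJoinPool(parallelism);
    }
}
```

Using a dedicated pool this way avoids contending with unrelated tasks on the JVM-wide `ForkJoinPool.commonPool()`, which is what the HDDS-1406 change is about.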





[GitHub] [hadoop] bharatviswa504 merged pull request #727: HDDS-1425. Ozone compose files are not compatible with the latest docker-compose

2019-04-12 Thread GitBox
bharatviswa504 merged pull request #727: HDDS-1425. Ozone compose files are not 
compatible with the latest docker-compose
URL: https://github.com/apache/hadoop/pull/727
 
 
   





[GitHub] [hadoop] bharatviswa504 commented on issue #727: HDDS-1425. Ozone compose files are not compatible with the latest docker-compose

2019-04-12 Thread GitBox
bharatviswa504 commented on issue #727: HDDS-1425. Ozone compose files are not 
compatible with the latest docker-compose
URL: https://github.com/apache/hadoop/pull/727#issuecomment-482643678
 
 
   +1 LGTM.
   Thank You @elek for the fix.
   I will commit this.





[jira] [Updated] (HADOOP-16243) Change Log Level to trace in NetUtils.java

2019-04-12 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HADOOP-16243:

   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

This has been committed to trunk by [~arpitagarwal]

Thank You [~candychencan] for the fix and [~arpitagarwal] for the review and 
commit.

> Change Log Level to trace in NetUtils.java
> --
>
> Key: HADOOP-16243
> URL: https://issues.apache.org/jira/browse/HADOOP-16243
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: chencan
>Priority: Major
>  Labels: newbie
> Fix For: 3.3.0
>
> Attachments: HDDS-1407.001.patch
>
>
> When there is no String constructor for the exception, we log a warning
> and rethrow the exception. We can change the log level to TRACE/DEBUG.
>  
> {code:java}
> private static <T extends IOException> T wrapWithMessage(
> T exception, String msg) throws T {
> Class<? extends Throwable> clazz = exception.getClass();
> try {
> Constructor<? extends Throwable> ctor = clazz.getConstructor(String.class);
> Throwable t = ctor.newInstance(msg);
> return (T)(t.initCause(exception));
> } catch (Throwable e) {
> LOG.warn("Unable to wrap exception of type {}: it has no (String) "
> + "constructor", clazz, e);
> throw exception;
> }
> }{code}
> {code:java}
> 2019-04-09 18:07:27,824 WARN ipc.Client 
> (Client.java:handleConnectionFailure(938)) - Interrupted while trying for 
> connection
> 2019-04-09 18:07:27,826 WARN net.NetUtils 
> (NetUtils.java:wrapWithMessage(834)) - Unable to wrap exception of type class 
> java.nio.channels.ClosedByInterruptException: it has no (String) constructor
> java.lang.NoSuchMethodException: 
> java.nio.channels.ClosedByInterruptException.<init>(java.lang.String)
> at java.lang.Class.getConstructor0(Class.java:3082)
> at java.lang.Class.getConstructor(Class.java:1825)
> at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:830)
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806)
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1515)
> at org.apache.hadoop.ipc.Client.call(Client.java:1457)
> at org.apache.hadoop.ipc.Client.call(Client.java:1367)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> at com.sun.proxy.$Proxy84.register(Unknown Source)
> at 
> org.apache.hadoop.ozone.protocolPB.StorageContainerDatanodeProtocolClientSideTranslatorPB.register(StorageContainerDatanodeProtocolClientSideTranslatorPB.java:160)
> at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:120)
> at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:47)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748){code}
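The failure is easy to reproduce outside Hadoop. A minimal probe (class and method names are hypothetical, not part of NetUtils) showing that `ClosedByInterruptException` has no `(String)` constructor while `IOException` does:

```java
import java.lang.reflect.Constructor;
import java.nio.channels.ClosedByInterruptException;

// Minimal reproduction of the wrap failure: some JDK exceptions expose no
// (String) constructor, so getConstructor throws NoSuchMethodException and
// NetUtils falls into the branch that logs the warning above.
public class WrapProbe {
    public static boolean hasStringCtor(Class<? extends Throwable> clazz) {
        try {
            Constructor<? extends Throwable> ctor =
                clazz.getConstructor(String.class);
            return ctor != null;
        } catch (NoSuchMethodException e) {
            return false; // the path that currently logs at WARN
        }
    }

    public static void main(String[] args) {
        System.out.println(hasStringCtor(ClosedByInterruptException.class)); // false
        System.out.println(hasStringCtor(java.io.IOException.class));        // true
    }
}
```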



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 merged pull request #612: HDDS-1285. Implement actions need to be taken after chill mode exit w…

2019-04-12 Thread GitBox
bharatviswa504 merged pull request #612: HDDS-1285. Implement actions need to 
be taken after chill mode exit w…
URL: https://github.com/apache/hadoop/pull/612
 
 
   





[GitHub] [hadoop] bharatviswa504 commented on issue #612: HDDS-1285. Implement actions need to be taken after chill mode exit w…

2019-04-12 Thread GitBox
bharatviswa504 commented on issue #612: HDDS-1285. Implement actions need to be 
taken after chill mode exit w…
URL: https://github.com/apache/hadoop/pull/612#issuecomment-482641947
 
 
   Thank You @nandakumar131 for the review.
   I have fixed the checkstyle issue, will commit this shortly.





[jira] [Commented] (HADOOP-16227) Upgrade checkstyle to 8.19

2019-04-12 Thread Masatake Iwasaki (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16816351#comment-16816351
 ] 

Masatake Iwasaki commented on HADOOP-16227:
---

+1. I got the same number of checkstyle errors before and after applying the
patch for all sub-projects.

> Upgrade checkstyle to 8.19
> --
>
> Key: HADOOP-16227
> URL: https://issues.apache.org/jira/browse/HADOOP-16227
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16227.001.patch
>
>







[GitHub] [hadoop] hextriclosan opened a new pull request #734: YARN-9469. Fix typo in YarnConfiguration.

2019-04-12 Thread GitBox
hextriclosan opened a new pull request #734: YARN-9469. Fix typo in 
YarnConfiguration.
URL: https://github.com/apache/hadoop/pull/734
 
 
   





[GitHub] [hadoop] ben-roling commented on issue #606: HADOOP-16190. S3A copyFile operation to include source versionID or etag in the copy request

2019-04-12 Thread GitBox
ben-roling commented on issue #606: HADOOP-16190. S3A copyFile operation to 
include source versionID or etag in the copy request
URL: https://github.com/apache/hadoop/pull/606#issuecomment-482597747
 
 
   Given you don't have a problem with what I'm doing in #646 going over top of 
it later, I have no problem with this going in as-is or being patched into 
older versions.
   
   I will also note though that this seems somewhat inconsistent with changes 
already in S3AInputStream from HADOOP-15625.  There we only apply constraints 
indicated by the fs.s3a.change.detection configuration whereas here you apply 
constraints regardless of that config.  You probably already realize this, but 
just wanted to be sure.  Still, I'm not bothered enough by the inconsistency to 
say you shouldn't go ahead with it if you like.





[GitHub] [hadoop] hadoop-yetus commented on issue #726: HDDS-1424. Support multi-container robot test execution

2019-04-12 Thread GitBox
hadoop-yetus commented on issue #726: HDDS-1424. Support multi-container robot 
test execution
URL: https://github.com/apache/hadoop/pull/726#issuecomment-482587051
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1035 | trunk passed |
   | +1 | compile | 28 | trunk passed |
   | +1 | mvnsite | 27 | trunk passed |
   | +1 | shadedclient | 650 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 17 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 22 | dist in the patch failed. |
   | +1 | compile | 18 | the patch passed |
   | +1 | javac | 18 | the patch passed |
   | +1 | mvnsite | 21 | the patch passed |
   | -1 | shellcheck | 2 | The patch generated 2 new + 0 unchanged - 1 fixed = 
2 total (was 1) |
   | +1 | shelldocs | 14 | The patch generated 0 new + 104 unchanged - 132 
fixed = 104 total (was 236) |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 697 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 16 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 21 | dist in the patch passed. |
   | +1 | asflicense | 27 | The patch does not generate ASF License warnings. |
   | | | 2753 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-726/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/726 |
   | Optional Tests |  dupname  asflicense  shellcheck  shelldocs  compile  
javac  javadoc  mvninstall  mvnsite  unit  shadedclient  xml  |
   | uname | Linux 3ac617fd7f4f 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / abace70 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | shellcheck | v0.4.6 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-726/2/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
   | shellcheck | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-726/2/artifact/out/diff-patch-shellcheck.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-726/2/testReport/ |
   | Max. process+thread count | 440 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-726/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #726: HDDS-1424. Support multi-container robot test execution

2019-04-12 Thread GitBox
hadoop-yetus commented on a change in pull request #726: HDDS-1424. Support 
multi-container robot test execution
URL: https://github.com/apache/hadoop/pull/726#discussion_r274920281
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/test-all.sh
 ##
 @@ -0,0 +1,27 @@
+#!/usr/bin/env bash
+
+SCRIPT_DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null && pwd )
+ALL_RESULT_DIR="$SCRIPT_DIR/result"
+
+mkdir -p "$ALL_RESULT_DIR"
+rm -f "$ALL_RESULT_DIR"/*
+
+RESULT=0
+IFS=$'\n'
+# shellcheck disable=SC2044
 
 Review comment:
   shellcheck:20: note: Double quote to prevent globbing and word splitting. 
[SC2086]
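A standalone bash sketch (temp directory and file names are made up) of the quoting rule shellcheck is pointing at: quote the variable, but keep the glob outside the quotes so it still expands:

```shell
#!/usr/bin/env bash
# Hypothetical demo of SC2086-style quoting: a glob inside quotes is a
# literal string, while a quoted variable followed by an unquoted glob
# both expands and stays safe with spaces in the path.
dir=$(mktemp -d)
touch "$dir/a.log" "$dir/b.log"

rm -f "$dir/*.log"                 # WRONG: the quoted glob never expands
remaining_after_wrong=$(ls "$dir" | wc -l | tr -d ' ')

rm -f "$dir"/*.log                 # RIGHT: glob expands; $dir stays quoted
remaining_after_right=$(ls "$dir" | wc -l | tr -d ' ')

echo "$remaining_after_wrong $remaining_after_right"   # prints: 2 0
rmdir "$dir"
```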
   





[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #726: HDDS-1424. Support multi-container robot test execution

2019-04-12 Thread GitBox
hadoop-yetus commented on a change in pull request #726: HDDS-1424. Support 
multi-container robot test execution
URL: https://github.com/apache/hadoop/pull/726#discussion_r274920290
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/test-single.sh
 ##
 @@ -0,0 +1,53 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#
+# Single test executor, can start a single robot test in any running container.
+#
+
+
+COMPOSE_DIR="$PWD"
+export COMPOSE_DIR
+
+if [[ ! -f "$COMPOSE_DIR/docker-compose.yaml" ]]; then
+echo "docker-compose.yaml is missing from the current dir. Please run this 
command from a docker-compose environment."
+exit 1
+fi
+if (( $# != 2 )); then
+cat << EOF
+   Single test executor
+
+   Usage:
+
+ ../test-single.sh <container> <robot_test>
+
+container: Name of the running docker-compose container 
(docker-compose.yaml is required in the current directory)
+
+robot_test: name of the robot test or directory relative to the 
smoketest dir.
+
+
+
+EOF
+exit 1
+fi
+
+# shellcheck source=testlib.sh
 
 Review comment:
   shellcheck:1: note: Not following: testlib.sh: openBinaryFile: does not 
exist (No such file or directory) [SC1091]
   





[GitHub] [hadoop] hadoop-yetus commented on issue #733: HDDS-1284. Adjust default values of pipline recovery for more resilient service restart

2019-04-12 Thread GitBox
hadoop-yetus commented on issue #733: HDDS-1284. Adjust default values of 
pipline recovery for more resilient service restart
URL: https://github.com/apache/hadoop/pull/733#issuecomment-482584703
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 851 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1306 | trunk passed |
   | +1 | compile | 66 | trunk passed |
   | +1 | checkstyle | 21 | trunk passed |
   | +1 | mvnsite | 43 | trunk passed |
   | +1 | shadedclient | 736 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 80 | trunk passed |
   | +1 | javadoc | 39 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 44 | the patch passed |
   | +1 | compile | 33 | the patch passed |
   | +1 | javac | 33 | the patch passed |
   | +1 | checkstyle | 15 | the patch passed |
   | +1 | mvnsite | 36 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 744 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 81 | the patch passed |
   | +1 | javadoc | 35 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 85 | common in the patch passed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 4315 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-733/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/733 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
   | uname | Linux 630e39838fbf 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / abace70 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-733/1/testReport/ |
   | Max. process+thread count | 440 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common U: hadoop-hdds/common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-733/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #732: HDDS-1387. ConcurrentModificationException in TestMiniChaosOzoneCluster

2019-04-12 Thread GitBox
hadoop-yetus commented on issue #732: HDDS-1387. 
ConcurrentModificationException in TestMiniChaosOzoneCluster
URL: https://github.com/apache/hadoop/pull/732#issuecomment-482576790
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 94 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1098 | trunk passed |
   | +1 | compile | 43 | trunk passed |
   | +1 | checkstyle | 20 | trunk passed |
   | +1 | mvnsite | 30 | trunk passed |
   | +1 | shadedclient | 718 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 0 | trunk passed |
   | +1 | javadoc | 21 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 30 | the patch passed |
   | +1 | compile | 26 | the patch passed |
   | +1 | javac | 26 | the patch passed |
   | +1 | checkstyle | 15 | the patch passed |
   | +1 | mvnsite | 27 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | -1 | shadedclient | 733 | patch has errors when building and testing our 
client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 0 | the patch passed |
   | +1 | javadoc | 13 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 19 | integration-test in the patch failed. |
   | 0 | asflicense | 28 | ASF License check generated no output? |
   | | | 3021 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-732/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/732 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux ff8b87dbbde9 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / abace70 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-732/1/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-732/1/testReport/ |
   | Max. process+thread count | 398 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/integration-test U: 
hadoop-ozone/integration-test |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-732/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #707: HDDS-1402. Remove unused ScmBlockLocationProtocol from ObjectStoreHandler

2019-04-12 Thread GitBox
hadoop-yetus commented on issue #707: HDDS-1402. Remove unused 
ScmBlockLocationProtocol from ObjectStoreHandler
URL: https://github.com/apache/hadoop/pull/707#issuecomment-482571528
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 22 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1181 | trunk passed |
   | +1 | compile | 50 | trunk passed |
   | +1 | checkstyle | 19 | trunk passed |
   | +1 | mvnsite | 28 | trunk passed |
   | +1 | shadedclient | 869 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 43 | trunk passed |
   | +1 | javadoc | 29 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 51 | the patch passed |
   | +1 | compile | 22 | the patch passed |
   | +1 | javac | 22 | the patch passed |
   | +1 | checkstyle | 16 | the patch passed |
   | +1 | mvnsite | 26 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 1011 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 49 | the patch passed |
   | +1 | javadoc | 24 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 32 | objectstore-service in the patch passed. |
   | +1 | asflicense | 65 | The patch does not generate ASF License warnings. |
   | | | 3654 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-707/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/707 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux b4a77c63e2e9 4.4.0-141-generic #167~14.04.1-Ubuntu SMP Mon 
Dec 10 13:20:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / abace70 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-707/3/testReport/ |
   | Max. process+thread count | 295 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/objectstore-service U: 
hadoop-ozone/objectstore-service |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-707/3/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #727: HDDS-1425. Ozone compose files are not compatible with the latest docker-compose

2019-04-12 Thread GitBox
hadoop-yetus commented on issue #727: HDDS-1425. Ozone compose files are not 
compatible with the latest docker-compose
URL: https://github.com/apache/hadoop/pull/727#issuecomment-482566188
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 28 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1128 | trunk passed |
   | +1 | compile | 69 | trunk passed |
   | +1 | mvnsite | 26 | trunk passed |
   | +1 | shadedclient | 639 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 23 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 25 | dist in the patch failed. |
   | +1 | compile | 33 | the patch passed |
   | +1 | javac | 33 | the patch passed |
   | +1 | mvnsite | 35 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | shelldocs | 16 | The patch generated 0 new + 104 unchanged - 132 
fixed = 104 total (was 236) |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 707 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 15 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 19 | dist in the patch passed. |
   | +1 | asflicense | 27 | The patch does not generate ASF License warnings. |
   | | | 2920 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-727/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/727 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  shellcheck  shelldocs  |
   | uname | Linux f25d9821457b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / abace70 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | shellcheck | v0.4.6 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-727/1/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-727/1/testReport/ |
   | Max. process+thread count | 457 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-727/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] elek opened a new pull request #733: HDDS-1284. Adjust default values of pipline recovery for more resilient service restart

2019-04-12 Thread GitBox
elek opened a new pull request #733: HDDS-1284. Adjust default values of 
pipline recovery for more resilient service restart
URL: https://github.com/apache/hadoop/pull/733
 
 
   As of now we have the following algorithm to handle node failures:
   
   1. In case of a missing node, the leader of the pipeline or the SCM can 
detect the missing heartbeats.
   2. SCM will start to close the pipeline (CLOSING state) and try to close the 
containers with the remaining nodes in the pipeline.
   3. After 5 minutes the pipeline will be destroyed (CLOSED) and a new 
pipeline can be created from the healthy nodes (one node can be part of only 
one pipeline at the same time).
   
   While this algorithm can work well on a big cluster, it doesn't provide 
very good usability on small clusters:
   
   Use case 1:
   
   Given 3 nodes, in case of a service restart, if the restart takes more than 
90s, the pipeline will be moved to the CLOSING state. For the next 5 minutes 
(ozone.scm.pipeline.destroy.timeout) the pipeline will remain in the CLOSING 
state. As there are no more nodes and we can't assign the same node to two 
different pipelines, the cluster will be unavailable for 5 minutes.
   
   Use case 2:
   
   Given 90 nodes and 30 pipelines, where all the pipelines are spread across 
3 racks. Let's stop one rack. As all the pipelines are affected, they will 
all be moved to the CLOSING state. We have no free nodes, therefore we need 
to wait for 5 minutes to write any data to the cluster.
   
   These problems can be solved in multiple ways:
   
   1. Instead of waiting 5 minutes, destroy the pipeline when all the 
containers are reported to be closed. (Most of the time this is enough, but 
some container reports can be missing.)
   2. Support multi-raft and open a pipeline as soon as we have enough nodes 
(even if the nodes already have CLOSING pipelines).
   
   Both options require more work on the pipeline management side. For 0.4.0 
we can adjust the following parameters to get a better user experience:
   
   {code}
   <property>
     <name>ozone.scm.pipeline.destroy.timeout</name>
     <value>60s</value>
     <tag>OZONE, SCM, PIPELINE</tag>
     <description>
       Once a pipeline is closed, SCM should wait for the above configured
       time before destroying a pipeline.
     </description>
   </property>
   <property>
     <name>ozone.scm.stale.node.interval</name>
     <value>90s</value>
     <tag>OZONE, MANAGEMENT</tag>
     <description>
       The interval for stale node flagging. Please
       see ozone.scm.heartbeat.thread.interval before changing this value.
     </description>
   </property>
   {code}
   
   First of all, we can be more optimistic and mark a node as stale only 
after 5 minutes instead of 90s. 5 minutes should be enough most of the time 
to recover the nodes.
   
   Second: we can decrease ozone.scm.pipeline.destroy.timeout. Ideally the 
close command is sent by the SCM to the datanode with a heartbeat (HB). 
Between two HBs we have enough time to close all the containers via Ratis. 
With the next HB, the datanode can report the successful close. (If the 
containers can't be closed, the SCM can manage the QUASI_CLOSED containers.)
   
   We need to wait 29 seconds (worst case) for the next HB, and 29+30 seconds 
for the confirmation. --> 66 seconds seems to be a safe choice (assuming that 6 
seconds is enough to process the report about the successful closing)
   
   See: https://issues.apache.org/jira/browse/HDDS-1284





[GitHub] [hadoop] elek opened a new pull request #732: HDDS-1387. ConcurrentModificationException in TestMiniChaosOzoneCluster

2019-04-12 Thread GitBox
elek opened a new pull request #732: HDDS-1387. ConcurrentModificationException 
in TestMiniChaosOzoneCluster
URL: https://github.com/apache/hadoop/pull/732
 
 
   TestMiniChaosOzoneCluster is failing with the below exception
   {noformat}
   [ERROR] org.apache.hadoop.ozone.TestMiniChaosOzoneCluster  Time elapsed: 
265.679 s  <<< ERROR!
   java.util.ConcurrentModificationException
at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:909)
at java.util.ArrayList$Itr.next(ArrayList.java:859)
at 
org.apache.hadoop.ozone.MiniOzoneClusterImpl.stop(MiniOzoneClusterImpl.java:350)
at 
org.apache.hadoop.ozone.MiniOzoneClusterImpl.shutdown(MiniOzoneClusterImpl.java:325)
at 
org.apache.hadoop.ozone.MiniOzoneChaosCluster.shutdown(MiniOzoneChaosCluster.java:130)
at 
org.apache.hadoop.ozone.TestMiniChaosOzoneCluster.shutdown(TestMiniChaosOzoneCluster.java:92)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
   {noformat}
   
   See: https://issues.apache.org/jira/browse/HDDS-1387
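   The stack trace points at MiniOzoneClusterImpl.stop() hitting a fail-fast 
iterator: an ArrayList is structurally modified while a for-each loop is 
iterating it. A minimal, self-contained Java illustration of the failure mode 
and the iterator-based fix (single-threaded here for simplicity; in the chaos 
cluster the modification may come from another thread):

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.Iterator;
import java.util.List;

public class CmeDemo {
  /** Returns true if mutating the list inside a for-each loop throws CME. */
  static boolean removeDuringForEach(List<String> list) {
    try {
      for (String s : list) {
        list.remove(s); // structural modification mid-iteration
      }
      return false;
    } catch (ConcurrentModificationException e) {
      return true;     // fail-fast iterator detected the comodification
    }
  }

  /** Safe variant: remove elements through the iterator itself. */
  static int removeViaIterator(List<String> list) {
    for (Iterator<String> it = list.iterator(); it.hasNext(); ) {
      it.next();
      it.remove();     // keeps the iterator's modCount in sync
    }
    return list.size();
  }

  public static void main(String[] args) {
    System.out.println(removeDuringForEach(
        new ArrayList<>(List.of("dn1", "dn2", "dn3")))); // true
    System.out.println(removeViaIterator(
        new ArrayList<>(List.of("dn1", "dn2", "dn3")))); // 0
  }
}
```

   Iterating over a snapshot (e.g. a copied list) would work equally well 
when the removal must happen from a different thread.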





[GitHub] [hadoop] elek commented on issue #707: HDDS-1402. Remove unused ScmBlockLocationProtocol from ObjectStoreHandler

2019-04-12 Thread GitBox
elek commented on issue #707: HDDS-1402. Remove unused ScmBlockLocationProtocol 
from ObjectStoreHandler
URL: https://github.com/apache/hadoop/pull/707#issuecomment-482542353
 
 
   Thanks for the review, @nandakumar131. Fixed the checkstyle issue (and rebased).





[GitHub] [hadoop] elek commented on a change in pull request #727: HDDS-1425. Ozone compose files are not compatible with the latest docker-compose

2019-04-12 Thread GitBox
elek commented on a change in pull request #727: HDDS-1425. Ozone compose files 
are not compatible with the latest docker-compose
URL: https://github.com/apache/hadoop/pull/727#discussion_r274867534
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/ozone-recon/docker-config
 ##
 @@ -60,7 +60,8 @@ LOG4J2.PROPERTIES_appender.console.layout.type=PatternLayout
 LOG4J2.PROPERTIES_appender.console.layout.pattern=%d{DEFAULT} | %-5level | 
%c{1} | %msg | %throwable{3} %n
 LOG4J2.PROPERTIES_appender.rolling.type=RollingFile
 LOG4J2.PROPERTIES_appender.rolling.name=RollingFile
-LOG4J2.PROPERTIES_appender.rolling.fileName 
=${sys:hadoop.log.dir}/om-audit-${hostName}.log
+LOG4J2.PROPERTIES_appender.rolling.fileName
+=${sys:hadoop.log.dir}/om-audit-${hostName}.log
 
 Review comment:
   oh, thanks. You saved me. Fixed.





[GitHub] [hadoop] elek closed pull request #723: HDDS-1420. Tracing exception in DataNode HddsDispatcher. Contributed …

2019-04-12 Thread GitBox
elek closed pull request #723: HDDS-1420. Tracing exception in DataNode 
HddsDispatcher. Contributed …
URL: https://github.com/apache/hadoop/pull/723
 
 
   





[GitHub] [hadoop] elek commented on issue #723: HDDS-1420. Tracing exception in DataNode HddsDispatcher. Contributed …

2019-04-12 Thread GitBox
elek commented on issue #723: HDDS-1420. Tracing exception in DataNode 
HddsDispatcher. Contributed …
URL: https://github.com/apache/hadoop/pull/723#issuecomment-482537186
 
 
   Created: https://issues.apache.org/jira/browse/HDDS-1435
   
   Both the tests have similar failures in the log (NotLeaderException...)





[GitHub] [hadoop] elek closed pull request #728: HDDS-1419. Fix shellcheck errors in start-chaos.sh

2019-04-12 Thread GitBox
elek closed pull request #728: HDDS-1419. Fix shellcheck errors in 
start-chaos.sh
URL: https://github.com/apache/hadoop/pull/728
 
 
   





[GitHub] [hadoop] elek closed pull request #722: HDDS-1421. Avoid unnecessary object allocations in TracingUtil. Contr…

2019-04-12 Thread GitBox
elek closed pull request #722: HDDS-1421. Avoid unnecessary object allocations 
in TracingUtil. Contr…
URL: https://github.com/apache/hadoop/pull/722
 
 
   





[GitHub] [hadoop] elek commented on issue #722: HDDS-1421. Avoid unnecessary object allocations in TracingUtil. Contr…

2019-04-12 Thread GitBox
elek commented on issue #722: HDDS-1421. Avoid unnecessary object allocations 
in TracingUtil. Contr…
URL: https://github.com/apache/hadoop/pull/722#issuecomment-482515514
 
 
   Merged to trunk. Thanks @arp7 for the contribution.





[GitHub] [hadoop] nandakumar131 commented on issue #707: HDDS-1402. Remove unused ScmBlockLocationProtocol from ObjectStoreHandler

2019-04-12 Thread GitBox
nandakumar131 commented on issue #707: HDDS-1402. Remove unused 
ScmBlockLocationProtocol from ObjectStoreHandler
URL: https://github.com/apache/hadoop/pull/707#issuecomment-482511243
 
 
   @elek The changes look good. We have a checkstyle issue which is related. 
Test failure is not related to this change and it is tracked in HDDS-1413.





[GitHub] [hadoop] elek closed pull request #730: HDDS-1426. Minor logging improvements for MiniOzoneChaosCluster. Cont…

2019-04-12 Thread GitBox
elek closed pull request #730: HDDS-1426. Minor logging improvements for 
MiniOzoneChaosCluster. Cont…
URL: https://github.com/apache/hadoop/pull/730
 
 
   





[GitHub] [hadoop] hadoop-yetus commented on issue #713: HDDS-1192. Support -conf command line argument in GenericCli

2019-04-12 Thread GitBox
hadoop-yetus commented on issue #713: HDDS-1192. Support -conf command line 
argument in GenericCli
URL: https://github.com/apache/hadoop/pull/713#issuecomment-482508977
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 28 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 60 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1017 | trunk passed |
   | +1 | compile | 954 | trunk passed |
   | +1 | checkstyle | 183 | trunk passed |
   | +1 | mvnsite | 218 | trunk passed |
   | +1 | shadedclient | 1095 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 219 | trunk passed |
   | +1 | javadoc | 170 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | +1 | mvninstall | 155 | the patch passed |
   | +1 | compile | 926 | the patch passed |
   | +1 | javac | 926 | the patch passed |
   | +1 | checkstyle | 192 | the patch passed |
   | +1 | mvnsite | 210 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 631 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 251 | the patch passed |
   | +1 | javadoc | 170 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 76 | common in the patch passed. |
   | -1 | unit | 80 | container-service in the patch failed. |
   | +1 | unit | 32 | tools in the patch passed. |
   | -1 | unit | 1201 | integration-test in the patch failed. |
   | +1 | unit | 60 | ozone-manager in the patch passed. |
   | +1 | asflicense | 53 | The patch does not generate ASF License warnings. |
   | | | 7867 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.TestDatanodeStateMachine |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-713/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/713 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 8ff0e5f4c43b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / bbdbc7a |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-713/3/artifact/out/patch-unit-hadoop-hdds_container-service.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-713/3/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-713/3/testReport/ |
   | Max. process+thread count | 4418 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service 
hadoop-hdds/tools hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . 
|
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-713/3/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] nandakumar131 merged pull request #691: [HDDS-1363] ozone.metadata.dirs doesn't pick multiple dirs

2019-04-12 Thread GitBox
nandakumar131 merged pull request #691: [HDDS-1363] ozone.metadata.dirs doesn't 
pick multiple dirs
URL: https://github.com/apache/hadoop/pull/691
 
 
   





[GitHub] [hadoop] nandakumar131 commented on issue #711: HDDS-1368. Cleanup old ReplicationManager code from SCM.

2019-04-12 Thread GitBox
nandakumar131 commented on issue #711: HDDS-1368. Cleanup old 
ReplicationManager code from SCM.
URL: https://github.com/apache/hadoop/pull/711#issuecomment-482465167
 
 
   /retest





[GitHub] [hadoop] nandakumar131 commented on a change in pull request #612: HDDS-1285. Implement actions need to be taken after chill mode exit w…

2019-04-12 Thread GitBox
nandakumar131 commented on a change in pull request #612: HDDS-1285. Implement 
actions need to be taken after chill mode exit w…
URL: https://github.com/apache/hadoop/pull/612#discussion_r274783720
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/chillmode/TestSCMChillModeWithPipelineRules.java
 ##
 @@ -0,0 +1,203 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.chillmode;
+
+import org.apache.hadoop.hdds.HddsConfigKeys;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.hdds.scm.container.ReplicationManager;
+import 
org.apache.hadoop.hdds.scm.container.replication.ReplicationActivityStatus;
+import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
+import org.apache.hadoop.hdds.scm.pipeline.PipelineManager;
+import org.apache.hadoop.hdds.scm.server.StorageContainerManager;
+import org.apache.hadoop.ozone.MiniOzoneCluster;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+import java.util.List;
+import java.util.concurrent.TimeoutException;
+
+import static org.junit.Assert.fail;
+
+/**
+ * This class tests SCM Chill mode with pipeline rules.
+ */
+
+public class TestSCMChillModeWithPipelineRules {
+
+  private static MiniOzoneCluster cluster;
+  private OzoneConfiguration conf = new OzoneConfiguration();
+  private PipelineManager pipelineManager;
+  private MiniOzoneCluster.Builder clusterBuilder;
+
+  @Rule
+  public TemporaryFolder temporaryFolder = new TemporaryFolder();
+
+  public void setup(int numDatanodes) throws Exception {
+conf.set(HddsConfigKeys.OZONE_METADATA_DIRS,
+temporaryFolder.newFolder().toString());
+conf.setBoolean(
+HddsConfigKeys.HDDS_SCM_CHILLMODE_PIPELINE_AVAILABILITY_CHECK,
+true);
+conf.set(HddsConfigKeys.HDDS_SCM_WAIT_TIME_AFTER_CHILL_MODE_EXIT, "10s");
+conf.set(ScmConfigKeys.OZONE_SCM_PIPELINE_CREATION_INTERVAL, "10s");
+clusterBuilder = MiniOzoneCluster.newBuilder(conf)
+.setNumDatanodes(numDatanodes)
+.setHbInterval(1000)
+.setHbProcessorInterval(1000);
+
+cluster = clusterBuilder.build();
+cluster.waitForClusterToBeReady();
+StorageContainerManager scm = cluster.getStorageContainerManager();
+pipelineManager = scm.getPipelineManager();
+  }
+
+
+  @Test
+  public void testScmChillMode() throws Exception {
+
+int datanodeCount = 6;
+setup(datanodeCount);
+
+waitForRatis3NodePipelines(datanodeCount/3);
+waitForRatis1NodePipelines(datanodeCount);
+
+int totalPipelineCount = datanodeCount + (datanodeCount/3);
+
+//Cluster is started successfully
+cluster.stop();
+
+cluster.restartOzoneManager();
+cluster.restartStorageContainerManager(false);
+
+pipelineManager = 
cluster.getStorageContainerManager().getPipelineManager();
+List<Pipeline> pipelineList =
+pipelineManager.getPipelines(HddsProtos.ReplicationType.RATIS,
+HddsProtos.ReplicationFactor.THREE);
+
+
+pipelineList.get(0).getNodes().forEach(datanodeDetails -> {
+  try {
+cluster.restartHddsDatanode(datanodeDetails, false);
+  } catch (Exception ex) {
+fail("Datanode restart failed");
+  }
+});
+
+
+SCMChillModeManager scmChillModeManager =
+cluster.getStorageContainerManager().getScmChillModeManager();
+
+
+// Ceil(0.1 * 2) is 1; as one pipeline is healthy, the healthy pipeline
+// rule is satisfied.
+
+GenericTestUtils.waitFor(() ->
+scmChillModeManager.getHealthyPipelineChillModeRule()
+.validate(), 1000, 6);
+
+// As Ceil(0.9 * 2) is 2, and from the second pipeline no datanodes are
+// reported, this rule is not met yet.
+GenericTestUtils.waitFor(() ->
+!scmChillModeManager.getOneReplicaPipelineChillModeRule()
+.validate(), 1000, 6);
+
+Assert.assertTrue(cluster.getStorageContainerMa