[jira] [Commented] (HADOOP-16059) Use SASL Factories Cache to Improve Performance

2019-04-24 Thread Vinayakumar B (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825718#comment-16825718
 ] 

Vinayakumar B commented on HADOOP-16059:


Changes are pretty straightforward.

Similar changes already exist for SaslRpcServer, and the same approach has been 
re-used, so there is no impact to any functionality IMO.

+1, patch v4 LGTM.

Will wait a few more days before committing.

[~jojochuang], please take a look at the profiler screenshots.

> Use SASL Factories Cache to Improve Performance
> ---
>
> Key: HADOOP-16059
> URL: https://issues.apache.org/jira/browse/HADOOP-16059
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Critical
> Attachments: After-Dn.png, After-Read.png, After-Server.png, 
> After-write.png, Before-DN.png, Before-Read.png, Before-Server.png, 
> Before-Write.png, HADOOP-16059-01.patch, HADOOP-16059-02.patch, 
> HADOOP-16059-02.patch, HADOOP-16059-03.patch, HADOOP-16059-04.patch
>
>
> SASL client factories can be cached, and the SASL server and client factories 
> can be extended together at SaslParticipant, to improve performance.
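
For context, a minimal sketch of the caching idea (illustrative only, not the 
actual patch): enumerating the registered SASL client factories once and reusing 
them avoids the provider lookup that Sasl.createSaslClient() performs on every 
call, which is the kind of cost the attached profiler screenshots illustrate.

``` java
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;
import java.util.Map;
import javax.security.auth.callback.CallbackHandler;
import javax.security.sasl.Sasl;
import javax.security.sasl.SaslClient;
import javax.security.sasl.SaslClientFactory;
import javax.security.sasl.SaslException;

public final class CachedSaslClientFactory {
  // Enumerate the registered factories once; Sasl.createSaslClient re-walks
  // the security providers on every call.
  private static final List<SaslClientFactory> FACTORIES = new ArrayList<>();
  static {
    Enumeration<SaslClientFactory> e = Sasl.getSaslClientFactories();
    while (e.hasMoreElements()) {
      FACTORIES.add(e.nextElement());
    }
  }

  public static SaslClient create(String[] mechanisms, String authorizationId,
      String protocol, String serverName, Map<String, ?> props,
      CallbackHandler cbh) throws SaslException {
    for (SaslClientFactory factory : FACTORIES) {
      SaslClient client = factory.createSaslClient(
          mechanisms, authorizationId, protocol, serverName, props, cbh);
      if (client != null) {
        return client;  // first factory supporting a requested mechanism wins
      }
    }
    return null;  // no registered factory supports the requested mechanisms
  }
}
```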






[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #769: HDDS-1456. Stop the datanode, when any datanode statemachine state is…

2019-04-24 Thread GitBox
bharatviswa504 commented on a change in pull request #769: HDDS-1456. Stop the 
datanode, when any datanode statemachine state is…
URL: https://github.com/apache/hadoop/pull/769#discussion_r278392038
 
 

 File path: hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java

 @@ -195,6 +199,14 @@ private void start() throws IOException {
         LOG.error("Unable to finish the execution.", e);
       }
     }
+
+    // If we have got some exception in stateMachine we set the state to
+    // shutdown to stop the stateMachine thread. Along with this we should
+    // also stop the datanode.
+    if (context.getShutdownStateMachine()) {
 
 Review comment:
   If this check is true, it means the state has transitioned to shutdown; we 
call stop() and terminate, so the datanode process is terminated.
   





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #769: HDDS-1456. Stop the datanode, when any datanode statemachine state is…

2019-04-24 Thread GitBox
bharatviswa504 commented on a change in pull request #769: HDDS-1456. Stop the 
datanode, when any datanode statemachine state is…
URL: https://github.com/apache/hadoop/pull/769#discussion_r278391703
 
 

 File path: hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/StateContext.java

 @@ -73,6 +73,7 @@
   private final Queue<ContainerAction> containerActions;
   private final Queue<PipelineAction> pipelineActions;
   private DatanodeStateMachine.DatanodeStates state;
+  private boolean shutdownStateMachine = false;
 
 Review comment:
   This additional check was added so that we shut down only when the state has 
been changed to the shutdown state by executing one of the datanode state tasks.





[jira] [Commented] (HADOOP-16205) Backporting ABFS driver from trunk to branch 2.0

2019-04-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825688#comment-16825688
 ] 

Hadoop QA commented on HADOOP-16205:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 18m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 87 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
42s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
45s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
16s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
50s{color} | {color:green} branch-2 passed with JDK v1.8.0_191 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
58s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 
15s{color} | {color:green} branch-2 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project . {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
57s{color} | {color:red} hadoop-common-project/hadoop-common in branch-2 has 1 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
25s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
34s{color} | {color:green} branch-2 passed with JDK v1.8.0_191 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
8s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 12m  8s{color} 
| {color:red} root-jdk1.7.0_95 with JDK v1.7.0_95 generated 4 new + 1440 
unchanged - 3 fixed = 1444 total (was 1443) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
11s{color} | {color:green} the patch passed with JDK v1.8.0_191 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 12m 11s{color} 
| {color:red} root-jdk1.8.0_191 with JDK v1.8.0_191 generated 4 new + 1342 
unchanged - 3 fixed = 1346 total (was 1345) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 13s{color} | {color:orange} root: The patch generated 23 new + 125 unchanged 
- 5 fixed = 148 total (was 130) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 12m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} xml {color} | {color:red}  0m  8s{color} | 
{color:red} The patch has 1 ill-formed XML file(s). {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  8m  
4s{color} | {color:red} root-jdk1.7.0_95 with JDK v1.7.0_95 generated 1 new + 
60 unchanged - 0 fixed = 61 total (was 60) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m  
7s{color} | {color:green} the patch passed with JDK v1.8.0_191 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}127m 30s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} 

[GitHub] [hadoop] hunshenshi opened a new pull request #770: HDFS-14456:HAState#prepareToEnterState neednt a lock

2019-04-24 Thread GitBox
hunshenshi opened a new pull request #770: 
HDFS-14456:HAState#prepareToEnterState neednt a lock
URL: https://github.com/apache/hadoop/pull/770
 
 
   prepareToEnterState in HAState is documented as being called without the 
context being locked, but in NameNode#NameNode it is invoked after 
haContext.writeLock():
   

   ``` java
   try {
     haContext.writeLock();
     state.prepareToEnterState(haContext);
     state.enterState(haContext);
   } finally {
     haContext.writeUnlock();
   }
   ```
   
   Is it OK?
   
   





[jira] [Commented] (HADOOP-16266) Add more fine-grained processing time metrics to the RPC layer

2019-04-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825662#comment-16825662
 ] 

Hadoop QA commented on HADOOP-16266:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
5s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
58s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m  
9s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 21s{color} | {color:orange} root: The patch generated 8 new + 307 unchanged 
- 6 fixed = 315 total (was 313) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 38s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 59s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 6s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}230m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ipc.TestIPCServerResponder |
|   | hadoop.ipc.TestProtoBufRpc |
|   | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
|   | hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-16266 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12966955/HADOOP-16266.003.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d40eae0215d6 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon Mar 
18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| 

[GitHub] [hadoop] hadoop-yetus commented on issue #746: HDDS-1442. add spark container to ozonesecure-mr compose files. Contributed by Ajay Kumar.

2019-04-24 Thread GitBox
hadoop-yetus commented on issue #746: HDDS-1442. add spark container to 
ozonesecure-mr compose files. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/746#issuecomment-486494217
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 532 | Docker mode activated. |
   ||| _ Prechecks _ |
   | 0 | yamllint | 0 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1150 | trunk passed |
   | +1 | compile | 77 | trunk passed |
   | +1 | mvnsite | 30 | trunk passed |
   | +1 | shadedclient | 684 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 20 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 20 | dist in the patch failed. |
   | +1 | compile | 20 | the patch passed |
   | +1 | javac | 20 | the patch passed |
   | +1 | mvnsite | 24 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | shelldocs | 14 | The patch generated 0 new + 104 unchanged - 132 
fixed = 104 total (was 236) |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 743 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 17 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 23 | dist in the patch passed. |
   | +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
   | | | 3518 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-746/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/746 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  yamllint  shellcheck  shelldocs  |
   | uname | Linux 59831dd396ef 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / a703dae |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | shellcheck | v0.4.6 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-746/2/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-746/2/testReport/ |
   | Max. process+thread count | 446 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-746/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] arp7 commented on a change in pull request #769: HDDS-1456. Stop the datanode, when any datanode statemachine state is…

2019-04-24 Thread GitBox
arp7 commented on a change in pull request #769: HDDS-1456. Stop the datanode, 
when any datanode statemachine state is…
URL: https://github.com/apache/hadoop/pull/769#discussion_r278374738
 
 

 File path: hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java

 @@ -93,7 +95,9 @@
    * enabled
    */
   public DatanodeStateMachine(DatanodeDetails datanodeDetails,
-      Configuration conf, CertificateClient certClient) throws IOException {
+      Configuration conf, CertificateClient certClient,
 
 Review comment:
   It also feels unfortunate that we have to pass this back-reference; the 
StateMachine should not really know about HddsDatanodeService, and now we have 
a circular dependency.
   
   Can we instead pass a callback, via a method reference, that is invoked when 
termination is necessary?
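
   A possible shape for that suggestion (all names below are illustrative, not 
the actual Ozone classes): the state machine takes a termination callback 
instead of a reference back to the service.

   ``` java
   // Illustrative sketch only; names are hypothetical.
   public class StateMachine {
     private final Runnable terminationCallback;

     public StateMachine(Runnable terminationCallback) {
       this.terminationCallback = terminationCallback;
     }

     void onFatalError() {
       // No back-reference to the owning service; just invoke the callback.
       terminationCallback.run();
     }
   }

   // Wiring at the service side via a method reference, e.g.:
   //   StateMachine sm = new StateMachine(service::terminateDatanode);
   ```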





[GitHub] [hadoop] arp7 commented on a change in pull request #769: HDDS-1456. Stop the datanode, when any datanode statemachine state is…

2019-04-24 Thread GitBox
arp7 commented on a change in pull request #769: HDDS-1456. Stop the datanode, 
when any datanode statemachine state is…
URL: https://github.com/apache/hadoop/pull/769#discussion_r278373345
 
 

 File path: hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/StateContext.java

 @@ -73,6 +73,7 @@
   private final Queue<ContainerAction> containerActions;
   private final Queue<PipelineAction> pipelineActions;
   private DatanodeStateMachine.DatanodeStates state;
+  private boolean shutdownStateMachine = false;
 
 Review comment:
   Can we avoid this extra flag? Instead, can `getShutdownStateMachine` just 
check whether `this.state == DatanodeStateMachine.DatanodeStates.SHUTDOWN`?
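
   If the flag is dropped, the getter could simply derive the answer from the 
existing state field, e.g. (sketch of the check proposed above):

   ``` java
   // Sketch: derive shutdown from the existing state instead of a separate flag.
   public boolean getShutdownStateMachine() {
     return state == DatanodeStateMachine.DatanodeStates.SHUTDOWN;
   }
   ```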





[GitHub] [hadoop] arp7 commented on a change in pull request #769: HDDS-1456. Stop the datanode, when any datanode statemachine state is…

2019-04-24 Thread GitBox
arp7 commented on a change in pull request #769: HDDS-1456. Stop the datanode, 
when any datanode statemachine state is…
URL: https://github.com/apache/hadoop/pull/769#discussion_r278373630
 
 

 File path: hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java

 @@ -195,6 +199,14 @@ private void start() throws IOException {
         LOG.error("Unable to finish the execution.", e);
       }
     }
+
+    // If we have got some exception in stateMachine we set the state to
+    // shutdown to stop the stateMachine thread. Along with this we should
+    // also stop the datanode.
+    if (context.getShutdownStateMachine()) {
 
 Review comment:
   What happens if we transition to the shutdown state after this check? Is 
process shutdown still done then?





[GitHub] [hadoop] arp7 commented on issue #725: HDDS-1422. Exception during DataNode shutdown. Contributed by Arpit A…

2019-04-24 Thread GitBox
arp7 commented on issue #725: HDDS-1422. Exception during DataNode shutdown. 
Contributed by Arpit A…
URL: https://github.com/apache/hadoop/pull/725#issuecomment-486486914
 
 
   /retest





[GitHub] [hadoop] arp7 commented on a change in pull request #714: HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.

2019-04-24 Thread GitBox
arp7 commented on a change in pull request #714: HDDS-1406. Avoid usage of 
commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r278371736
 
 

 File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineUtils.java

 @@ -146,19 +164,28 @@ private static void callRatisRpc(List<DatanodeDetails> datanodes,
         SecurityConfig(ozoneConf));
     final TimeDuration requestTimeout =
         RatisHelper.getClientRequestTimeout(ozoneConf);
-    datanodes.parallelStream().forEach(d -> {
-      final RaftPeer p = RatisHelper.toRaftPeer(d);
-      try (RaftClient client = RatisHelper
-          .newRaftClient(SupportedRpcType.valueOfIgnoreCase(rpcType), p,
-              retryPolicy, maxOutstandingRequests, tlsConfig, requestTimeout)) {
-        rpc.accept(client, p);
-      } catch (IOException ioe) {
-        String errMsg =
-            "Failed invoke Ratis rpc " + rpc + " for " + d.getUuid();
-        LOG.error(errMsg, ioe);
-        exceptions.add(new IOException(errMsg, ioe));
-      }
-    });
+    try {
+      POOL.submit(() -> {
+        datanodes.parallelStream().forEach(d -> {
+          final RaftPeer p = RatisHelper.toRaftPeer(d);
+          try (RaftClient client = RatisHelper
+              .newRaftClient(SupportedRpcType.valueOfIgnoreCase(rpcType), p,
+                  retryPolicy, maxOutstandingRequests, tlsConfig,
+                  requestTimeout)) {
+            rpc.accept(client, p);
+          } catch (IOException ioe) {
+            String errMsg =
+                "Failed invoke Ratis rpc " + rpc + " for " + d.getUuid();
+            LOG.error(errMsg, ioe);
+            exceptions.add(new IOException(errMsg, ioe));
+          }
+        });
+      }).get();
+    } catch (ExecutionException ex) {
 
 Review comment:
   What about RejectedExecutionException? We should probably catch that also.





[GitHub] [hadoop] arp7 commented on a change in pull request #714: HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.

2019-04-24 Thread GitBox
arp7 commented on a change in pull request #714: HDDS-1406. Avoid usage of 
commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r278372087
 
 

 File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineUtils.java

 @@ -146,19 +164,28 @@ private static void callRatisRpc(List<DatanodeDetails> datanodes,
         SecurityConfig(ozoneConf));
     final TimeDuration requestTimeout =
         RatisHelper.getClientRequestTimeout(ozoneConf);
-    datanodes.parallelStream().forEach(d -> {
-      final RaftPeer p = RatisHelper.toRaftPeer(d);
-      try (RaftClient client = RatisHelper
-          .newRaftClient(SupportedRpcType.valueOfIgnoreCase(rpcType), p,
-              retryPolicy, maxOutstandingRequests, tlsConfig, requestTimeout)) {
-        rpc.accept(client, p);
-      } catch (IOException ioe) {
-        String errMsg =
-            "Failed invoke Ratis rpc " + rpc + " for " + d.getUuid();
-        LOG.error(errMsg, ioe);
-        exceptions.add(new IOException(errMsg, ioe));
-      }
-    });
+    try {
+      POOL.submit(() -> {
+        datanodes.parallelStream().forEach(d -> {
+          final RaftPeer p = RatisHelper.toRaftPeer(d);
+          try (RaftClient client = RatisHelper
+              .newRaftClient(SupportedRpcType.valueOfIgnoreCase(rpcType), p,
+                  retryPolicy, maxOutstandingRequests, tlsConfig,
+                  requestTimeout)) {
+            rpc.accept(client, p);
+          } catch (IOException ioe) {
+            String errMsg =
+                "Failed invoke Ratis rpc " + rpc + " for " + d.getUuid();
+            LOG.error(errMsg, ioe);
+            exceptions.add(new IOException(errMsg, ioe));
+          }
+        });
+      }).get();
+    } catch (ExecutionException ex) {
 
 Review comment:
   We should also not swallow the ExecutionException and InterruptedException. 
Instead, convert them to something like a PipelineCreationException which 
extends IOException; then createPipeline can catch and handle it.
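
   A sketch of that translation (PipelineCreationException is the name proposed 
above, not an existing class; the pool and task wiring are simplified):

   ``` java
   import java.io.IOException;
   import java.util.concurrent.ExecutionException;
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.RejectedExecutionException;

   // Proposed exception type; illustrative, does not exist in the codebase yet.
   class PipelineCreationException extends IOException {
     PipelineCreationException(String message, Throwable cause) {
       super(message, cause);
     }
   }

   class CallRatisRpcSketch {
     static void callRatisRpc(ExecutorService pool, Runnable rpcWork)
         throws PipelineCreationException {
       try {
         pool.submit(rpcWork).get();
       } catch (ExecutionException | RejectedExecutionException ex) {
         // Surface the failure to the caller instead of swallowing it.
         throw new PipelineCreationException("Failed to invoke Ratis rpc", ex);
       } catch (InterruptedException ex) {
         Thread.currentThread().interrupt(); // restore interrupt status
         throw new PipelineCreationException("Interrupted invoking Ratis rpc", ex);
       }
     }
   }
   ```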





[GitHub] [hadoop] arp7 commented on a change in pull request #714: HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.

2019-04-24 Thread GitBox
arp7 commented on a change in pull request #714: HDDS-1406. Avoid usage of 
commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r278372274
 
 

 File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java

 @@ -1010,6 +1011,9 @@ public void stop() {
     } catch (Exception ex) {
       LOG.error("SCM Metadata store stop failed", ex);
     }
+
+    // shutdown RatisPipelineUtils pool.
+    RatisPipelineUtils.POOL.shutdown();
 
 Review comment:
   shutdown() will wait for previously submitted tasks to complete. You can 
probably call shutdownNow(), since we don't care about task completion during 
shutdown.
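
   For reference, the distinction being relied on here is the standard 
java.util.concurrent.ExecutorService contract:

   ``` java
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.Executors;

   class ShutdownSemantics {
     public static void main(String[] args) {
       ExecutorService pool = Executors.newFixedThreadPool(2);
       pool.shutdown();      // stop accepting tasks; queued/running tasks still finish
       // pool.shutdownNow(); // would also interrupt workers and discard queued tasks
     }
   }
   ```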





[jira] [Commented] (HADOOP-16263) Update BUILDING.txt with macOS native build instructions

2019-04-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825620#comment-16825620
 ] 

Hadoop QA commented on HADOOP-16263:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
52s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-16263 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12966961/HADOOP-16263.001.patch
 |
| Optional Tests |  dupname  asflicense  |
| uname | Linux a881e0f9cb41 4.4.0-141-generic #167~14.04.1-Ubuntu SMP Mon Dec 
10 13:20:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a703dae |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 313 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16189/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Update BUILDING.txt with macOS native build instructions
> 
>
> Key: HADOOP-16263
> URL: https://issues.apache.org/jira/browse/HADOOP-16263
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Minor
> Attachments: HADOOP-16263.001.patch
>
>
> I recently tried to compile Hadoop native on a Mac and found a few catches, 
> involving fixing some YARN native compiling issues (YARN-8622, YARN-9487).
> Also, need to specify OpenSSL (brewed) header include dir when building 
> native with maven on a Mac. Should update BUILDING.txt for this.






[GitHub] [hadoop] hadoop-yetus commented on issue #654: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename

2019-04-24 Thread GitBox
hadoop-yetus commented on issue #654: HADOOP-15183 S3Guard store becomes 
inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/654#issuecomment-486479013
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 54 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 8 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 318 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1228 | trunk passed |
   | +1 | compile | 986 | trunk passed |
   | +1 | checkstyle | 151 | trunk passed |
   | +1 | mvnsite | 123 | trunk passed |
   | +1 | shadedclient | 1059 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 156 | trunk passed |
   | +1 | javadoc | 95 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 20 | Maven dependency ordering for patch |
   | +1 | mvninstall | 76 | the patch passed |
   | +1 | compile | 931 | the patch passed |
   | -1 | javac | 931 | root generated 2 new + 1481 unchanged - 0 fixed = 1483 
total (was 1481) |
   | -0 | checkstyle | 183 | root: The patch generated 25 new + 40 unchanged - 
0 fixed = 65 total (was 40) |
   | +1 | mvnsite | 123 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 742 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | findbugs | 60 | hadoop-tools/hadoop-aws generated 1 new + 0 unchanged 
- 0 fixed = 1 total (was 0) |
   | -1 | javadoc | 31 | hadoop-tools_hadoop-aws generated 1 new + 1 unchanged 
- 0 fixed = 2 total (was 1) |
   ||| _ Other Tests _ |
   | +1 | unit | 548 | hadoop-common in the patch passed. |
   | +1 | unit | 294 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 7398 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-aws |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.fs.s3a.S3AFileSystem.directoryAllocator; locked 75% of time  
Unsynchronized access at S3AFileSystem.java:75% of time  Unsynchronized access 
at S3AFileSystem.java:[line 2373] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/654 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
   | uname | Linux 457afd609ae2 4.4.0-141-generic #167~14.04.1-Ubuntu SMP Mon 
Dec 10 13:20:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / a703dae |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/10/artifact/out/diff-compile-javac-root.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/10/artifact/out/diff-checkstyle-root.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/10/artifact/out/new-findbugs-hadoop-tools_hadoop-aws.html
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/10/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/10/testReport/ |
   | Max. process+thread count | 1471 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/10/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Comment Edited] (HADOOP-16241) S3AInputStream PositionReadable should perform ranged read on dedicated stream

2019-04-24 Thread Sahil Takiar (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825610#comment-16825610
 ] 

Sahil Takiar edited comment on HADOOP-16241 at 4/25/19 12:48 AM:
-

Thanks for taking a look at this [~ste...@apache.org], always appreciate any 
feedback. I'm out of town at the moment, so won't be able to respond in full, 
but I have the Impala traces. I set the S3A log level to DEBUG, which includes 
logging of the stream stats when the file stream is closed.

The file {{Impala-web_returns-scan-logs.txt}} contains the impalad logs for a 
full table scan of a table in a 10 TB Parquet dataset on S3. The table is 
partitioned and is 58.1 GB. The actual files are anywhere from less than 1 MB 
to ~20 MB.

The file {{Impala-store_returns-scan-logs.txt}} contains the impalad logs for a 
full table scan of a table in a 10 TB Parquet dataset on S3. The table is 
partitioned and is 167 GB. The actual files are anywhere from less than 1 MB to 
~85 MB.

Some background on how Impala scans data from S3: Impala doesn't perform any 
backwards seeks (the only time it ever seeks is right after it opens a file), 
which is why {{BackwardSeekOperations}} is always 0. All scans (including 
footer scans) are done on a dedicated file handle, and then the file handle is 
closed.

I think the fundamental issue with how Impala currently scans data is that, 
since fadvise = NORMAL by default and no backwards seeks are performed, the 
switch to fadvise = RANDOM never happens for Parquet. So each column chunk scan 
essentially requires opening an S3 file with the full content range requested, 
which eventually causes the HTTP connection to get reset when the file handle 
is closed.

I don't have the flamegraphs on hand, but can re-produce them. IIRC they show 
that, out of the box, when Impala is reading data it spends most of its time 
doing SSL connection establishment, probably because the connection almost 
always gets reset when the file handle is closed.
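
For reference, a minimal sketch of forcing random IO up front, so a workload 
that never seeks backwards does not depend on the NORMAL-to-RANDOM switch. The 
class and bucket name below are placeholders; {{fs.s3a.experimental.input.fadvise}} 
is the existing S3A option discussed in this thread.

``` java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RandomFadviseExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Start in random-IO mode instead of waiting for a backward seek
    // (which Impala never issues) to trigger the NORMAL -> RANDOM switch.
    conf.set("fs.s3a.experimental.input.fadvise", "random");
    // "s3a://bucket/" is a placeholder, not a real location.
    FileSystem fs = new Path("s3a://bucket/").getFileSystem(conf);
    System.out.println("Using " + fs.getUri() + " with random fadvise");
  }
}
```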



> S3AInputStream PositionReadable should perform ranged read on dedicated 
> stream 
> ---
>
> Key: HADOOP-16241
> URL: https://issues.apache.org/jira/browse/HADOOP-16241
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: Impala-TPCDS-scans.zip
>
>
> The current implementation of {{PositionReadable}} in {{S3AInputStream}} is 
> pretty close to the default implementation in {{FsInputStream}}.
> This JIRA proposes overriding the {{read(long position, byte[] buffer, int 
> offset, int length)}} method and re-implementing the {{readFully(long 
> position, byte[] buffer, int offset, int length)}} method in S3A.
> The new implementation would perform a "ranged read" on a dedicated object 
> stream (rather than the shared one). Prototypes have shown this to bring a 
> considerable performance improvement to readers who are only interested in 
> reading a random chunk of the file at a time (e.g. Impala, although I would 
> assume HBase would benefit from this as well).
> Setting {{fs.s3a.experimental.input.fadvise}} to {{RANDOM}} is helpful for 
> clients that rely on pread, but has a few drawbacks:
>  * Unless the client explicitly sets fadvise to RANDOM, they will get at 
> least one connection reset when the backwards seek is issued (after which 
> fadvise automatically switches to RANDOM)

[jira] [Updated] (HADOOP-16241) S3AInputStream PositionReadable should perform ranged read on dedicated stream

2019-04-24 Thread Sahil Takiar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HADOOP-16241:
--
Attachment: Impala-TPCDS-scans.zip

> S3AInputStream PositionReadable should perform ranged read on dedicated 
> stream 
> ---
>
> Key: HADOOP-16241
> URL: https://issues.apache.org/jira/browse/HADOOP-16241
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: Impala-TPCDS-scans.zip
>
>
> The current implementation of {{PositionReadable}} in {{S3AInputStream}} is 
> pretty close to the default implementation in {{FsInputStream}}.
> This JIRA proposes overriding the {{read(long position, byte[] buffer, int 
> offset, int length)}} method and re-implementing the {{readFully(long 
> position, byte[] buffer, int offset, int length)}} method in S3A.
> The new implementation would perform a "ranged read" on a dedicated object 
> stream (rather than the shared one). Prototypes have shown this to bring a 
> considerable performance improvement to readers who are only interested in 
> reading a random chunk of the file at a time (e.g. Impala, although I would 
> assume HBase would benefit from this as well).
> Setting {{fs.s3a.experimental.input.fadvise}} to {{RANDOM}} is helpful for 
> clients that rely on pread, but has a few drawbacks:
>  * Unless the client explicitly sets fadvise to RANDOM, they will get at 
> least one connection reset when the backwards seek is issued (after which 
> fadvise automatically switches to RANDOM)
>  * Data is only read in 64 kb chunks, so for a large read, several GET 
> requests must be issued to S3 to fetch the data; while the 64 kb chunk value 
> is configurable, it is hard to set a reasonable value for variable length 
> preads
>  * If the readahead value is too big, closing the input stream can take 
> considerable time because the stream has to be drained of data before it can 
> be closed
> The new implementation of {{PositionReadable}} would issue a 
> {{GetObjectRequest}} with the range specified by {{position}} and the size of 
> the given buffer. The data would be read from the {{S3ObjectInputStream}} and 
> then closed at the end of the method. This stream would be independent of the 
> {{wrappedStream}} currently maintained by S3A.
> This brings the following benefits:
>  * The {{PositionedReadable}} methods can be thread-safe without a 
> {{synchronized}} block, which allows clients to concurrently call pread 
> methods on the same {{S3AInputStream}} instance
>  * preads will request all the data at once rather than requesting it in 
> chunks via the readahead logic
>  * Avoids performing potentially expensive seeks when performing preads
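
For readers following the proposal above, a rough sketch of what such a ranged 
pread could look like against the AWS SDK v1 API that S3A uses. The class and 
method names here are illustrative, not the actual patch.

``` java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;
import java.io.IOException;
import java.io.InputStream;

class RangedPread {
  // Open a dedicated ranged GET per call, independent of any shared stream.
  static int pread(AmazonS3 s3, String bucket, String key,
      long position, byte[] buffer, int offset, int length) throws IOException {
    GetObjectRequest req = new GetObjectRequest(bucket, key)
        .withRange(position, position + length - 1);   // inclusive byte range
    try (S3Object object = s3.getObject(req);
         InputStream in = object.getObjectContent()) {
      int read = 0;
      while (read < length) {
        int n = in.read(buffer, offset + read, length - read);
        if (n < 0) {
          break;  // hit EOF before filling the buffer
        }
        read += n;
      }
      return read;
    }
  }
}
```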






[jira] [Commented] (HADOOP-16241) S3AInputStream PositionReadable should perform ranged read on dedicated stream

2019-04-24 Thread Sahil Takiar (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825610#comment-16825610
 ] 

Sahil Takiar commented on HADOOP-16241:
---

Thanks for taking a look at this [~ste...@apache.org], always appreciate any 
feedback. I'm out of town at the moment, so won't be able to respond in full, 
but I have the Impala traces. I set the S3A log level to DEBUG, which includes 
logging of the stream stats when the file stream is closed.

The file {{Impala-web_returns-scan-logs.txt}} contains the impalad logs for a 
full table scan of a table in a 10 TB Parquet dataset on S3. The table is 
partitioned and is 58.1 GB. The actual files are anywhere from less than 1 MB 
to ~20 MB.

The file {{Impala-store_returns-scan-logs.txt}} contains the impalad logs for a 
full table scan of a table in a 10 TB Parquet dataset on S3. The table is 
partitioned and is 167 GB. The actual files are anywhere from less than 1 MB to 
~85 MB.

Some background on how Impala scans data from S3: Impala doesn't perform any 
backwards seeks (the only time it ever seeks is right after it opens a file), 
which is why {{BackwardSeekOperations}} is always 0. All scans (including 
footer scans) are done on a dedicated file handle, and then the file handle is 
closed.

I think the fundamental issue with how Impala currently scans data is that, 
since fadvise = NORMAL by default and no backwards seeks are performed, the 
switch to fadvise = RANDOM never happens for Parquet. So each column chunk scan 
essentially requires opening an S3 file with the full content range requested, 
which eventually causes the HTTP connection to get reset when the file handle 
is closed.

> S3AInputStream PositionReadable should perform ranged read on dedicated 
> stream 
> ---
>
> Key: HADOOP-16241
> URL: https://issues.apache.org/jira/browse/HADOOP-16241
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
>
> The current implementation of {{PositionReadable}} in {{S3AInputStream}} is 
> pretty close to the default implementation in {{FsInputStream}}.
> This JIRA proposes overriding the {{read(long position, byte[] buffer, int 
> offset, int length)}} method and re-implementing the {{readFully(long 
> position, byte[] buffer, int offset, int length)}} method in S3A.
> The new implementation would perform a "ranged read" on a dedicated object 
> stream (rather than the shared one). Prototypes have shown this to bring a 
> considerable performance improvement to readers who are only interested in 
> reading a random chunk of the file at a time (e.g. Impala, although I would 
> assume HBase would benefit from this as well).
> Setting {{fs.s3a.experimental.input.fadvise}} to {{RANDOM}} is helpful for 
> clients that rely on pread, but has a few drawbacks:
>  * Unless the client explicitly sets fadvise to RANDOM, they will get at 
> least one connection reset when the backwards seek is issued (after which 
> fadvise automatically switches to RANDOM)
>  * Data is only read in 64 kb chunks, so for a large read, several GET 
> requests must be issued to S3 to fetch the data; while the 64 kb chunk value 
> is configurable, it is hard to set a reasonable value for variable length 
> preads
>  * If the readahead value is too big, closing the input stream can take 
> considerable time because the stream has to be drained of data before it can 
> be closed
> The new implementation of {{PositionReadable}} would issue a 
> {{GetObjectRequest}} with the range specified by {{position}} and the size of 
> the given buffer. The data would be read from the {{S3ObjectInputStream}} and 
> then closed at the end of the method. This stream would be independent of the 
> {{wrappedStream}} currently maintained by S3A.
> This brings the following benefits:
>  * The {{PositionedReadable}} methods can be thread-safe without a 
> {{synchronized}} block, which allows clients to concurrently call pread 
> methods on the same {{S3AInputStream}} instance
>  * preads will request all the data at once rather than requesting it in 
> chunks via the readahead logic
>  * Avoids performing potentially expensive seeks when performing preads






[jira] [Updated] (HADOOP-16263) Update BUILDING.txt with macOS native build instructions

2019-04-24 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-16263:

Attachment: HADOOP-16263.001.patch
Status: Patch Available  (was: Open)

Uploaded diff rev 001. Tested on trunk with a macOS 10.14.4 clean install.

> Update BUILDING.txt with macOS native build instructions
> 
>
> Key: HADOOP-16263
> URL: https://issues.apache.org/jira/browse/HADOOP-16263
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Minor
> Attachments: HADOOP-16263.001.patch
>
>
> I recently tried to compile Hadoop native on a Mac and found a few catches, 
> involving fixing some YARN native compiling issues (YARN-8622, YARN-9487).
> Also, need to specify OpenSSL (brewed) header include dir when building 
> native with maven on a Mac. Should update BUILDING.txt for this.






[GitHub] [hadoop] hadoop-yetus commented on issue #753: HDDS-1403. KeyOutputStream writes fails after max retries while writing to a closed container

2019-04-24 Thread GitBox
hadoop-yetus commented on issue #753: HDDS-1403. KeyOutputStream writes fails 
after max retries while writing to a closed container
URL: https://github.com/apache/hadoop/pull/753#issuecomment-486476360
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 71 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1136 | trunk passed |
   | +1 | compile | 966 | trunk passed |
   | +1 | checkstyle | 143 | trunk passed |
   | +1 | mvnsite | 185 | trunk passed |
   | +1 | shadedclient | 1109 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 231 | trunk passed |
   | +1 | javadoc | 148 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for patch |
   | -1 | mvninstall | 18 | client in the patch failed. |
   | -1 | mvninstall | 18 | objectstore-service in the patch failed. |
   | +1 | compile | 894 | the patch passed |
   | +1 | javac | 894 | the patch passed |
   | +1 | checkstyle | 184 | the patch passed |
   | -1 | mvnsite | 33 | objectstore-service in the patch failed. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 727 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | findbugs | 27 | objectstore-service in the patch failed. |
   | +1 | javadoc | 147 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 88 | common in the patch passed. |
   | +1 | unit | 34 | client in the patch passed. |
   | +1 | unit | 45 | common in the patch passed. |
   | -1 | unit | 30 | objectstore-service in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 6734 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-753/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/753 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
   | uname | Linux 9bb732da53b9 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed 
Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / a703dae |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-753/2/artifact/out/patch-mvninstall-hadoop-ozone_client.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-753/2/artifact/out/patch-mvninstall-hadoop-ozone_objectstore-service.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-753/2/artifact/out/patch-mvnsite-hadoop-ozone_objectstore-service.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-753/2/artifact/out/patch-findbugs-hadoop-ozone_objectstore-service.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-753/2/artifact/out/patch-unit-hadoop-ozone_objectstore-service.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-753/2/testReport/ |
   | Max. process+thread count | 336 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/client hadoop-ozone/common 
hadoop-ozone/objectstore-service U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-753/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16259) Distcp to set S3 Storage Class

2019-04-24 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825576#comment-16825576
 ] 

Steve Loughran commented on HADOOP-16259:
-

Hadoop trunk builds. That we know, at least with the version of Java 
8-something and maven (3.6.0) I am using.

A clean build ("mvn -T 1C clean install -DskipTests") from the root hadoop-trunk 
directory should complete within a few minutes (the first run will be the 
longest). Don't try using your IDE here, as it will probably be out of its 
depth. You'll also need an old copy of protobuf's protoc on your path.

JIRAs aren't the way to get help getting a build to compile. Subscribe to the 
common-...@hadoop.apache.org mailing list and ask there; more people will help. 

> Distcp to set S3 Storage Class
> --
>
> Key: HADOOP-16259
> URL: https://issues.apache.org/jira/browse/HADOOP-16259
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: hadoop-aws, tools/distcp
>Affects Versions: 2.8.4
>Reporter: Prakash Gopalsamy
>Priority: Minor
> Attachments: ENHANCE_HADOOP_DISTCP_FOR_CUSTOM_S3_STORAGE_CLASS.docx
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> The Hadoop distcp implementation doesn’t have properties to override the 
> storage class while transferring data to Amazon S3 storage, and it doesn’t 
> set any storage class itself. Due to this, all objects moved from a cluster 
> to S3 using Hadoop distcp are stored in the default storage class 
> “STANDARD”. Providing a new feature to override the default S3 storage 
> class through configuration properties would make it possible to upload 
> objects in other storage classes. I have come up with a design to implement 
> this feature in a design document and uploaded it to the JIRA. Kindly 
> review and let me know your suggestions.
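
For illustration, a minimal sketch of how such a configuration-driven override 
could be looked up on the client side; the key name "fs.s3a.storage.class" and 
the helper class are assumptions for this sketch, not an existing Hadoop 
property:

{code}
import org.apache.hadoop.conf.Configuration;

class StorageClassSketch {
  // Hypothetical key, for illustration only - no such property exists yet.
  static final String STORAGE_CLASS_KEY = "fs.s3a.storage.class";

  static String resolveStorageClass(Configuration conf) {
    // Fall back to S3's own default when the user sets nothing.
    return conf.get(STORAGE_CLASS_KEY, "STANDARD");
  }
}
{code}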



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16235) ABFS VersionedFileStatus to declare that it isEncrypted()

2019-04-24 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825575#comment-16825575
 ] 

Steve Loughran commented on HADOOP-16235:
-

[~iwasakims] the issue here is not whether the real state is exposed or not, 
but that the Yarn NM distributed cache will look for posix permissions on a 
resource to localise, and if the permissions say "world readable" all the way 
up the directory tree and !FileStatus.isEncrypted(), then the NM will try to 
download it to the shared cache.

if your user has submitted work where the credentials to access the store are 
in delegation tokens (HADOOP-1606) then the download will fail, as the NM will 
be running with the credentials and privileges of the VM, not the user.

This is about lying to the node manager to get it to not cache files.

Now, there is a cost here: things like big spark.tar.gz files won't be cached. 
Which is why for ABFS we should think about "maybe there's a way to fix the 
Distributed Cache" here, to control what's cached across users and what isn't, 
as the posix-level checks are obsolete. At the very least, if the NM can't 
download a resource to the cache, it should just leave it to user localization 
to sort it out.
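
For illustration, a minimal sketch of the constructor-level change the issue 
below describes, assuming the extended FileStatus constructor that takes the 
hasAcl/isEncrypted/isErasureCoded flags; the real VersionedFileStatus fields 
and constructor may differ:

{code}
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

// Sketch only: names are assumptions, not the actual VersionedFileStatus code.
class EncryptedFileStatusSketch extends FileStatus {
  EncryptedFileStatusSketch(long length, boolean isDir, int replication,
      long blockSize, long mtime, Path path, FsPermission perm,
      String owner, String group) {
    super(length, isDir, replication, blockSize, mtime, 0 /* atime */,
        perm, owner, group, null /* symlink */, path,
        false /* hasAcl */,
        true  /* isEncrypted: ABFS data is always encrypted at rest */,
        false /* isErasureCoded */);
  }
}
{code}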

> ABFS VersionedFileStatus to declare that it isEncrypted()
> -
>
> Key: HADOOP-16235
> URL: https://issues.apache.org/jira/browse/HADOOP-16235
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HADOOP-16235.001.patch
>
>
> Files in ABFS are always encrypted; have VersionedFileStatus.isEncrypted() 
> declare this, presumably just by changing the flag passed to the superclass's 
> constructor



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16269) ABFS: add listFileStatus with StartFrom

2019-04-24 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825573#comment-16825573
 ] 

Steve Loughran commented on HADOOP-16269:
-

bq. Not sure why checkstyle complains about public access for 
"Parameterized.Parameter" only in the new test; this is also used in other 
tests. 

sometimes it overreacts

> ABFS: add listFileStatus with StartFrom
> ---
>
> Key: HADOOP-16269
> URL: https://issues.apache.org/jira/browse/HADOOP-16269
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-16269-001.patch, HADOOP-16269-002.patch, 
> HADOOP-16269-003.patch
>
>
> Adding a listFileStatus that lists entries in a path starting from a given 
> entry name, in lexical order.
> This is added to AzureBlobFileSystemStore and won't be exposed at the FS 
> level API.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16266) Add more fine-grained processing time metrics to the RPC layer

2019-04-24 Thread Christopher Gregorian (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825561#comment-16825561
 ] 

Christopher Gregorian commented on HADOOP-16266:


Added a 3rd patch addressing [~csun]'s comments :) I chose to use nanos within 
ProcessingDetails to maintain high precision, but the metrics default to 
millis to maintain compatibility.
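
For illustration, a minimal sketch of that precision split, with assumed names 
rather than the patch's actual API:

{code}
import java.util.concurrent.TimeUnit;

// Accumulate in nanoseconds for precision; convert only when reporting,
// with metrics defaulting to milliseconds for compatibility.
class ProcessingDetailsSketch {
  private long processingNanos;

  void addProcessingTime(long nanos) {
    processingNanos += nanos;
  }

  long get(TimeUnit unit) {
    return unit.convert(processingNanos, TimeUnit.NANOSECONDS);
  }
}
{code}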

> Add more fine-grained processing time metrics to the RPC layer
> --
>
> Key: HADOOP-16266
> URL: https://issues.apache.org/jira/browse/HADOOP-16266
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Christopher Gregorian
>Assignee: Christopher Gregorian
>Priority: Minor
>  Labels: rpc
> Attachments: HADOOP-16266.001.patch, HADOOP-16266.002.patch, 
> HADOOP-16266.003.patch
>
>
> Splitting off of HDFS-14403 to track the first part: introduces more 
> fine-grained measuring of how a call's processing time is split up.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16266) Add more fine-grained processing time metrics to the RPC layer

2019-04-24 Thread Christopher Gregorian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christopher Gregorian updated HADOOP-16266:
---
Attachment: HADOOP-16266.003.patch

> Add more fine-grained processing time metrics to the RPC layer
> --
>
> Key: HADOOP-16266
> URL: https://issues.apache.org/jira/browse/HADOOP-16266
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Christopher Gregorian
>Assignee: Christopher Gregorian
>Priority: Minor
>  Labels: rpc
> Attachments: HADOOP-16266.001.patch, HADOOP-16266.002.patch, 
> HADOOP-16266.003.patch
>
>
> Splitting off of HDFS-14403 to track the first part: introduces more 
> fine-grained measuring of how a call's processing time is split up.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16205) Backporting ABFS driver from trunk to branch 2.0

2019-04-24 Thread Yuan Gao (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825553#comment-16825553
 ] 

Yuan Gao edited comment on HADOOP-16205 at 4/24/19 10:50 PM:
-

Patch 004:

Thank you [~ste...@apache.org] for reviewing the PR. I addressed the review 
comment and updated the PR.

Tests:

mvn -T 1C -Dparallel-tests=abfs clean verify

[INFO] Tests run: 35, Failures: 0, Errors: 0, Skipped: 0
[WARNING] Tests run: 333, Failures: 0, Errors: 0, Skipped: 21
[WARNING] Tests run: 151, Failures: 0, Errors: 0, Skipped: 15


was (Author: kowon2008):
Patch 004:

Thank you Steve Loughran for reviewing the PR. I addressed the review comment 
and updated the PR.

Tests:

mvn -T 1C -Dparallel-tests=abfs clean verify

[INFO] Tests run: 35, Failures: 0, Errors: 0, Skipped: 0
[WARNING] Tests run: 333, Failures: 0, Errors: 0, Skipped: 21
[WARNING] Tests run: 151, Failures: 0, Errors: 0, Skipped: 15

> Backporting ABFS driver from trunk to branch 2.0
> 
>
> Key: HADOOP-16205
> URL: https://issues.apache.org/jira/browse/HADOOP-16205
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.2
>Reporter: Esfandiar Manii
>Assignee: Yuan Gao
>Priority: Major
> Attachments: HADOOP-16205-branch-2-001.patch, 
> HADOOP-16205-branch-2-002.patch, HADOOP-16205-branch-2-003.patch, 
> HADOOP-16205-branch-2-004.patch
>
>
> Back porting ABFS driver from trunk to 2.0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15183) S3Guard store becomes inconsistent after partial failure of rename

2019-04-24 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1682#comment-1682
 ] 

Steve Loughran commented on HADOOP-15183:
-

Turns out the current code stores all files being renamed anyway, as the list 
of files to move is built up incrementally but only pushed to S3Guard at the 
end, irrespective of store size. This is a strategy doomed to fail at precisely 
the wrong point in a production application.

Also, in some tests, the time to update the DDB table is the bottleneck on 
deletion performance. 

We need to move to incremental DDB updates along with the deletes. Maybe, even 
with partial delete enabled, we should have each thread go: 
COPY->Update->DELETE->Update. It'd mean moving to one DELETE per file renamed, 
but except for small files, that won't be the big expense.
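
For illustration, a minimal sketch of that per-file COPY->Update->DELETE->Update 
loop; the two store interfaces are stand-ins for this sketch, not the real 
S3Guard classes:

{code}
import java.util.List;

class IncrementalRenameSketch {
  interface ObjectStore {            // stand-in for the S3 client
    void copy(String srcKey, String dstKey);
    void delete(String srcKey);
  }
  interface MetadataStore {          // stand-in for the S3Guard DDB table
    void put(String path);           // record the copied destination
    void delete(String path);        // tombstone the deleted source
  }

  static void rename(ObjectStore s3, MetadataStore ms,
      List<String> srcKeys, String srcDir, String dstDir) {
    for (String src : srcKeys) {
      String dst = dstDir + src.substring(srcDir.length());
      s3.copy(src, dst);             // COPY
      ms.put(dst);                   // Update immediately, not at the end
      s3.delete(src);                // DELETE: one delete per file renamed
      ms.delete(src);                // Update: tombstone the source right away
    }
  }
}
{code}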

> S3Guard store becomes inconsistent after partial failure of rename
> --
>
> Key: HADOOP-15183
> URL: https://issues.apache.org/jira/browse/HADOOP-15183
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-15183-001.patch, HADOOP-15183-002.patch, 
> org.apache.hadoop.fs.s3a.auth.ITestAssumeRole-output.txt
>
>
> If an S3A rename() operation fails partway through, such as when the user 
> doesn't have permissions to delete the source files after copying to the 
> destination, then the s3guard view of the world ends up inconsistent. In 
> particular the sequence
>  (assuming src/file* is a list of files file1...file10 and read only to 
> caller)
>
> # create file rename src/file1 dest/ ; expect AccessDeniedException in the 
> delete, dest/file1 will exist
> # delete file dest/file1
> # rename src/file* dest/  ; expect failure
> # list dest; you will not see dest/file1
> You will not see file1 in the listing, presumably because it will have a 
> tombstone marker and the update at the end of the rename() didn't take place: 
> the old data is still there.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16205) Backporting ABFS driver from trunk to branch 2.0

2019-04-24 Thread Yuan Gao (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825553#comment-16825553
 ] 

Yuan Gao commented on HADOOP-16205:
---

Patch 004:

Thank you Steve Loughran for reviewing the PR. I addressed the review comment 
and updated the PR.

Tests:

mvn -T 1C -Dparallel-tests=abfs clean verify

[INFO] Tests run: 35, Failures: 0, Errors: 0, Skipped: 0
[WARNING] Tests run: 333, Failures: 0, Errors: 0, Skipped: 21
[WARNING] Tests run: 151, Failures: 0, Errors: 0, Skipped: 15

> Backporting ABFS driver from trunk to branch 2.0
> 
>
> Key: HADOOP-16205
> URL: https://issues.apache.org/jira/browse/HADOOP-16205
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.2
>Reporter: Esfandiar Manii
>Assignee: Yuan Gao
>Priority: Major
> Attachments: HADOOP-16205-branch-2-001.patch, 
> HADOOP-16205-branch-2-002.patch, HADOOP-16205-branch-2-003.patch, 
> HADOOP-16205-branch-2-004.patch
>
>
> Back porting ABFS driver from trunk to 2.0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16205) Backporting ABFS driver from trunk to branch 2.0

2019-04-24 Thread Yuan Gao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuan Gao updated HADOOP-16205:
--
Attachment: HADOOP-16205-branch-2-004.patch

> Backporting ABFS driver from trunk to branch 2.0
> 
>
> Key: HADOOP-16205
> URL: https://issues.apache.org/jira/browse/HADOOP-16205
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.2
>Reporter: Esfandiar Manii
>Assignee: Yuan Gao
>Priority: Major
> Attachments: HADOOP-16205-branch-2-001.patch, 
> HADOOP-16205-branch-2-002.patch, HADOOP-16205-branch-2-003.patch, 
> HADOOP-16205-branch-2-004.patch
>
>
> Back porting ABFS driver from trunk to 2.0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] kowon2008 commented on a change in pull request #716: HADOOP-16205 Backporting ABFS driver from trunk to branch 2.0

2019-04-24 Thread GitBox
kowon2008 commented on a change in pull request #716: HADOOP-16205 Backporting 
ABFS driver from trunk to branch 2.0
URL: https://github.com/apache/hadoop/pull/716#discussion_r278345790
 
 

 ##
 File path: hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml
 ##
 @@ -1,7 +1,7 @@
 
 http://www.puppycrawl.com/dtds/configuration_1_2.dtd;>
+"-//Checkstyle//DTD Checkstyle Configuration 1.2//EN"
+"https://checkstyle.org/dtds/configuration_1_2.dtd;>
 
 Review comment:
   Ok. I have rebased and removed this commit.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #769: HDDS-1456. Stop the datanode, when any datanode statemachine state is…

2019-04-24 Thread GitBox
hadoop-yetus commented on issue #769: HDDS-1456. Stop the datanode, when any 
datanode statemachine state is…
URL: https://github.com/apache/hadoop/pull/769#issuecomment-486420491
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 22 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 306 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1040 | trunk passed |
   | +1 | compile | 1018 | trunk passed |
   | +1 | checkstyle | 137 | trunk passed |
   | +1 | mvnsite | 207 | trunk passed |
   | +1 | shadedclient | 1075 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 113 | trunk passed |
   | +1 | javadoc | 108 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for patch |
   | +1 | mvninstall | 105 | the patch passed |
   | +1 | compile | 966 | the patch passed |
   | +1 | javac | 966 | the patch passed |
   | +1 | checkstyle | 126 | the patch passed |
   | +1 | mvnsite | 111 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 608 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 116 | the patch passed |
   | +1 | javadoc | 70 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 60 | container-service in the patch failed. |
   | -1 | unit | 99 | server-scm in the patch failed. |
   | -1 | unit | 762 | integration-test in the patch failed. |
   | +1 | asflicense | 56 | The patch does not generate ASF License warnings. |
   | | | 7046 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.volume.TestVolumeSetDiskChecks |
   |   | hadoop.hdds.scm.container.TestReplicationManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-769/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/769 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux fe90552653d6 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / a703dae |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-769/1/artifact/out/patch-unit-hadoop-hdds_container-service.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-769/1/artifact/out/patch-unit-hadoop-hdds_server-scm.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-769/1/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-769/1/testReport/ |
   | Max. process+thread count | 5309 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service hadoop-hdds/server-scm 
hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-769/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16264) [JDK11] Track failing Hadoop unit tests

2019-04-24 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825472#comment-16825472
 ] 

Siyao Meng commented on HADOOP-16264:
-

Thanks for compiling the test class list [~giovanni.fumarola].

TestSFTPFileSystem has been fixed in HADOOP-15783. Thanks [~ajisakaa] for 
linking the jira.
TestCompressorDecompressor.testCompressorDecompressor isn't a JDK 11-specific 
error; it also failed on JDK 8. Possibly related jira: HADOOP-12610
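
For context, the TestSFTPFileSystem failure quoted in a later message 
(expected:<1556082701843> but was:<1556082701000>) is a precision mismatch: 
the SFTP server reports modification times at second granularity. A sketch of 
a second-granularity comparison, assuming that is the shape of the fix (the 
actual HADOOP-15783 change may differ):

{code}
// Truncate both sides to whole seconds before comparing.
long expectedMillis = 1556082701843L;
long actualMillis   = 1556082701000L;
boolean matches = (expectedMillis / 1000L) == (actualMillis / 1000L);  // true
{code}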

> [JDK11] Track failing Hadoop unit tests
> ---
>
> Key: HADOOP-16264
> URL: https://issues.apache.org/jira/browse/HADOOP-16264
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.1.2
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: test-run1.tgz, test-run2.log, test-run2.log.gz
>
>
> Although there is still a lot of work to do before we can compile Hadoop 
> with JDK 11 (HADOOP-15338), it is possible to compile Hadoop with JDK 8 and 
> run it (e.g. HDFS NN/DN, YARN NM/RM) on JDK 11 at this moment.
> But after compiling branch-3.1.2 with JDK 8, I ran unit tests with JDK 11 
> and there are a LOT of unit test failures (44 out of 96 maven projects 
> contain at least one unit test failure according to the maven reactor 
> summary). This may well indicate some functionalities are actually broken 
> on JDK 11. Some of them already have a jira number. Some of them might have 
> been fixed in 3.2.0. Some of them might share the same root cause.
> By definition, this jira should be part of HADOOP-15338. But the goal of 
> this one is just to keep track of unit test failures and (hopefully) 
> resolve all of them soon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16264) [JDK11] Track failing Hadoop unit tests

2019-04-24 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825457#comment-16825457
 ] 

Giovanni Matteo Fumarola commented on HADOOP-16264:
---

Thanks [~smeng]. There are a few classes to tackle:

TestMRJobsWithProfiler
TestTimelineReaderWebServicesHBaseStorage
TestTimelineAuthFilterForV2
TestMetricsInvariantChecker
TestCapacitySchedulerSchedulingRequestUpdate
TestTimelineWebServicesWithSSL
TestAuxServices
TestRouterWebHDFSContractAppend
TestHttpFSFWithSWebhdfsFileSystem
TestKMS
TestLogLevel
TestSnappyCompressorDecompressor
TestCompressorDecompressor
TestSFTPFileSystem

Let's focus on these classes; please check if there were fixes done in trunk.

> [JDK11] Track failing Hadoop unit tests
> ---
>
> Key: HADOOP-16264
> URL: https://issues.apache.org/jira/browse/HADOOP-16264
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.1.2
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: test-run1.tgz, test-run2.log, test-run2.log.gz
>
>
> Although there is still a lot of work to do before we can compile Hadoop 
> with JDK 11 (HADOOP-15338), it is possible to compile Hadoop with JDK 8 and 
> run it (e.g. HDFS NN/DN, YARN NM/RM) on JDK 11 at this moment.
> But after compiling branch-3.1.2 with JDK 8, I ran unit tests with JDK 11 
> and there are a LOT of unit test failures (44 out of 96 maven projects 
> contain at least one unit test failure according to the maven reactor 
> summary). This may well indicate some functionalities are actually broken 
> on JDK 11. Some of them already have a jira number. Some of them might have 
> been fixed in 3.2.0. Some of them might share the same root cause.
> By definition, this jira should be part of HADOOP-15338. But the goal of 
> this one is just to keep track of unit test failures and (hopefully) 
> resolve all of them soon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16264) [JDK11] Track failing Hadoop unit tests

2019-04-24 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-16264:

Attachment: test-run2.log

> [JDK11] Track failing Hadoop unit tests
> ---
>
> Key: HADOOP-16264
> URL: https://issues.apache.org/jira/browse/HADOOP-16264
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.1.2
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: test-run1.tgz, test-run2.log, test-run2.log.gz
>
>
> Although there is still a lot of work to do before we can compile Hadoop 
> with JDK 11 (HADOOP-15338), it is possible to compile Hadoop with JDK 8 and 
> run it (e.g. HDFS NN/DN, YARN NM/RM) on JDK 11 at this moment.
> But after compiling branch-3.1.2 with JDK 8, I ran unit tests with JDK 11 
> and there are a LOT of unit test failures (44 out of 96 maven projects 
> contain at least one unit test failure according to the maven reactor 
> summary). This may well indicate some functionalities are actually broken 
> on JDK 11. Some of them already have a jira number. Some of them might have 
> been fixed in 3.2.0. Some of them might share the same root cause.
> By definition, this jira should be part of HADOOP-15338. But the goal of 
> this one is just to keep track of unit test failures and (hopefully) 
> resolve all of them soon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16264) [JDK11] Track failing Hadoop unit tests

2019-04-24 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825454#comment-16825454
 ] 

Siyao Meng commented on HADOOP-16264:
-

[~giovanni.fumarola] Sure. I just uploaded the result of a new run in  
[^test-run2.log.gz]. After backporting HADOOP-12760 + HADOOP-15775 + 
HADOOP-16016 on top of branch-3.1.2, there are significantly fewer unit test 
failures already. Only 12 out of 96 maven projects have at least one unit test 
failure now. Here are the results containing the failed/erred unit tests and 
stack traces extracted from the full log:

{code}
[INFO] Results:
[INFO]
[ERROR] Failures:
[ERROR]   TestSFTPFileSystem.testGetModifyTime:329 expected:<1556082701843> but was:<1556082701000>
[ERROR]   TestCompressorDecompressor.testCompressorDecompressor:69  Expected to find 'testCompressorDecompressor error !!!' but got unexpected exception: java.lang.NullPointerException
at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:187)
at com.google.common.base.Joiner.toString(Joiner.java:532)
at com.google.common.base.Joiner.appendTo(Joiner.java:124)
at com.google.common.base.Joiner.appendTo(Joiner.java:181)
at com.google.common.base.Joiner.join(Joiner.java:237)
at com.google.common.base.Joiner.join(Joiner.java:226)
at com.google.common.base.Joiner.join(Joiner.java:253)
at org.apache.hadoop.io.compress.CompressDecompressTester$CompressionTestStrategy$2.assertCompression(CompressDecompressTester.java:329)
at org.apache.hadoop.io.compress.CompressDecompressTester.test(CompressDecompressTester.java:135)
at org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor(TestCompressorDecompressor.java:66)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)

[ERROR]   TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit:92  Expected to find 'testCompressorDecompressorWithExeedBufferLimit error !!!' but got unexpected exception: java.lang.NullPointerException
at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:187)
at com.google.common.base.Joiner.toString(Joiner.java:532)
at com.google.common.base.Joiner.appendTo(Joiner.java:124)
at com.google.common.base.Joiner.appendTo(Joiner.java:181)
at com.google.common.base.Joiner.join(Joiner.java:237)
at com.google.common.base.Joiner.join(Joiner.java:226)
at com.google.common.base.Joiner.join(Joiner.java:253)
at org.apache.hadoop.io.compress.CompressDecompressTester$CompressionTestStrategy$2.assertCompression(CompressDecompressTester.java:329)
at 

[GitHub] [hadoop] bharatviswa504 opened a new pull request #769: HDDS-1456. Stop the datanode, when any datanode statemachine state is…

2019-04-24 Thread GitBox
bharatviswa504 opened a new pull request #769: HDDS-1456. Stop the datanode, 
when any datanode statemachine state is…
URL: https://github.com/apache/hadoop/pull/769
 
 
   … set to shutdown.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16264) [JDK11] Track failing Hadoop unit tests

2019-04-24 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825434#comment-16825434
 ] 

Giovanni Matteo Fumarola commented on HADOOP-16264:
---

Thanks [~smeng]. Do you mind attaching the list without compressing the file?

> [JDK11] Track failing Hadoop unit tests
> ---
>
> Key: HADOOP-16264
> URL: https://issues.apache.org/jira/browse/HADOOP-16264
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.1.2
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: test-run1.tgz, test-run2.log.gz
>
>
> Although there is still a lot of work to do before we can compile Hadoop 
> with JDK 11 (HADOOP-15338), it is possible to compile Hadoop with JDK 8 and 
> run it (e.g. HDFS NN/DN, YARN NM/RM) on JDK 11 at this moment.
> But after compiling branch-3.1.2 with JDK 8, I ran unit tests with JDK 11 
> and there are a LOT of unit test failures (44 out of 96 maven projects 
> contain at least one unit test failure according to the maven reactor 
> summary). This may well indicate some functionalities are actually broken 
> on JDK 11. Some of them already have a jira number. Some of them might have 
> been fixed in 3.2.0. Some of them might share the same root cause.
> By definition, this jira should be part of HADOOP-15338. But the goal of 
> this one is just to keep track of unit test failures and (hopefully) 
> resolve all of them soon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #768: HADOOP-16269. ABFS: add listFileStatus with StartFrom.

2019-04-24 Thread GitBox
hadoop-yetus commented on issue #768: HADOOP-16269. ABFS: add listFileStatus 
with StartFrom.
URL: https://github.com/apache/hadoop/pull/768#issuecomment-486357295
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 33 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1036 | trunk passed |
   | +1 | compile | 33 | trunk passed |
   | +1 | checkstyle | 22 | trunk passed |
   | +1 | mvnsite | 34 | trunk passed |
   | +1 | shadedclient | 670 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 38 | trunk passed |
   | +1 | javadoc | 20 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 25 | the patch passed |
   | +1 | compile | 23 | the patch passed |
   | +1 | javac | 23 | the patch passed |
   | -0 | checkstyle | 14 | hadoop-tools/hadoop-azure: The patch generated 4 
new + 2 unchanged - 0 fixed = 6 total (was 2) |
   | +1 | mvnsite | 26 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 675 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 46 | the patch passed |
   | +1 | javadoc | 17 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 77 | hadoop-azure in the patch passed. |
   | +1 | asflicense | 23 | The patch does not generate ASF License warnings. |
   | | | 2901 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-768/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/768 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux e1f8804a2a8c 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / a703dae |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-768/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-768/1/testReport/ |
   | Max. process+thread count | 445 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-768/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16264) [JDK11] Track failing Hadoop unit tests

2019-04-24 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-16264:

Attachment: test-run2.log.gz

> [JDK11] Track failing Hadoop unit tests
> ---
>
> Key: HADOOP-16264
> URL: https://issues.apache.org/jira/browse/HADOOP-16264
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.1.2
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: test-run1.tgz, test-run2.log.gz
>
>
> Although there is still a lot of work to do before we can compile Hadoop 
> with JDK 11 (HADOOP-15338), it is possible to compile Hadoop with JDK 8 and 
> run it (e.g. HDFS NN/DN, YARN NM/RM) on JDK 11 at this moment.
> But after compiling branch-3.1.2 with JDK 8, I ran unit tests with JDK 11 
> and there are a LOT of unit test failures (44 out of 96 maven projects 
> contain at least one unit test failure according to the maven reactor 
> summary). This may well indicate some functionalities are actually broken 
> on JDK 11. Some of them already have a jira number. Some of them might have 
> been fixed in 3.2.0. Some of them might share the same root cause.
> By definition, this jira should be part of HADOOP-15338. But the goal of 
> this one is just to keep track of unit test failures and (hopefully) 
> resolve all of them soon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16269) ABFS: add listFileStatus with StartFrom

2019-04-24 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825352#comment-16825352
 ] 

Da Zhou commented on HADOOP-16269:
--

Thanks, I opened the PR here : https://github.com/apache/hadoop/pull/768
Not sure why checkstyle complains about public access for 
"Parameterized.Parameter" only in the new test; this is also used in other 
tests. If changed to private, the test won't work.
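
For reference, a minimal sketch of why the field has to stay public: JUnit's 
Parameterized runner injects @Parameter fields reflectively and fails the run 
on non-public ones, which is exactly what checkstyle then flags. The class and 
parameter names here are illustrative:

{code}
import java.util.Arrays;
import java.util.Collection;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;

@RunWith(Parameterized.class)
public class ListStartFromSketchTest {
  @Parameterized.Parameters
  public static Collection<Object[]> params() {
    return Arrays.asList(new Object[][] {{true}, {false}});
  }

  // Must be public: the Parameterized runner assigns it reflectively
  // and rejects private fields.
  @Parameterized.Parameter
  public boolean useSecureScheme;
}
{code}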

> ABFS: add listFileStatus with StartFrom
> ---
>
> Key: HADOOP-16269
> URL: https://issues.apache.org/jira/browse/HADOOP-16269
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-16269-001.patch, HADOOP-16269-002.patch, 
> HADOOP-16269-003.patch
>
>
> Adding a listFileStatus that lists entries in a path starting from a given 
> entry name, in lexical order.
> This is added to AzureBlobFileSystemStore and won't be exposed at the FS 
> level API.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16259) Distcp to set S3 Storage Class

2019-04-24 Thread Prakash Gopalsamy (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825350#comment-16825350
 ] 

Prakash Gopalsamy commented on HADOOP-16259:


Thanks for the reply [~kai33]. Got the point. I had misinterpreted the maven 
build error. While running the maven build, there is an error in the class 
HadoopKerberosName in the package 'org.apache.hadoop.security' for the variable 
'DEFAULT_MECHANISM'. The actual error is given below:
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on 
project hadoop-common: Compilation failure: Compilation failure:
[ERROR] 
\git\hadoop\hadoop-common-project\hadoop-common\src\main\java\org\apache\hadoop\security\HadoopKerberosName.java:[83,78]
 error: cannot find symbol
[ERROR] symbol:   variable DEFAULT_MECHANISM
[ERROR] location: class HadoopKerberosName

> Distcp to set S3 Storage Class
> --
>
> Key: HADOOP-16259
> URL: https://issues.apache.org/jira/browse/HADOOP-16259
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: hadoop-aws, tools/distcp
>Affects Versions: 2.8.4
>Reporter: Prakash Gopalsamy
>Priority: Minor
> Attachments: ENHANCE_HADOOP_DISTCP_FOR_CUSTOM_S3_STORAGE_CLASS.docx
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> The Hadoop distcp implementation doesn’t have properties to override the 
> storage class while transferring data to Amazon S3 storage, and it doesn’t 
> set any storage class itself. Due to this, all objects moved from a cluster 
> to S3 using Hadoop distcp are stored in the default storage class 
> “STANDARD”. Providing a new feature to override the default S3 storage 
> class through configuration properties would make it possible to upload 
> objects in other storage classes. I have come up with a design to implement 
> this feature in a design document and uploaded it to the JIRA. Kindly 
> review and let me know your suggestions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16222) Fix new deprecations after guava 27.0 update in trunk

2019-04-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825348#comment-16825348
 ] 

Hudson commented on HADOOP-16222:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16462 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16462/])
HADOOP-16222. Fix new deprecations after guava 27.0 update in trunk. 
(mackrorysd: rev a703dae25e3c75a4e6086efd4b620ef956e6fe54)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/ServiceTestUtils.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestSecurityUtil.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestZKUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdminMiniCluster.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ZKUtil.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/fpga/TestFpgaResourceHandler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/TestAMRestart.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/SemaphoredDelegatingExecutor.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestArrayWritable.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ArrayWritable.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestTableMapping.java


> Fix new deprecations after guava 27.0 update in trunk
> -
>
> Key: HADOOP-16222
> URL: https://issues.apache.org/jira/browse/HADOOP-16222
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-16222.001.patch, HADOOP-16222.002.patch, 
> HADOOP-16222.003.patch, HADOOP-16222.004.patch
>
>
> *Note*: this can be done after the guava update.
> There are a bunch of new deprecations after the guava update. We need to fix 
> those, because they will be removed after the next guava version (after 27).
> I split this off from HADOOP-16210 into a separate jira because the jenkins 
> pre-commit test job (yetus) times out after 5 hours when the two are run 
> together. 
> {noformat}
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/SemaphoredDelegatingExecutor.java:[110,20]
>  [deprecation] immediateFailedCheckedFuture(X) in Futures has been 
> deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ZKUtil.java:[175,16]
>  [deprecation] toString(File,Charset) in Files has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestTableMapping.java:[44,9]
>  [deprecation] write(CharSequence,File,Charset) in Files has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestTableMapping.java:[67,9]
>  [deprecation] write(CharSequence,File,Charset) in Files has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestTableMapping.java:[131,9]
>  [deprecation] write(CharSequence,File,Charset) in Files has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestTableMapping.java:[150,9]
>  [deprecation] write(CharSequence,File,Charset) in Files has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestTableMapping.java:[169,9]
>  [deprecation] write(CharSequence,File,Charset) in Files has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestZKUtil.java:[134,9]
>  [deprecation] write(CharSequence,File,Charset) in Files has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestSecurityUtil.java:[437,9]
>  [deprecation] write(CharSequence,File,Charset) in Files has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdminMiniCluster.java:[211,26]
>  [deprecation] toString(File,Charset) in Files has been deprecated
> 

[GitHub] [hadoop] DadanielZ opened a new pull request #768: HADOOP-16269. ABFS: add listFileStatus with StartFrom.

2019-04-24 Thread GitBox
DadanielZ opened a new pull request #768: HADOOP-16269. ABFS: add 
listFileStatus with StartFrom.
URL: https://github.com/apache/hadoop/pull/768
 
 
   - Add support to list entries in a path starting from a given entry name, 
in lexical order (the Azure Storage Service returns entries in lexical order)
   - This support is added to AzureBlobFileSystemStore and won't be exposed at 
the FS-level API.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #766: YARN-9509: Added a configuration for admins to be able to capped per-container cpu usage based on a multiplier

2019-04-24 Thread GitBox
hadoop-yetus commented on issue #766: YARN-9509: Added a configuration for 
admins to be able to capped per-container cpu usage based on a multiplier
URL: https://github.com/apache/hadoop/pull/766#issuecomment-486329548
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 182 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1114 | trunk passed |
   | +1 | compile | 518 | trunk passed |
   | +1 | checkstyle | 68 | trunk passed |
   | +1 | mvnsite | 87 | trunk passed |
   | +1 | shadedclient | 785 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 146 | trunk passed |
   | +1 | javadoc | 58 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for patch |
   | +1 | mvninstall | 66 | the patch passed |
   | +1 | compile | 494 | the patch passed |
   | +1 | javac | 494 | the patch passed |
   | -0 | checkstyle | 65 | hadoop-yarn-project/hadoop-yarn: The patch 
generated 6 new + 220 unchanged - 0 fixed = 226 total (was 220) |
   | +1 | mvnsite | 81 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 716 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 173 | the patch passed |
   | +1 | javadoc | 71 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 49 | hadoop-yarn-api in the patch failed. |
   | +1 | unit | 1270 | hadoop-yarn-server-nodemanager in the patch passed. |
   | +1 | asflicense | 38 | The patch does not generate ASF License warnings. |
   | | | 6062 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | TEST-TestYarnConfigurationFields |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-766/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/766 |
   | JIRA Issue | YARN-9509 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux cf20ec8f1d32 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e1c5ddf |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-766/1/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-766/1/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-766/1/testReport/ |
   | Max. process+thread count | 447 (vs. ulimit of 5500) |
   | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: hadoop-yarn-project/hadoop-yarn |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-766/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16222) Fix new deprecations after guava 27.0 update in trunk

2019-04-24 Thread Sean Mackrory (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-16222:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Fix new deprecations after guava 27.0 update in trunk
> -
>
> Key: HADOOP-16222
> URL: https://issues.apache.org/jira/browse/HADOOP-16222
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-16222.001.patch, HADOOP-16222.002.patch, 
> HADOOP-16222.003.patch, HADOOP-16222.004.patch
>
>
> *Note*: this can be done after the guava update.
> There are a bunch of new deprecations after the guava update. We need to fix 
> those, because they will be removed after the next guava version (after 27).
> I split this off from HADOOP-16210 into a separate jira because the jenkins 
> pre-commit test job (yetus) times out after 5 hours when the two are run 
> together. 
> {noformat}
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/SemaphoredDelegatingExecutor.java:[110,20]
>  [deprecation] immediateFailedCheckedFuture(X) in Futures has been 
> deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ZKUtil.java:[175,16]
>  [deprecation] toString(File,Charset) in Files has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestTableMapping.java:[44,9]
>  [deprecation] write(CharSequence,File,Charset) in Files has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestTableMapping.java:[67,9]
>  [deprecation] write(CharSequence,File,Charset) in Files has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestTableMapping.java:[131,9]
>  [deprecation] write(CharSequence,File,Charset) in Files has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestTableMapping.java:[150,9]
>  [deprecation] write(CharSequence,File,Charset) in Files has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestTableMapping.java:[169,9]
>  [deprecation] write(CharSequence,File,Charset) in Files has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestZKUtil.java:[134,9]
>  [deprecation] write(CharSequence,File,Charset) in Files has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestSecurityUtil.java:[437,9]
>  [deprecation] write(CharSequence,File,Charset) in Files has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdminMiniCluster.java:[211,26]
>  [deprecation] toString(File,Charset) in Files has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdminMiniCluster.java:[219,36]
>  [deprecation] toString(File,Charset) in Files has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/fpga/TestFpgaResourceHandler.java:[130,9]
>  [deprecation] append(CharSequence,File,Charset) in Files has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/fpga/TestFpgaResourceHandler.java:[352,9]
>  [deprecation] append(CharSequence,File,Charset) in Files has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/TestAMRestart.java:[1161,18]
>  [deprecation] propagate(Throwable) in Throwables has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/ServiceTestUtils.java:[413,18]
>  [deprecation] propagate(Throwable) in Throwables has been deprecated
> {noformat}
> Maybe fix these module by module instead of in a single patch?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[jira] [Commented] (HADOOP-16222) Fix new deprecations after guava 27.0 update in trunk

2019-04-24 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825327#comment-16825327
 ] 

Sean Mackrory commented on HADOOP-16222:


+1. Note that the patch no longer applies cleanly - YARN-9495 already committed 
the findbugs-exclude.xml changes, so I'm only committing the rest of the patch.

> Fix new deprecations after guava 27.0 update in trunk
> -
>
> Key: HADOOP-16222
> URL: https://issues.apache.org/jira/browse/HADOOP-16222
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-16222.001.patch, HADOOP-16222.002.patch, 
> HADOOP-16222.003.patch, HADOOP-16222.004.patch
>
>
> *Note*: this can be done after the guava update.
> There are a bunch of new deprecations after the guava update. We need to fix 
> those, because these will be removed after the next guava version (after 27).
> I created a separate jira for this from HADOOP-16210 because jenkins 
> pre-commit test job (yetus) will time-out after 5 hours after running this 
> together. 
> {noformat}
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/SemaphoredDelegatingExecutor.java:[110,20]
>  [deprecation] immediateFailedCheckedFuture(X) in Futures has been 
> deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ZKUtil.java:[175,16]
>  [deprecation] toString(File,Charset) in Files has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestTableMapping.java:[44,9]
>  [deprecation] write(CharSequence,File,Charset) in Files has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestTableMapping.java:[67,9]
>  [deprecation] write(CharSequence,File,Charset) in Files has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestTableMapping.java:[131,9]
>  [deprecation] write(CharSequence,File,Charset) in Files has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestTableMapping.java:[150,9]
>  [deprecation] write(CharSequence,File,Charset) in Files has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestTableMapping.java:[169,9]
>  [deprecation] write(CharSequence,File,Charset) in Files has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestZKUtil.java:[134,9]
>  [deprecation] write(CharSequence,File,Charset) in Files has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestSecurityUtil.java:[437,9]
>  [deprecation] write(CharSequence,File,Charset) in Files has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdminMiniCluster.java:[211,26]
>  [deprecation] toString(File,Charset) in Files has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdminMiniCluster.java:[219,36]
>  [deprecation] toString(File,Charset) in Files has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/fpga/TestFpgaResourceHandler.java:[130,9]
>  [deprecation] append(CharSequence,File,Charset) in Files has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/fpga/TestFpgaResourceHandler.java:[352,9]
>  [deprecation] append(CharSequence,File,Charset) in Files has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/TestAMRestart.java:[1161,18]
>  [deprecation] propagate(Throwable) in Throwables has been deprecated
> [WARNING] 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/ServiceTestUtils.java:[413,18]
>  [deprecation] propagate(Throwable) in Throwables has been deprecated
> {noformat}
> Maybe fix these module by module instead of in a single patch?
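
For reference, the usual guava 27 replacements for the deprecated calls listed 
above look like this (a hedged sketch - the committed patch may have chosen 
differently):

{code}
import static java.nio.charset.StandardCharsets.UTF_8;

import java.io.File;
import java.io.IOException;

import com.google.common.base.Throwables;
import com.google.common.io.FileWriteMode;
import com.google.common.io.Files;
import com.google.common.util.concurrent.Futures;

public class GuavaReplacements {
  static void examples(File file, CharSequence contents, Throwable t)
      throws IOException {
    // Files.toString(file, UTF_8) becomes:
    String text = Files.asCharSource(file, UTF_8).read();

    // Files.write(contents, file, UTF_8) becomes:
    Files.asCharSink(file, UTF_8).write(contents);

    // Files.append(contents, file, UTF_8) becomes:
    Files.asCharSink(file, UTF_8, FileWriteMode.APPEND).write(contents);

    // Futures.immediateFailedCheckedFuture(t) becomes:
    Futures.immediateFailedFuture(t);

    // Throwables.propagate(t) becomes:
    Throwables.throwIfUnchecked(t);
    throw new RuntimeException(t);
  }
}
{code}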



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

[jira] [Commented] (HADOOP-16235) ABFS VersionedFileStatus to declare that it isEncrypted()

2019-04-24 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825318#comment-16825318
 ] 

Da Zhou commented on HADOOP-16235:
--

Thanks, LGTM.

> ABFS VersionedFileStatus to declare that it isEncrypted()
> -
>
> Key: HADOOP-16235
> URL: https://issues.apache.org/jira/browse/HADOOP-16235
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HADOOP-16235.001.patch
>
>
> Files in ABFS are always encrypted; have VersionedFileStatus.isEncrypted() 
> declare this, presumably just by changing the flag passed to the superclass's 
> constructor
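
A hedged sketch of what the change might look like (assumes the extended 
FileStatus constructor with the hasAcl/isEncrypted/isErasureCoded flags; the 
field values are illustrative, not the real ABFS code):

{code}
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

class VersionedFileStatus extends FileStatus {
  VersionedFileStatus(String owner, String group, FsPermission permission,
      long length, boolean isdir, long blocksize, long modificationTime,
      Path path) {
    super(length, isdir, 1 /* replication */, blocksize, modificationTime,
        0 /* access time */, permission, owner, group, null /* symlink */,
        path,
        false /* hasAcl */,
        true  /* isEncrypted: ABFS data is always encrypted at rest */,
        false /* isErasureCoded */);
  }
}
{code}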



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on issue #762: HDDS-1455. Inconsistent naming convention with Ozone Kerberos configu…

2019-04-24 Thread GitBox
xiaoyuyao commented on issue #762: HDDS-1455. Inconsistent naming convention 
with Ozone Kerberos configu…
URL: https://github.com/apache/hadoop/pull/762#issuecomment-486298721
 
 
   /retest


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16252) Use configurable dynamo table name prefix in S3Guard tests

2019-04-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825281#comment-16825281
 ] 

Hudson commented on HADOOP-16252:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16461 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16461/])
HADOOP-16252. Add prefix to dynamo tables in tests. (stevel: rev 
e1c5ddf2aa854951142e234462978245cdb99e1d)
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/AbstractS3ATestBase.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestConstants.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestS3GuardConcurrentOps.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/AbstractS3GuardToolTestBase.java
* (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStore.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestUtils.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestS3GuardToolDynamoDB.java


> Use configurable dynamo table name prefix in S3Guard tests
> --
>
> Key: HADOOP-16252
> URL: https://issues.apache.org/jira/browse/HADOOP-16252
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Ben Roling
>Assignee: Ben Roling
>Priority: Major
> Fix For: 3.3.0
>
>
> Table names are hardcoded into tests for S3Guard with DynamoDB.  This makes 
> it awkward to set up a least-privilege type AWS IAM user or role that can 
> successfully execute the full test suite.  You either have to know all the 
> specific hardcoded table names and give the user Dynamo read/write access to 
> those by name or just give blanket read/write access to all Dynamo tables in 
> the account.
> I propose the tests use a configuration property to specify a prefix for the 
> table names used.  Then the full test suite can be run by a user that is 
> given read/write access to all tables with names starting with the configured 
> prefix.
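
A minimal sketch of the idea (the property name is an assumption for 
illustration, not necessarily what the patch introduces):

{code}
import org.apache.hadoop.conf.Configuration;

public final class TestTableNaming {
  // Assumed property name - illustrative only.
  static final String TEST_TABLE_PREFIX_KEY =
      "fs.s3a.s3guard.test.dynamo.table.prefix";

  static String testTableName(Configuration conf, String suffix) {
    // With prefix "s3guard.test." an IAM policy can grant dynamo access
    // on arn:aws:dynamodb:*:*:table/s3guard.test.* only.
    return conf.getTrimmed(TEST_TABLE_PREFIX_KEY, "s3guard.test.") + suffix;
  }

  private TestTableNaming() {
  }
}
{code}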



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13386) Upgrade Avro to 1.8.x

2019-04-24 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825266#comment-16825266
 ] 

Arpit Agarwal commented on HADOOP-13386:


[~Jantner] I've added you as a contributor and assigned this to you. Thanks for 
creating the pull request.

Could you please summarize the testing you've done to validate this change?

Also Steve had a question about transitive dependencies.

> Upgrade Avro to 1.8.x
> -
>
> Key: HADOOP-13386
> URL: https://issues.apache.org/jira/browse/HADOOP-13386
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Ben McCann
>Assignee: Kalman
>Priority: Major
>
> Avro 1.8.x makes generated classes serializable, which makes them much easier 
> to use with Spark. It would be great to upgrade Avro to 1.8.x



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] elek opened a new pull request #767: HDDS-1462. Fix content and format of Ozone documentation

2019-04-24 Thread GitBox
elek opened a new pull request #767: HDDS-1462. Fix content and format of Ozone 
documentation
URL: https://github.com/apache/hadoop/pull/767
 
 
   During the review of HDDS-1457 I realized that the current documentation 
contains a lot of outdated information regarding docker usage, build 
commands and s3 usage.
   
   The security information is also rendered incorrectly.
   
   The png files for the prometheus page are missing (they were included in 
the HDDS-846 patch but are missing from the commit).
   
   See: https://issues.apache.org/jira/browse/HDDS-1462


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-13386) Upgrade Avro to 1.8.x

2019-04-24 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reassigned HADOOP-13386:
--

Assignee: Kalman

> Upgrade Avro to 1.8.x
> -
>
> Key: HADOOP-13386
> URL: https://issues.apache.org/jira/browse/HADOOP-13386
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Ben McCann
>Assignee: Kalman
>Priority: Major
>
> Avro 1.8.x makes generated classes serializable, which makes them much easier 
> to use with Spark. It would be great to upgrade Avro to 1.8.x



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #749: HDDS-1395. Key write fails with BlockOutputStream has been closed exception

2019-04-24 Thread GitBox
hadoop-yetus commented on issue #749: HDDS-1395. Key write fails with 
BlockOutputStream has been closed exception
URL: https://github.com/apache/hadoop/pull/749#issuecomment-486246900
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 32 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 68 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1069 | trunk passed |
   | +1 | compile | 1048 | trunk passed |
   | +1 | checkstyle | 128 | trunk passed |
   | +1 | mvnsite | 170 | trunk passed |
   | +1 | shadedclient | 963 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 147 | trunk passed |
   | +1 | javadoc | 117 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for patch |
   | +1 | mvninstall | 125 | the patch passed |
   | +1 | compile | 1003 | the patch passed |
   | +1 | javac | 1003 | the patch passed |
   | +1 | checkstyle | 129 | the patch passed |
   | +1 | mvnsite | 154 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 621 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 180 | the patch passed |
   | +1 | javadoc | 122 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 32 | client in the patch passed. |
   | +1 | unit | 91 | common in the patch passed. |
   | +1 | unit | 36 | client in the patch passed. |
   | -1 | unit | 2112 | integration-test in the patch failed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 8388 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-749/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/749 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
   | uname | Linux bcfea1a63aab 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 64f30da |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-749/3/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-749/3/testReport/ |
   | Max. process+thread count | 3955 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/client hadoop-hdds/common hadoop-ozone/client 
hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-749/3/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16252) Use configurable dynamo table name prefix in S3Guard tests

2019-04-24 Thread Ben Roling (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825178#comment-16825178
 ] 

Ben Roling commented on HADOOP-16252:
-

Awesome!

> Use configurable dynamo table name prefix in S3Guard tests
> --
>
> Key: HADOOP-16252
> URL: https://issues.apache.org/jira/browse/HADOOP-16252
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Ben Roling
>Assignee: Ben Roling
>Priority: Major
> Fix For: 3.3.0
>
>
> Table names are hardcoded into tests for S3Guard with DynamoDB.  This makes 
> it awkward to set up a least-privilege type AWS IAM user or role that can 
> successfully execute the full test suite.  You either have to know all the 
> specific hardcoded table names and give the user Dynamo read/write access to 
> those by name or just give blanket read/write access to all Dynamo tables in 
> the account.
> I propose the tests use a configuration property to specify a prefix for the 
> table names used.  Then the full test suite can be run by a user that is 
> given read/write access to all tables with names starting with the configured 
> prefix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16252) Use configurable dynamo table name prefix in S3Guard tests

2019-04-24 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16252.
-
   Resolution: Fixed
 Assignee: Ben Roling
Fix Version/s: 3.3.0

+1, committed to trunk. Thanks!

> Use configurable dynamo table name prefix in S3Guard tests
> --
>
> Key: HADOOP-16252
> URL: https://issues.apache.org/jira/browse/HADOOP-16252
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Ben Roling
>Assignee: Ben Roling
>Priority: Major
> Fix For: 3.3.0
>
>
> Table names are hardcoded into tests for S3Guard with DynamoDB.  This makes 
> it awkward to set up a least-privilege type AWS IAM user or role that can 
> successfully execute the full test suite.  You either have to know all the 
> specific hardcoded table names and give the user Dynamo read/write access to 
> those by name or just give blanket read/write access to all Dynamo tables in 
> the account.
> I propose the tests use a configuration property to specify a prefix for the 
> table names used.  Then the full test suite can be run by a user that is 
> given read/write access to all tables with names starting with the configured 
> prefix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran closed pull request #742: HADOOP-16252 add prefix to dynamo tables in tests

2019-04-24 Thread GitBox
steveloughran closed pull request #742: HADOOP-16252 add prefix to dynamo 
tables in tests
URL: https://github.com/apache/hadoop/pull/742
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #742: HADOOP-16252 add prefix to dynamo tables in tests

2019-04-24 Thread GitBox
steveloughran commented on issue #742: HADOOP-16252 add prefix to dynamo tables 
in tests
URL: https://github.com/apache/hadoop/pull/742#issuecomment-486244112
 
 
   merged via a pull-patch-commit operation


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #749: HDDS-1395. Key write fails with BlockOutputStream has been closed exception

2019-04-24 Thread GitBox
hadoop-yetus commented on issue #749: HDDS-1395. Key write fails with 
BlockOutputStream has been closed exception
URL: https://github.com/apache/hadoop/pull/749#issuecomment-486242755
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 29 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 20 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1031 | trunk passed |
   | +1 | compile | 1044 | trunk passed |
   | +1 | checkstyle | 136 | trunk passed |
   | +1 | mvnsite | 177 | trunk passed |
   | +1 | shadedclient | 989 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 147 | trunk passed |
   | +1 | javadoc | 113 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for patch |
   | +1 | mvninstall | 120 | the patch passed |
   | +1 | compile | 994 | the patch passed |
   | +1 | javac | 994 | the patch passed |
   | +1 | checkstyle | 129 | the patch passed |
   | +1 | mvnsite | 159 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 633 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 177 | the patch passed |
   | +1 | javadoc | 120 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 29 | client in the patch passed. |
   | +1 | unit | 90 | common in the patch passed. |
   | +1 | unit | 35 | client in the patch passed. |
   | -1 | unit | 1483 | integration-test in the patch failed. |
   | +1 | asflicense | 59 | The patch does not generate ASF License warnings. |
   | | | 7685 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachine |
   |   | hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-749/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/749 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
   | uname | Linux d6ea000991a5 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 64f30da |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-749/4/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-749/4/testReport/ |
   | Max. process+thread count | 2802 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/client hadoop-hdds/common hadoop-ozone/client 
hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-749/4/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16274) transient failure of ITestS3GuardToolDynamoDB.testDestroyUnknownTable

2019-04-24 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825166#comment-16825166
 ] 

Steve Loughran commented on HADOOP-16274:
-

{code}
[ERROR] Tests run: 16, Failures: 0, Errors: 2, Skipped: 1, Time elapsed: 
533.663 s <<< FAILURE! - in 
org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB
[ERROR] 
testDestroyUnknownTable(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)
  Time elapsed: 143.671 s  <<< ERROR!
java.lang.IllegalArgumentException: Table ireland-team is not deleted.
at 
com.amazonaws.services.dynamodbv2.document.Table.waitForDelete(Table.java:505)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.destroy(DynamoDBMetadataStore.java:1003)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Destroy.run(S3GuardTool.java:651)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:398)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:1628)
at 
org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.run(AbstractS3GuardToolTestBase.java:127)
at 
org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB.testDestroyUnknownTable(ITestS3GuardToolDynamoDB.java:285)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.amazonaws.waiters.WaiterTimedOutException: Reached maximum 
attempts without transitioning to the desired state
at 
com.amazonaws.waiters.WaiterExecution.pollResource(WaiterExecution.java:86)
at com.amazonaws.waiters.WaiterImpl.run(WaiterImpl.java:88)
at 
com.amazonaws.services.dynamodbv2.document.Table.waitForDelete(Table.java:502)
... 22 more
{code}

> transient failure of ITestS3GuardToolDynamoDB.testDestroyUnknownTable
> -
>
> Key: HADOOP-16274
> URL: https://issues.apache.org/jira/browse/HADOOP-16274
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Minor
>
> Experienced a transient failure of a test
> {code}
> [ERROR] 
> testDestroyUnknownTable(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)
>   Time elapsed: 143.671 s  <<< ERROR!
> java.lang.IllegalArgumentException: Table ireland-team is not deleted.
> {code}
> * The test run blocked for a while; I'd assumed network problems, but maybe 
> it was retrying
> * verified on the AWS console that the table was gone
> * Not surfaced on reruns
> I'm assuming this was transient, but anything going near creating tables runs 
> a risk of running up bills. We need to move to on-demand table creation as soon 
> as we upgrade the SDK



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16274) transient failure of ITestS3GuardToolDynamoDB.testDestroyUnknownTable

2019-04-24 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16274:
---

 Summary: transient failure of 
ITestS3GuardToolDynamoDB.testDestroyUnknownTable
 Key: HADOOP-16274
 URL: https://issues.apache.org/jira/browse/HADOOP-16274
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.3.0
Reporter: Steve Loughran


Experienced a transient failure of a test
{code}
[ERROR] 
testDestroyUnknownTable(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)
  Time elapsed: 143.671 s  <<< ERROR!
java.lang.IllegalArgumentException: Table ireland-team is not deleted.
{code}

* The test run blocked for a while; I'd assumed network problems, but maybe it 
was retrying
* verified on the AWS console that the table was gone
* Not surfaced on reruns

I'm assuming this was transient, but anything going near creating tables runs a 
risk of running up bills. We need to move to on-demand table creation as soon as 
we upgrade the SDK
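
A hedged sketch of on-demand table creation (assumes an AWS SDK release that 
ships BillingMode; the parent/child key schema is illustrative):

{code}
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.model.AttributeDefinition;
import com.amazonaws.services.dynamodbv2.model.BillingMode;
import com.amazonaws.services.dynamodbv2.model.CreateTableRequest;
import com.amazonaws.services.dynamodbv2.model.KeySchemaElement;
import com.amazonaws.services.dynamodbv2.model.KeyType;
import com.amazonaws.services.dynamodbv2.model.ScalarAttributeType;

class OnDemandTables {
  static void createOnDemand(AmazonDynamoDB ddb, String tableName) {
    // PAY_PER_REQUEST removes provisioned capacity, so a forgotten test
    // table no longer accrues hourly charges.
    ddb.createTable(new CreateTableRequest()
        .withTableName(tableName)
        .withBillingMode(BillingMode.PAY_PER_REQUEST)
        .withKeySchema(
            new KeySchemaElement("parent", KeyType.HASH),
            new KeySchemaElement("child", KeyType.RANGE))
        .withAttributeDefinitions(
            new AttributeDefinition("parent", ScalarAttributeType.S),
            new AttributeDefinition("child", ScalarAttributeType.S)));
  }
}
{code}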



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14936) S3Guard: remove "experimental" from documentation

2019-04-24 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825161#comment-16825161
 ] 

Steve Loughran commented on HADOOP-14936:
-

HADOOP-16187 should be fixed

> S3Guard: remove "experimental" from documentation
> -
>
> Key: HADOOP-14936
> URL: https://issues.apache.org/jira/browse/HADOOP-14936
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Priority: Major
>
> I think it is time to remove the "experimental feature" designation in the 
> site docs for S3Guard.  Discuss.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16101) Use lighter-weight alternatives to innerGetFileStatus where possible

2019-04-24 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825160#comment-16825160
 ] 

Steve Loughran commented on HADOOP-16101:
-

Update, I think the new openFile() builder should take a withSource(FileStatus) 
param. If you already have the file status: no need to repeat yourself. We can 
do the same for rename. 
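
A hypothetical sketch of the proposed usage - withSource() does not exist 
yet; this only illustrates skipping the second metadata lookup:

{code}
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class OpenFileSketch {
  static void read(FileSystem fs, Path path) throws Exception {
    FileStatus st = fs.getFileStatus(path);    // status already in hand
    try (FSDataInputStream in = fs.openFile(path)
        .withSource(st)                        // proposed param, not real yet
        .build().get()) {
      in.read();                               // no second getFileStatus/HEAD
    }
  }
}
{code}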

> Use lighter-weight alternatives to innerGetFileStatus where possible
> 
>
> Key: HADOOP-16101
> URL: https://issues.apache.org/jira/browse/HADOOP-16101
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sean Mackrory
>Priority: Major
>
> Discussion in HADOOP-15999 highlighted the heaviness of a full 
> innerGetFileStatus call, where many usages of it may only need a lighter-weight 
> fileExists check, etc. Let's investigate usage of innerGetFileStatus and slim 
> it down where possible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16187) ITestS3GuardToolDynamoDB test failures

2019-04-24 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825157#comment-16825157
 ] 

Steve Loughran commented on HADOOP-16187:
-

ITestS3GuardToolDynamoDB.testBucketInfoUnguarded is still failing, raising FNFE
{code}

[ERROR] 
testBucketInfoUnguarded(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)
  Time elapsed: 1.979 s  <<< ERROR!
java.io.FileNotFoundException: DynamoDB table 
'testBucketInfoUnguarded-5d4dcfa5-d996-4ee5-9fec-92d46529e74d' does not exist 
in region eu-west-1; auto-creation is turned off
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initTable(DynamoDBMetadataStore.java:1263)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:374)
at 
org.apache.hadoop.fs.s3a.s3guard.S3Guard.getMetadataStore(S3Guard.java:102)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:398)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3324)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:136)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3373)
at org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:3347)
at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:544)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$BucketInfo.run(S3GuardTool.java:1140)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardToolTestHelper.exec(S3GuardToolTestHelper.java:79)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardToolTestHelper.exec(S3GuardToolTestHelper.java:51)
at 
org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.testBucketInfoUnguarded(AbstractS3GuardToolTestBase.java:341)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.amazonaws.services.dynamodbv2.model.ResourceNotFoundException: 
Requested resource not found: Table: 
testBucketInfoUnguarded-5d4dcfa5-d996-4ee5-9fec-92d46529e74d not found 
(Service: AmazonDynamoDBv2; Status Code: 400; Error Code: 
ResourceNotFoundException; Request ID: 
90H4BFI32UUJV9C5HUR49O5Q4NVV4KQNSO5AEMVJF66Q9ASUAAJG)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1640)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1058)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.doInvoke(AmazonDynamoDBClient.java:3443)
at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:3419)
at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.executeDescribeTable(AmazonDynamoDBClient.java:1660)
at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.describeTable(AmazonDynamoDBClient.java:1635)
at 
com.amazonaws.services.dynamodbv2.document.Table.describe(Table.java:137)
at 

[GitHub] [hadoop] kittinanasi commented on issue #713: HDDS-1192. Support -conf command line argument in GenericCli

2019-04-24 Thread GitBox
kittinanasi commented on issue #713: HDDS-1192. Support -conf command line 
argument in GenericCli
URL: https://github.com/apache/hadoop/pull/713#issuecomment-486235917
 
 
   Thanks @elek for committing and for fixing the remaining checkstyle issue!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] elek closed pull request #713: HDDS-1192. Support -conf command line argument in GenericCli

2019-04-24 Thread GitBox
elek closed pull request #713: HDDS-1192. Support -conf command line argument 
in GenericCli
URL: https://github.com/apache/hadoop/pull/713
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #666: HADOOP-16221 add option to fail operation on metadata write failure

2019-04-24 Thread GitBox
steveloughran commented on a change in pull request #666: HADOOP-16221 add 
option to fail operation on metadata write failure
URL: https://github.com/apache/hadoop/pull/666#discussion_r278114271
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
 ##
 @@ -1522,6 +1522,19 @@
 
 
 
+<property>
+  <name>fs.s3a.metadatastore.fail.on.write.error</name>
 
 Review comment:
   Ben - I think you are right. That "swallow the failures" probably follows on 
from the strategy of handling failures in the delete fake directories phase, 
where we don't want a failure there to escalate. Here we do as it is a sign 
that the store has gone inconsistent.
   
   If we mandate that update operations always fail the job, well, at least you 
get to find out when something has gone wrong. In which case: no need for an 
option, no need for extra complications in testing.
   
   *Is there anyone watching this who disagrees? If so, please justify*
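   
   A hedged sketch of the two strategies under discussion (illustrative, not 
the actual S3AFileSystem code):
   
   ```java
   // Assumes in scope: a MetadataStore `ms`, boolean `failOnWriteError`
   // (from fs.s3a.metadatastore.fail.on.write.error), an SLF4J `LOG`, and
   // an assumed MetadataPersistenceException type.
   try {
     ms.put(pathMetadata);
   } catch (IOException e) {
     if (failOnWriteError) {
       // fail the operation: surface the inconsistency immediately
       throw new MetadataPersistenceException(path.toString(), e);
     }
     // old behaviour: swallow, log and carry on
     LOG.warn("Failed to persist metadata for {}", path, e);
   }
   ```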


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16272) Update HikariCP to 2.5.1

2019-04-24 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16272:
-
Summary: Update HikariCP to 2.5.1  (was: Update HikariCP to 3.3.1)

> Update HikariCP to 2.5.1
> 
>
> Key: HADOOP-16272
> URL: https://issues.apache.org/jira/browse/HADOOP-16272
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Yuming Wang
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ashangit opened a new pull request #766: YARN-9509: Added a configuration for admins to be able to cap per-container cpu usage based on a multiplier

2019-04-24 Thread GitBox
ashangit opened a new pull request #766: YARN-9509: Added a configuration for 
admins to be able to cap per-container cpu usage based on a multiplier
URL: https://github.com/apache/hadoop/pull/766
 
 
   Add a multiplier configuration on strict resource usage to authorize 
containers to use spare cpu up to a limit.
   Currently with strict resource usage you can't get more than what you 
request, which is sometimes not good for jobs that don't have a constant cpu 
usage (for example spark jobs with multiple stages).
   But without strict resource usage we have seen some bad behaviour from 
users who don't tune their requests at all, which leads to some containers 
requesting 2 vcores but constantly using 20.
   The idea here is to still authorize containers to get more cpu than what 
they request if some is free, but also to avoid too big a gap, so the SLA on 
jobs is not breached if the cluster is full (at least the increase in runtime 
is contained). See the sketch below.
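   
   To make the idea concrete, a sketch of how such a multiplier could bound 
the container's cgroup CPU share (property name and wiring are assumptions, 
not the actual patch):
   
   ```java
   // Illustrative only; `conf` is an org.apache.hadoop.conf.Configuration.
   static float cappedCpuShare(Configuration conf,
       int requestedVcores, int nodeVcores) {
     float multiplier = conf.getFloat(
         "yarn.nodemanager.resource.strict-usage.multiplier", // assumed key
         1.0f);                       // 1.0 == today's strict behaviour
     // Strict mode today caps usage at requested/node; with the multiplier
     // a container may use spare cpu up to requested * multiplier vcores.
     return Math.min(requestedVcores * multiplier, nodeVcores) / nodeVcores;
   }
   // The resulting share would then drive cpu.cfs_quota_us for the cgroup.
   ```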


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16273) Update mssql-jdbc to 7.2.2.jre8

2019-04-24 Thread Yuming Wang (JIRA)
Yuming Wang created HADOOP-16273:


 Summary: Update mssql-jdbc to 7.2.2.jre8
 Key: HADOOP-16273
 URL: https://issues.apache.org/jira/browse/HADOOP-16273
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Yuming Wang






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16272) Update HikariCP to 3.3.1

2019-04-24 Thread Yuming Wang (JIRA)
Yuming Wang created HADOOP-16272:


 Summary: Update HikariCP to 3.3.1
 Key: HADOOP-16272
 URL: https://issues.apache.org/jira/browse/HADOOP-16272
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Yuming Wang






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16271) Update okhttp to 3.14.1

2019-04-24 Thread Yuming Wang (JIRA)
Yuming Wang created HADOOP-16271:


 Summary: Update okhttp to 3.14.1
 Key: HADOOP-16271
 URL: https://issues.apache.org/jira/browse/HADOOP-16271
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Yuming Wang






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #568: HADOOP-15691 Add PathCapabilities to FS and FC to complement StreamCapabilities

2019-04-24 Thread GitBox
steveloughran commented on issue #568: HADOOP-15691 Add PathCapabilities to FS 
and FC to complement StreamCapabilities
URL: https://github.com/apache/hadoop/pull/568#issuecomment-486214780
 
 
   checkstyle
   ```
   
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java:30:import
 java.util.Locale;:8: Unused import - java.util.Locale. [UnusedImports]
   
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonPathCapabilities.java:24:public
 final class CommonPathCapabilities {:1: Utility classes should not have a 
public or default constructor. [HideUtilityClassConstructor]
   
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java:50:import
 org.apache.hadoop.fs.impl.PathCapabilitiesSupport;:8: Unused import - 
org.apache.hadoop.fs.impl.PathCapabilitiesSupport. [UnusedImports]
   
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/FsLinkResolution.java:76:
T apply(final AbstractFileSystem fs, final Path path):13: Redundant 'final' 
modifier. [RedundantModifier]
   
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/FsLinkResolution.java:76:
T apply(final AbstractFileSystem fs, final Path path):42: Redundant 'final' 
modifier. [RedundantModifier]
   
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/PathCapabilitiesSupport.java:29:@InterfaceAudience.Private:
 Missing a Javadoc comment. [JavadocType]
   
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/PathCapabilitiesSupport.java:29:@InterfaceAudience.Private:1:
 Utility classes should not have a public or default constructor. 
[HideUtilityClassConstructor]
   
./hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java:51:import
 java.util.Locale;:8: Unused import - java.util.Locale. [UnusedImports]
   
./hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java:80:import
 org.apache.hadoop.fs.PathCapabilities;:8: Unused import - 
org.apache.hadoop.fs.PathCapabilities. [UnusedImports]
   
./hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java:88:import
 java.util.Locale;:8: Unused import - java.util.Locale. [UnusedImports]
   
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/ITestCommitOperations.java:27:import
 org.junit.Assume;:8: Unused import - org.junit.Assume. [UnusedImports]
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16207) Fix ITestDirectoryCommitMRJob.testMRJob

2019-04-24 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825115#comment-16825115
 ] 

Steve Loughran commented on HADOOP-16207:
-

Fix HADOOP-16184 and, provided this is a non-auth test run, this will act as a 
regression test to make sure the fix works in real situations

> Fix ITestDirectoryCommitMRJob.testMRJob
> ---
>
> Key: HADOOP-16207
> URL: https://issues.apache.org/jira/browse/HADOOP-16207
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
>
> Reported failure of {{ITestDirectoryCommitMRJob}} in validation runs of 
> HADOOP-16186; assertIsDirectory with s3guard enabled and a parallel test run: 
> Path "is recorded as deleted by S3Guard"
> {code}
> waitForConsistency();
> assertIsDirectory(outputPath) /* here */
> {code}
> The file is there but there's a tombstone. Possibilities
> * some race condition with another test
> * tombstones aren't timing out
> * committers aren't creating that base dir in a way which cleans up S3Guard's 
> tombstones. 
> Remember: we do have to delete that dest dir before the committer runs unless 
> overwrite==true, so at the start of the run there will be a tombstone. It 
> should be overwritten by a successful write.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #765: HDDS-1460: Add the optmizations of HDDS-1300 to BasicOzoneFileSystem

2019-04-24 Thread GitBox
hadoop-yetus commented on issue #765: HDDS-1460: Add the optmizations of 
HDDS-1300 to BasicOzoneFileSystem
URL: https://github.com/apache/hadoop/pull/765#issuecomment-486210927
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 21 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 999 | trunk passed |
   | +1 | compile | 68 | trunk passed |
   | +1 | checkstyle | 16 | trunk passed |
   | +1 | mvnsite | 28 | trunk passed |
   | +1 | shadedclient | 661 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 39 | trunk passed |
   | +1 | javadoc | 24 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 25 | the patch passed |
   | +1 | compile | 24 | the patch passed |
   | +1 | javac | 24 | the patch passed |
   | +1 | checkstyle | 13 | the patch passed |
   | +1 | mvnsite | 26 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 754 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 38 | the patch passed |
   | +1 | javadoc | 16 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 90 | ozonefs in the patch passed. |
   | +1 | asflicense | 24 | The patch does not generate ASF License warnings. |
   | | | 2964 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-765/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/765 |
   | JIRA Issue | HDDS-1460 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 22e410472dd5 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 64f30da |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-765/1/testReport/ |
   | Max. process+thread count | 3121 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozonefs U: hadoop-ozone/ozonefs |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-765/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16269) ABFS: add listFileStatus with StartFrom

2019-04-24 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825083#comment-16825083
 ] 

Steve Loughran commented on HADOOP-16269:
-

# checkstyle has some issues which need fixing
# Can you submit this as a github PR? We're getting better set up for reviewing 
and merging PRs against trunk there. 

thanks

> ABFS: add listFileStatus with StartFrom
> ---
>
> Key: HADOOP-16269
> URL: https://issues.apache.org/jira/browse/HADOOP-16269
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-16269-001.patch, HADOOP-16269-002.patch, 
> HADOOP-16269-003.patch
>
>
> Adding a listFileStatus that lists entries under a path, starting from a given 
> entry name, in lexical order.
> This is added to AzureBlobFileSystemStore and won't be exposed at the FS-level 
> API.
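
A hedged sketch of the intended call (names and signature are illustrative; 
the actual method lives in AzureBlobFileSystemStore):

{code}
// Sketch only - assumes `store` is the AzureBlobFileSystemStore and that
// the new method takes a path plus the entry name to start from.
List<FileStatus> remainder = store.listFileStatus(
    new Path("/data"), "part-00042" /* startFrom, lexical order */);
{code}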



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #716: HADOOP-16205 Backporting ABFS driver from trunk to branch 2.0

2019-04-24 Thread GitBox
steveloughran commented on issue #716: HADOOP-16205 Backporting ABFS driver 
from trunk to branch 2.0
URL: https://github.com/apache/hadoop/pull/716#issuecomment-486201741
 
 
   checked out and applied to branch-2 (at commit #c9bfeb225b9); rebuilt, 
retested
   
   +1 for the patch, once the conflict with commit #313608e4596 is addressed 
(ie. remove the DTD changes from the checkstyles).
   
   Now, what's the best way to merge this in? We were going to do the full 
sequence as a merge patch, weren't we? That is: 
   
   1. I locally fork branch-2
   1. apply the sequence of commits to it
   1. Merge that fork into branch-2 as a merge commit
   1. retest, if all good: commit to ASF repo?
   
   Given that this PR is ~this, if you can roll back to commit #e8dfdae066e, 
apply c3e474fa890 to that and then push it out again, that should be the entire 
branch history needed to commit


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bshashikant commented on issue #753: HDDS-1403. KeyOutputStream writes fails after max retries while writing to a closed container

2019-04-24 Thread GitBox
bshashikant commented on issue #753: HDDS-1403. KeyOutputStream writes fails 
after max retries while writing to a closed container
URL: https://github.com/apache/hadoop/pull/753#issuecomment-486199790
 
 
   Thanks @arp7 . The retry interval should be lower by default because, other 
than ContainerCloseExceptions, the Ozone client also retries in cases where a 
request times out, or leader election could not complete, etc., and in those 
cases Ratis itself already retries for an interval of around 10 minutes. This 
retryInterval is then added on top of the total time between two successive 
calls to OM on a failure. This is in the actual write path and will affect 
write throughput considerably. A rough sketch of the arithmetic is below.
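   
   To make that concrete, a rough sketch using Hadoop's stock retry utilities 
(the numbers and the choice of a fixed-sleep policy here are illustrative 
assumptions, not the Ozone client's actual defaults):
   
   ```java
   import java.util.concurrent.TimeUnit;
   
   import org.apache.hadoop.io.retry.RetryPolicies;
   import org.apache.hadoop.io.retry.RetryPolicy;
   
   public class RetryIntervalSketch {
     public static void main(String[] args) {
       int maxRetries = 5;            // hypothetical value
       long retryIntervalMs = 1000;   // kept low, per the discussion above
   
       // Bounded fixed-sleep policy: each failed OM call sleeps
       // retryIntervalMs before the next attempt, on top of whatever time
       // Ratis already spent retrying internally (~10 minutes worst case).
       RetryPolicy policy = RetryPolicies.retryUpToMaximumCountWithFixedSleep(
           maxRetries, retryIntervalMs, TimeUnit.MILLISECONDS);
       System.out.println("Policy: " + policy);
   
       // Worst-case client-side delay added on top of Ratis' own retries.
       System.out.println("Extra delay: "
           + maxRetries * retryIntervalMs + " ms");
     }
   }
   ```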


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14936) S3Guard: remove "experimental" from documentation

2019-04-24 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota reassigned HADOOP-14936:
---

Assignee: (was: Gabor Bota)

> S3Guard: remove "experimental" from documentation
> -
>
> Key: HADOOP-14936
> URL: https://issues.apache.org/jira/browse/HADOOP-14936
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Priority: Major
>
> I think it is time to remove the "experimental feature" designation in the 
> site docs for S3Guard.  Discuss.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bshashikant commented on issue #749: HDDS-1395. Key write fails with BlockOutputStream has been closed exception

2019-04-24 Thread GitBox
bshashikant commented on issue #749: HDDS-1395. Key write fails with 
BlockOutputStream has been closed exception
URL: https://github.com/apache/hadoop/pull/749#issuecomment-486190628
 
 
   There is a related failure in a test added with this change; fixing it 
requires RATIS-532.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #764: HDFS-14455:Fix typo in HAState.java

2019-04-24 Thread GitBox
hadoop-yetus commented on issue #764: HDFS-14455:Fix typo in HAState.java
URL: https://github.com/apache/hadoop/pull/764#issuecomment-486190130
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 30 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1079 | trunk passed |
   | +1 | compile | 57 | trunk passed |
   | +1 | checkstyle | 43 | trunk passed |
   | +1 | mvnsite | 62 | trunk passed |
   | +1 | shadedclient | 718 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 119 | trunk passed |
   | +1 | javadoc | 50 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 59 | the patch passed |
   | +1 | compile | 54 | the patch passed |
   | +1 | javac | 54 | the patch passed |
   | +1 | checkstyle | 37 | the patch passed |
   | +1 | mvnsite | 57 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 686 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 127 | the patch passed |
   | +1 | javadoc | 49 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 4923 | hadoop-hdfs in the patch failed. |
   | +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
   | | | 8242 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestErasureCodingExerciseAPIs |
   |   | hadoop.hdfs.TestUnsetAndChangeDirectoryEcPolicy |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.server.namenode.TestAuditLogs |
   |   | hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy |
   |   | hadoop.hdfs.server.namenode.TestFSDirectory |
   |   | hadoop.hdfs.server.namenode.TestFSEditLogLoader |
   |   | hadoop.hdfs.TestErasureCodingPolicyWithSnapshotWithRandomECPolicy |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.server.namenode.TestCheckpoint |
   |   | hadoop.hdfs.server.namenode.TestAuditLoggerWithCommands |
   |   | hadoop.hdfs.TestEncryptedTransfer |
   |   | hadoop.hdfs.server.namenode.TestFSNamesystemMBean |
   |   | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
   |   | 
hadoop.hdfs.server.namenode.sps.TestStoragePolicySatisfierWithStripedFile |
   |   | hadoop.hdfs.server.namenode.TestListCorruptFileBlocks |
   |   | hadoop.hdfs.TestErasureCodingMultipleRacks |
   |   | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.hdfs.web.TestWebHdfsTimeouts |
   |   | hadoop.hdfs.tools.TestDFSAdmin |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-764/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/764 |
   | JIRA Issue | HDFS-14455 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 66072746aba7 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 64f30da |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-764/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-764/1/testReport/ |
   | Max. process+thread count | 3405 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-764/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[GitHub] [hadoop] lokeshj1703 opened a new pull request #765: HDDS-1460: Add the optimizations of HDDS-1300 to BasicOzoneFileSystem

2019-04-24 Thread GitBox
lokeshj1703 opened a new pull request #765: HDDS-1460: Add the optimizations of 
HDDS-1300 to BasicOzoneFileSystem
URL: https://github.com/apache/hadoop/pull/765
 
 
   Some of the optimizations made in HDDS-1300 were reverted in HDDS-1333. This 
Jira aims to bring back those optimizations.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bshashikant opened a new pull request #749: HDDS-1395. Key write fails with BlockOutputStream has been closed exception

2019-04-24 Thread GitBox
bshashikant opened a new pull request #749: HDDS-1395. Key write fails with 
BlockOutputStream has been closed exception
URL: https://github.com/apache/hadoop/pull/749
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bshashikant closed pull request #749: HDDS-1395. Key write fails with BlockOutputStream has been closed exception

2019-04-24 Thread GitBox
bshashikant closed pull request #749: HDDS-1395. Key write fails with 
BlockOutputStream has been closed exception
URL: https://github.com/apache/hadoop/pull/749
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #749: HDDS-1395. Key write fails with BlockOutputStream has been closed exception

2019-04-24 Thread GitBox
hadoop-yetus commented on issue #749: HDDS-1395. Key write fails with 
BlockOutputStream has been closed exception
URL: https://github.com/apache/hadoop/pull/749#issuecomment-486183782
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 8 | https://github.com/apache/hadoop/pull/749 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/749 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-749/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ehiggs commented on a change in pull request #609: HADOOP-16193. add extra S3A MPU test to see what happens if a file is created during the MPU

2019-04-24 Thread GitBox
ehiggs commented on a change in pull request #609: HADOOP-16193. add extra S3A 
MPU test to see what happens if a file is created during the MPU
URL: https://github.com/apache/hadoop/pull/609#discussion_r278064407
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractMultipartUploader.java
 ##
 @@ -159,4 +170,47 @@ public void testDirectoryInTheWay() throws Exception {
   public void testMultipartUploadReverseOrder() throws Exception {
 ContractTestUtils.skip("skipped for speed");
   }
+
+  /**
+   * This creates and then deletes a zero-byte file while an upload
+   * is in progress, and verifies that the uploaded file is ultimately
+   * visible.
+   */
+  @Test
+  public void testMultipartOverlapWithTransientFile() throws Throwable {
+// until there's a way to explicitly ask for a multipart uploader from a
+// specific FS, explicitly create one bonded to the raw FS.
+describe("testMultipartOverlapWithTransientFile");
+S3AFileSystem fs = getFileSystem();
+Path path = path("testMultipartOverlapWithTransientFile");
+fs.delete(path, true);
+MultipartUploader mpu = mpu(1);
+UploadHandle upload1 = mpu.initialize(path);
+byte[] dataset = dataset(1024, '0', 10);
+final Map<Integer, PartHandle> handles = new HashMap<>();
+LOG.info("Uploading multipart entry");
+PartHandle value = mpu.putPart(path, new ByteArrayInputStream(dataset), 1,
+upload1,
+dataset.length);
+// upload 1K
+handles.put(1, value);
+// confirm the path is absent
+ContractTestUtils.assertPathDoesNotExist(fs,
+"path being uploaded", path);
+// now create an empty file
+ContractTestUtils.touch(fs, path);
+final FileStatus touchStatus = fs.getFileStatus(path);
+LOG.info("0-byte file has been created: {}", touchStatus);
+fs.delete(path, false);
+// now complete the upload
+mpu.complete(path, handles, upload1);
 
 Review comment:
   No, I mean mpu.complete returns a **Path**Handle. The part handles are 
**Part**Handles. Indeed PartHandles aren't useful outside the context of putPart 
(intentionally). But PathHandle is useful, and can be used to open the 
file to make sure it's the same file (see the sketch below).
   
   We use the path later to get the mpuStatus, so maybe this is just moot.
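   
   A minimal sketch of that check, assuming the store supports 
`FileSystem.open(PathHandle, int)` (filesystems without path-handle support 
throw `UnsupportedOperationException`, so this is strictly optional 
verification):
   
   ```java
   import java.io.IOException;
   
   import org.apache.hadoop.fs.FSDataInputStream;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.PathHandle;
   
   /** Re-open a completed MPU through the PathHandle that complete() returned. */
   static void verifyCompletedUpload(FileSystem fs, PathHandle fd)
       throws IOException {
     // For stores that bind handles to file identity, this open() fails if
     // the path now points at a different file, so a successful read here
     // confirms we really see the completed upload.
     try (FSDataInputStream in = fs.open(fd, 4096)) {
       byte[] firstByte = new byte[1];
       in.readFully(0, firstByte);
     }
   }
   ```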


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ehiggs commented on a change in pull request #609: HADOOP-16193. add extra S3A MPU test to see what happens if a file is created during the MPU

2019-04-24 Thread GitBox
ehiggs commented on a change in pull request #609: HADOOP-16193. add extra S3A 
MPU test to see what happens if a file is created during the MPU
URL: https://github.com/apache/hadoop/pull/609#discussion_r278064407
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractMultipartUploader.java
 ##
 @@ -159,4 +170,47 @@ public void testDirectoryInTheWay() throws Exception {
   public void testMultipartUploadReverseOrder() throws Exception {
 ContractTestUtils.skip("skipped for speed");
   }
+
+  /**
+   * This creates and then deletes a zero-byte file while an upload
+   * is in progress, and verifies that the uploaded file is ultimately
+   * visible.
+   */
+  @Test
+  public void testMultipartOverlapWithTransientFile() throws Throwable {
+// until there's a way to explicitly ask for a multipart uploader from a
+// specific FS, explicitly create one bonded to the raw FS.
+describe("testMultipartOverlapWithTransientFile");
+S3AFileSystem fs = getFileSystem();
+Path path = path("testMultipartOverlapWithTransientFile");
+fs.delete(path, true);
+MultipartUploader mpu = mpu(1);
+UploadHandle upload1 = mpu.initialize(path);
+byte[] dataset = dataset(1024, '0', 10);
+final Map<Integer, PartHandle> handles = new HashMap<>();
+LOG.info("Uploading multipart entry");
+PartHandle value = mpu.putPart(path, new ByteArrayInputStream(dataset), 1,
+upload1,
+dataset.length);
+// upload 1K
+handles.put(1, value);
+// confirm the path is absent
+ContractTestUtils.assertPathDoesNotExist(fs,
+"path being uploaded", path);
+// now create an empty file
+ContractTestUtils.touch(fs, path);
+final FileStatus touchStatus = fs.getFileStatus(path);
+LOG.info("0-byte file has been created: {}", touchStatus);
+fs.delete(path, false);
+// now complete the upload
+mpu.complete(path, handles, upload1);
 
 Review comment:
   No, I mean mpu.complete returns a **Path**Handle. The part handles are 
**Part**Handles. Indeed PartHandles aren't useful outside the context of putPart 
(intentionally). But PathHandle is indeed useful. :)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #716: HADOOP-16205 Backporting ABFS driver from trunk to branch 2.0

2019-04-24 Thread GitBox
steveloughran commented on a change in pull request #716: HADOOP-16205 
Backporting ABFS driver from trunk to branch 2.0
URL: https://github.com/apache/hadoop/pull/716#discussion_r278053752
 
 

 ##
 File path: hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml
 ##
 @@ -1,7 +1,7 @@
 
 <?xml version="1.0"?>
 <!DOCTYPE module PUBLIC
-    "-//Puppy Crawl//DTD Check Configuration 1.2//EN"
-    "http://www.puppycrawl.com/dtds/configuration_1_2.dtd">
+    "-//Checkstyle//DTD Checkstyle Configuration 1.2//EN"
+    "https://checkstyle.org/dtds/configuration_1_2.dtd">
 
 Review comment:
   This is from HADOOP-16232. I've cherry picked that patch on its own, so the 
changes to the checkstyle DTDs can all be left out of this PR.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16232) Fix errors in the checkstyle configration xmls

2019-04-24 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824980#comment-16824980
 ] 

Steve Loughran commented on HADOOP-16232:
-

just backported this to branch-2 too, for consistency. thanks.

> Fix errors in the checkstyle configration xmls
> --
>
> Key: HADOOP-16232
> URL: https://issues.apache.org/jira/browse/HADOOP-16232
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Wanqiang Ji
>Priority: Major
>  Labels: newbie
> Fix For: 3.0.4, 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HADOOP-16232.001.patch, HADOOP-16232.002.patch, 
> HADOOP-16232.003.patch
>
>
> http://www.puppycrawl.com/dtds/configuration_1_2.dtd is not found and 
> https://checkstyle.org/dtds/configuration_1_2.dtd should be used instead.
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1094/artifact/out/xml.txt
> {noformat}
> hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml:
> java.lang.RuntimeException: java.io.FileNotFoundException: 
> http://www.puppycrawl.com/dtds/configuration_1_2.dtd
>   at 
> jdk.nashorn.internal.runtime.ScriptRuntime.apply(ScriptRuntime.java:397)
>   at 
> jdk.nashorn.api.scripting.NashornScriptEngine.evalImpl(NashornScriptEngine.java:449)
>   at 
> jdk.nashorn.api.scripting.NashornScriptEngine.evalImpl(NashornScriptEngine.java:406)
>   at 
> jdk.nashorn.api.scripting.NashornScriptEngine.evalImpl(NashornScriptEngine.java:402)
>   at 
> jdk.nashorn.api.scripting.NashornScriptEngine.eval(NashornScriptEngine.java:155)
>   at javax.script.AbstractScriptEngine.eval(AbstractScriptEngine.java:264)
>   at com.sun.tools.script.shell.Main.evaluateString(Main.java:298)
>   at com.sun.tools.script.shell.Main.evaluateString(Main.java:319)
>   at com.sun.tools.script.shell.Main.access$300(Main.java:37)
>   at com.sun.tools.script.shell.Main$3.run(Main.java:217)
>   at com.sun.tools.script.shell.Main.main(Main.java:48)
> Caused by: java.io.FileNotFoundException: 
> http://www.puppycrawl.com/dtds/configuration_1_2.dtd
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1890)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
>   at 
> com.sun.org.apache.xerces.internal.impl.XMLEntityManager.setupCurrentEntity(XMLEntityManager.java:647)
>   at 
> com.sun.org.apache.xerces.internal.impl.XMLEntityManager.startEntity(XMLEntityManager.java:1304)
>   at 
> com.sun.org.apache.xerces.internal.impl.XMLEntityManager.startDTDEntity(XMLEntityManager.java:1270)
>   at 
> com.sun.org.apache.xerces.internal.impl.XMLDTDScannerImpl.setInputSource(XMLDTDScannerImpl.java:264)
>   at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$DTDDriver.dispatch(XMLDocumentScannerImpl.java:1161)
>   at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$DTDDriver.next(XMLDocumentScannerImpl.java:1045)
>   at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$PrologDriver.next(XMLDocumentScannerImpl.java:959)
>   at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:602)
>   at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:505)
>   at 
> com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:842)
>   at 
> com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:771)
>   at 
> com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141)
>   at 
> com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:243)
>   at 
> com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:339)
>   at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:205)
>   at 
> jdk.nashorn.internal.scripts.Script$Recompilation$2$19313A$\^system_init\_.XMLDocument(:747)
>   at jdk.nashorn.internal.scripts.Script$1$\^string\_.:program(:1)
>   at 
> jdk.nashorn.internal.runtime.ScriptFunctionData.invoke(ScriptFunctionData.java:637)
>   at 
> jdk.nashorn.internal.runtime.ScriptFunction.invoke(ScriptFunction.java:494)
>   at 
> jdk.nashorn.internal.runtime.ScriptRuntime.apply(ScriptRuntime.java:393)
>   ... 10 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hunshenshi opened a new pull request #764: HDFS-14455:Fix typo in HAState.java

2019-04-24 Thread GitBox
hunshenshi opened a new pull request #764: HDFS-14455:Fix typo in HAState.java
URL: https://github.com/apache/hadoop/pull/764
 
 
   There are some typo in HAState
   
   destructuve -> destructive
   
   Aleady -> Already
   
   Transtion -> Transition


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16266) Add more fine-grained processing time metrics to the RPC layer

2019-04-24 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824873#comment-16824873
 ] 

Chao Sun commented on HADOOP-16266:
---

Thanks [~cgregori]. A few comments on patch v2:
 - I see that with the patch, {{RpcScheduler#addResponseTime(String, int, int, 
int)}} is no longer used anywhere. Curious what the risks of removing it would be?
 - With the default implementation in {{RpcScheduler}}, you should not need to 
provide another impl in {{DefaultRpcScheduler}} (see the sketch after this list).
 - Unused import in {{TestConsistentReadsObserver}}.
 - Still thinking about whether the default time unit for {{ProcessingDetails}} 
can be nanoseconds instead of microseconds. At the moment we pass nanoseconds 
through {{ProcessingDetails#set()}} and then convert to microseconds, which 
seems unnecessary.
 - Some indentation is off, such as in {{DecayedRpcScheduler#addResponseTime}}.
 - Maybe it's worth pointing out that processing time always equals 
{{lock_free + lock_wait + lock_shared + lock_exclusive}} in 
{{ProcessingDetails}}?
 - We'll need to add documentation for the newly added metrics 
[here|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md].
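
A sketch of the default-method idea from the second bullet, with stand-in types 
(this illustrates the suggestion only, not the committed patch; the real 
{{Schedulable}} and {{ProcessingDetails}} live in org.apache.hadoop.ipc):

{code:java}
// Stand-ins for the real Hadoop IPC types, just to keep the sketch compilable.
interface Schedulable { }
interface ProcessingDetails { long get(String timingName); }

interface RpcScheduler {
  int getPriorityLevel(Schedulable obj);

  // A no-op default means DefaultRpcScheduler, and any other implementation
  // that ignores response times, needs no override of its own.
  default void addResponseTime(String callName, Schedulable schedulable,
      ProcessingDetails details) {
    // intentionally a no-op
  }
}
{code}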

> Add more fine-grained processing time metrics to the RPC layer
> --
>
> Key: HADOOP-16266
> URL: https://issues.apache.org/jira/browse/HADOOP-16266
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Christopher Gregorian
>Assignee: Christopher Gregorian
>Priority: Minor
>  Labels: rpc
> Attachments: HADOOP-16266.001.patch, HADOOP-16266.002.patch
>
>
> Splitting off of HDFS-14403 to track the first part: introduces more 
> fine-grained measuring of how a call's processing time is split up.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bshashikant commented on a change in pull request #749: HDDS-1395. Key write fails with BlockOutputStream has been closed exception

2019-04-24 Thread GitBox
bshashikant commented on a change in pull request #749: HDDS-1395. Key write 
fails with BlockOutputStream has been closed exception
URL: https://github.com/apache/hadoop/pull/749#discussion_r277983703
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/CommitWatcher.java
 ##
 @@ -188,7 +188,6 @@ void releaseBuffersOnException() {
*/
   public XceiverClientReply watchForCommit(long commitIndex)
   throws IOException {
-Preconditions.checkState(!commitIndex2flushedDataMap.isEmpty());
 
 Review comment:
   This was done to address HDDS-1436. watchForCommit is a public API which can 
be called independently in tests, with invalid commit indexes as well, irrespective 
of whether commitIndex2flushedDataMap contains the log entry or not; a sketch of 
that tolerant lookup is below. 
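   
   A standalone sketch of the tolerant behaviour, with simplified types standing 
in for the real CommitWatcher internals (illustration only, not the patch itself):
   
   ```java
   import java.util.List;
   import java.util.Map;
   import java.util.concurrent.ConcurrentSkipListMap;
   
   class CommitWatcherSketch {
     // Simplified stand-in for commitIndex2flushedDataMap.
     private final Map<Long, List<byte[]>> commitIndex2flushedDataMap =
         new ConcurrentSkipListMap<>();
   
     void releaseBuffers(long commitIndex) {
       // No checkState(!commitIndex2flushedDataMap.isEmpty()) up front:
       // callers, including tests, may probe arbitrary commit indexes.
       List<byte[]> buffers = commitIndex2flushedDataMap.remove(commitIndex);
       if (buffers != null) {
         buffers.clear(); // release only what was actually tracked
       }
     }
   }
   ```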


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bshashikant commented on a change in pull request #749: HDDS-1395. Key write fails with BlockOutputStream has been closed exception

2019-04-24 Thread GitBox
bshashikant commented on a change in pull request #749: HDDS-1395. Key write 
fails with BlockOutputStream has been closed exception
URL: https://github.com/apache/hadoop/pull/749#discussion_r277982684
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java
 ##
 @@ -574,14 +574,18 @@ public void cleanup(boolean invalidateClient) {
* @throws IOException if stream is closed
*/
   private void checkOpen() throws IOException {
-if (xceiverClient == null) {
+if (isClosed()) {
   throw new IOException("BlockOutputStream has been closed.");
 } else if (getIoException() != null) {
   adjustBuffersOnException();
   throw getIoException();
 }
   }
 
+  public boolean isClosed() {
 
 Review comment:
   This needs to be public because this function gets invoked from 
BlockOutputStreamEntryPool#isClosed() and they are in different packages (see 
the sketch below).
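   
   A simplified sketch of why package-private visibility would not compile here 
(stand-in classes, not the real ones):
   
   ```java
   // In package org.apache.hadoop.hdds.scm.storage:
   public class BlockOutputStream {
     private Object xceiverClient;
   
     // Public, because BlockOutputStreamEntryPool lives in the ozone client
     // package; package-private visibility would be inaccessible from there.
     public boolean isClosed() {
       return xceiverClient == null;
     }
   }
   
   // In the ozone client package, the pool can then delegate along these
   // lines (hypothetical names):
   // boolean isClosed() {
   //   return streamEntries.stream()
   //       .allMatch(e -> e.getOutputStream().isClosed());
   // }
   ```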


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bshashikant commented on a change in pull request #749: HDDS-1395. Key write fails with BlockOutputStream has been closed exception

2019-04-24 Thread GitBox
bshashikant commented on a change in pull request #749: HDDS-1395. Key write 
fails with BlockOutputStream has been closed exception
URL: https://github.com/apache/hadoop/pull/749#discussion_r277982572
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java
 ##
 @@ -574,14 +574,18 @@ public void cleanup(boolean invalidateClient) {
* @throws IOException if stream is closed
*/
   private void checkOpen() throws IOException {
-if (xceiverClient == null) {
+if (isClosed()) {
   throw new IOException("BlockOutputStream has been closed.");
 } else if (getIoException() != null) {
   adjustBuffersOnException();
   throw getIoException();
 }
   }
 
+  public boolean isClosed() {
 
 Review comment:
   This needs to be public because this function gets invoked from 
BlockOutputStreamEntryPool#isClosed() and they are in different packages.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org