[jira] [Commented] (HDFS-9401) Fix findbugs warnings in BlockRecoveryWorker

2015-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14998125#comment-14998125
 ] 

Hudson commented on HDFS-9401:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #650 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/650/])
HDFS-9401. Fix findbugs warnings in BlockRecoveryWorker. Contributed by 
(waltersu4549: rev 2fda45b9dc9c0bf9bb1380134c80836e89d50471)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java
* hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml


> Fix findbugs warnings in BlockRecoveryWorker
> 
>
> Key: HDFS-9401
> URL: https://issues.apache.org/jira/browse/HDFS-9401
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: HDFS-9401-002.patch, HDFS-9401.patch
>
>
> Noticed following findbug in HDFS-9400 
> {noformat}
> Call to 
> org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration.equals(org.apache.hadoop.hdfs.protocol.DatanodeInfo)
>  in 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()
> Bug type EC_UNRELATED_TYPES (click for details) 
> In class 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous
> In method 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()
> Actual type org.apache.hadoop.hdfs.protocol.DatanodeInfo
> Expected org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration
> Value loaded from id
> Value loaded from bpReg
> org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration.equals(Object) 
> used to determine equality
> At BlockRecoveryWorker.java:[line 116]
> {noformat}
> https://builds.apache.org/job/PreCommit-HDFS-Build/13433/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9401) Fix findbugs warnings in BlockRecoveryWorker

2015-11-09 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14998107#comment-14998107
 ] 

Brahma Reddy Battula commented on HDFS-9401:


[~walter.k.su] thanks for committing and reviewing this issue.

> Fix findbugs warnings in BlockRecoveryWorker
> 
>
> Key: HDFS-9401
> URL: https://issues.apache.org/jira/browse/HDFS-9401
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: HDFS-9401-002.patch, HDFS-9401.patch
>
>
> Noticed following findbug in HDFS-9400 
> {noformat}
> Call to 
> org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration.equals(org.apache.hadoop.hdfs.protocol.DatanodeInfo)
>  in 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()
> Bug type EC_UNRELATED_TYPES (click for details) 
> In class 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous
> In method 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()
> Actual type org.apache.hadoop.hdfs.protocol.DatanodeInfo
> Expected org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration
> Value loaded from id
> Value loaded from bpReg
> org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration.equals(Object) 
> used to determine equality
> At BlockRecoveryWorker.java:[line 116]
> {noformat}
> https://builds.apache.org/job/PreCommit-HDFS-Build/13433/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9079) Erasure coding: preallocate multiple generation stamps and serialize updates from data streamers

2015-11-09 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-9079:

Attachment: HDFS-9079.10.patch

Minor fix for test failures.

> Erasure coding: preallocate multiple generation stamps and serialize updates 
> from data streamers
> 
>
> Key: HDFS-9079
> URL: https://issues.apache.org/jira/browse/HDFS-9079
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: HDFS-7285
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-9079-HDFS-7285.00.patch, HDFS-9079.01.patch, 
> HDFS-9079.02.patch, HDFS-9079.03.patch, HDFS-9079.04.patch, 
> HDFS-9079.05.patch, HDFS-9079.06.patch, HDFS-9079.07.patch, 
> HDFS-9079.08.patch, HDFS-9079.09.patch, HDFS-9079.10.patch
>
>
> A non-striped DataStreamer goes through the following steps in error handling:
> {code}
> 1) Finds error => 2) Asks NN for new GS => 3) Gets new GS from NN => 4) 
> Applies new GS to DN (createBlockOutputStream) => 5) Ack from DN => 6) 
> Updates block on NN
> {code}
> With multiple streamer threads run in parallel, we need to correctly handle a 
> large number of possible combinations of interleaved thread events. For 
> example, {{streamer_B}} starts step 2 in between events {{streamer_A.2}} and 
> {{streamer_A.3}}.
> HDFS-9040 moves steps 1, 2, 3, 6 from streamer to {{DFSStripedOutputStream}}. 
> This JIRA proposes some further optimizations based on HDFS-9040:
> # We can preallocate GS when NN creates a new striped block group 
> ({{FSN#createNewBlock}}). For each new striped block group we can reserve 
> {{NUM_PARITY_BLOCKS}} GS's. If more than {{NUM_PARITY_BLOCKS}} errors have 
> happened we shouldn't try to further recover anyway.
> # We can use a dedicated event processor to offload the error handling logic 
> from {{DFSStripedOutputStream}}, which is not a long running daemon.
> # We can limit the lifespan of a streamer to be a single block. A streamer 
> ends either after finishing the current block or when encountering a DN 
> failure.
> With the proposed change, a {{StripedDataStreamer}}'s flow becomes:
> {code}
> 1) Finds DN error => 2) Notify coordinator (async, not waiting for response) 
> => terminates
> 1) Finds external error => 2) Applies new GS to DN (createBlockOutputStream) 
> => 3) Ack from DN => 4) Notify coordinator (async, not waiting for response)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9401) Fix findbugs warnings in BlockRecoveryWorker

2015-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14998084#comment-14998084
 ] 

Hudson commented on HDFS-9401:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1384 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1384/])
HDFS-9401. Fix findbugs warnings in BlockRecoveryWorker. Contributed by 
(waltersu4549: rev 2fda45b9dc9c0bf9bb1380134c80836e89d50471)
* hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java


> Fix findbugs warnings in BlockRecoveryWorker
> 
>
> Key: HDFS-9401
> URL: https://issues.apache.org/jira/browse/HDFS-9401
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: HDFS-9401-002.patch, HDFS-9401.patch
>
>
> Noticed following findbug in HDFS-9400 
> {noformat}
> Call to 
> org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration.equals(org.apache.hadoop.hdfs.protocol.DatanodeInfo)
>  in 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()
> Bug type EC_UNRELATED_TYPES (click for details) 
> In class 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous
> In method 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()
> Actual type org.apache.hadoop.hdfs.protocol.DatanodeInfo
> Expected org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration
> Value loaded from id
> Value loaded from bpReg
> org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration.equals(Object) 
> used to determine equality
> At BlockRecoveryWorker.java:[line 116]
> {noformat}
> https://builds.apache.org/job/PreCommit-HDFS-Build/13433/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-2261) AOP unit tests are not getting compiled or run

2015-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14998082#comment-14998082
 ] 

Hudson commented on HDFS-2261:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #590 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/590/])
HDFS-2261. AOP unit tests are not getting compiled or run. Contributed (wheat9: 
rev 94a1833638df0e23155f5ae61b81416627486a15)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/Pipeline.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiDataTransferProtocol2.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/HFlushAspects.aj
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/FiHFlushTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/protocol/ClientProtocolAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/namenode/RenameAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/BlockReceiverAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/namenode/FileDataServletAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/FiTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/namenode/ListPathAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/PipelineTest.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/DataTransferTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/PipelinesTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fs/TestFiListPath.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/DataTransferProtocolAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/ProbabilityModel.java
* 
hadoop-common-project/hadoop-common/src/test/aop/org/apache/hadoop/fi/ProbabilityModel.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/FSDatasetAspects.aj
* hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/DFSClientAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiPipelineClose.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/TestFiHFlush.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/TestFiPipelines.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiDataTransferProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/FiConfig.java
* 
hadoop-common-project/hadoop-common/src/test/aop/org/apache/hadoop/fi/FiConfig.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fs/TestFiRename.java


> AOP unit tests are not getting compiled or run 
> ---
>
> Key: HDFS-2261
> URL: https://issues.apache.org/jira/browse/HDFS-2261
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0-alpha, 2.0.4-alpha
> Environment: 
> https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/834/console
> -compile-fault-inject ant target 
>Reporter: Giridharan Kesavan
>Assignee: Haohui Mai
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-2261.000.patch, hdfs-2261.patch
>
>
> The tests in src/test/aop are not getting compiled or run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9401) Fix findbugs warnings in BlockRecoveryWorker

2015-11-09 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-9401:

Summary: Fix findbugs warnings in BlockRecoveryWorker  (was: Fix the 
findbug in 
o.a.h.h.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover())

> Fix findbugs warnings in BlockRecoveryWorker
> 
>
> Key: HDFS-9401
> URL: https://issues.apache.org/jira/browse/HDFS-9401
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: HDFS-9401-002.patch, HDFS-9401.patch
>
>
> Noticed following findbug in HDFS-9400 
> {noformat}
> Call to 
> org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration.equals(org.apache.hadoop.hdfs.protocol.DatanodeInfo)
>  in 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()
> Bug type EC_UNRELATED_TYPES (click for details) 
> In class 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous
> In method 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()
> Actual type org.apache.hadoop.hdfs.protocol.DatanodeInfo
> Expected org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration
> Value loaded from id
> Value loaded from bpReg
> org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration.equals(Object) 
> used to determine equality
> At BlockRecoveryWorker.java:[line 116]
> {noformat}
> https://builds.apache.org/job/PreCommit-HDFS-Build/13433/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9401) Fix the findbug in o.a.h.h.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()

2015-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14998079#comment-14998079
 ] 

Hudson commented on HDFS-9401:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8783 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8783/])
HDFS-9401. Fix findbugs warnings in BlockRecoveryWorker. Contributed by 
(waltersu4549: rev 2fda45b9dc9c0bf9bb1380134c80836e89d50471)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java


> Fix the findbug in 
> o.a.h.h.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()
> ---
>
> Key: HDFS-9401
> URL: https://issues.apache.org/jira/browse/HDFS-9401
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: HDFS-9401-002.patch, HDFS-9401.patch
>
>
> Noticed following findbug in HDFS-9400 
> {noformat}
> Call to 
> org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration.equals(org.apache.hadoop.hdfs.protocol.DatanodeInfo)
>  in 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()
> Bug type EC_UNRELATED_TYPES (click for details) 
> In class 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous
> In method 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()
> Actual type org.apache.hadoop.hdfs.protocol.DatanodeInfo
> Expected org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration
> Value loaded from id
> Value loaded from bpReg
> org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration.equals(Object) 
> used to determine equality
> At BlockRecoveryWorker.java:[line 116]
> {noformat}
> https://builds.apache.org/job/PreCommit-HDFS-Build/13433/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9401) Fix the findbug in o.a.h.h.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()

2015-11-09 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-9401:

  Resolution: Fixed
   Fix Version/s: 2.8.0
Target Version/s: 2.8.0
  Status: Resolved  (was: Patch Available)

committed to trunk and branch-2.

> Fix the findbug in 
> o.a.h.h.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()
> ---
>
> Key: HDFS-9401
> URL: https://issues.apache.org/jira/browse/HDFS-9401
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: HDFS-9401-002.patch, HDFS-9401.patch
>
>
> Noticed following findbug in HDFS-9400 
> {noformat}
> Call to 
> org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration.equals(org.apache.hadoop.hdfs.protocol.DatanodeInfo)
>  in 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()
> Bug type EC_UNRELATED_TYPES (click for details) 
> In class 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous
> In method 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()
> Actual type org.apache.hadoop.hdfs.protocol.DatanodeInfo
> Expected org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration
> Value loaded from id
> Value loaded from bpReg
> org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration.equals(Object) 
> used to determine equality
> At BlockRecoveryWorker.java:[line 116]
> {noformat}
> https://builds.apache.org/job/PreCommit-HDFS-Build/13433/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9401) Fix the findbug in o.a.h.h.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()

2015-11-09 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14998071#comment-14998071
 ] 

Walter Su commented on HDFS-9401:
-

+1. Thanks [~brahmareddy] for contribution. And Thanks [~cnauroth] for the 
detail. The jenkins output looks good.

> Fix the findbug in 
> o.a.h.h.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()
> ---
>
> Key: HDFS-9401
> URL: https://issues.apache.org/jira/browse/HDFS-9401
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9401-002.patch, HDFS-9401.patch
>
>
> Noticed following findbug in HDFS-9400 
> {noformat}
> Call to 
> org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration.equals(org.apache.hadoop.hdfs.protocol.DatanodeInfo)
>  in 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()
> Bug type EC_UNRELATED_TYPES (click for details) 
> In class 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous
> In method 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()
> Actual type org.apache.hadoop.hdfs.protocol.DatanodeInfo
> Expected org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration
> Value loaded from id
> Value loaded from bpReg
> org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration.equals(Object) 
> used to determine equality
> At BlockRecoveryWorker.java:[line 116]
> {noformat}
> https://builds.apache.org/job/PreCommit-HDFS-Build/13433/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9011) Support splitting BlockReport of a storage into multiple RPC

2015-11-09 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14998041#comment-14998041
 ] 

nijel commented on HDFS-9011:
-

looks like similar discussion happened in 
https://issues.apache.org/jira/browse/HDFS-8574.

> Support splitting BlockReport of a storage into multiple RPC
> 
>
> Key: HDFS-9011
> URL: https://issues.apache.org/jira/browse/HDFS-9011
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-9011.000.patch, HDFS-9011.001.patch, 
> HDFS-9011.002.patch
>
>
> Currently if a DataNode has too many blocks (more than 1m by default), it 
> sends multiple RPC to the NameNode for the block report, each RPC contains 
> report for a single storage. However, in practice we've seen sometimes even a 
> single storage can contains large amount of blocks and the report even 
> exceeds the max RPC data length. It may be helpful to support sending 
> multiple RPC for the block report of a storage. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9401) Fix the findbug in o.a.h.h.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()

2015-11-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14998011#comment-14998011
 ] 

Hadoop QA commented on HDFS-9401:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 21s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 28s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 6s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 11s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 33s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 19s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 22s 
{color} | {color:red} Patch generated 56 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 174m 29s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | 
hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.datanode.TestBlockReplacement |
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.TestEncryptionZones |
|   | hadoop.hdfs.TestDFSClientRetries |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
| JDK v1.7.0_79 Failed junit tests | 
hadoop.hdfs.shortcircuit.TestShortCircuitCache |
|   | hadoop.hdfs.server.blockmanagement.TestBlocksWi

[jira] [Commented] (HDFS-2261) AOP unit tests are not getting compiled or run

2015-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997982#comment-14997982
 ] 

Hudson commented on HDFS-2261:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2529 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2529/])
HDFS-2261. AOP unit tests are not getting compiled or run. Contributed (wheat9: 
rev 94a1833638df0e23155f5ae61b81416627486a15)
* 
hadoop-common-project/hadoop-common/src/test/aop/org/apache/hadoop/fi/FiConfig.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/FiHFlushTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/namenode/RenameAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/FSDatasetAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/TestFiHFlush.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/Pipeline.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/DFSClientAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiPipelineClose.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/namenode/ListPathAspects.aj
* hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/DataTransferProtocolAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/DataTransferTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/FiConfig.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fs/TestFiRename.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/protocol/ClientProtocolAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiDataTransferProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/HFlushAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/namenode/FileDataServletAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/ProbabilityModel.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/FiTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiDataTransferProtocol2.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/BlockReceiverAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/PipelineTest.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/aop/org/apache/hadoop/fi/ProbabilityModel.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/PipelinesTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/TestFiPipelines.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fs/TestFiListPath.java


> AOP unit tests are not getting compiled or run 
> ---
>
> Key: HDFS-2261
> URL: https://issues.apache.org/jira/browse/HDFS-2261
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0-alpha, 2.0.4-alpha
> Environment: 
> https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/834/console
> -compile-fault-inject ant target 
>Reporter: Giridharan Kesavan
>Assignee: Haohui Mai
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-2261.000.patch, hdfs-2261.patch
>
>
> The tests in src/test/aop are not getting compiled or run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9364) Unnecessary DNS resolution attempts when creating NameNodeProxies

2015-11-09 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997953#comment-14997953
 ] 

Xiao Chen commented on HDFS-9364:
-

Thanks Zhe.
The findbugs and asflicense warnings are not introduced by this patch.
The test failures seems not related and passed locally. 
({{TestRequestHedgingProxyProvider#testHedgingWhenOneFails}} appears to be 
failing for both jdk7 and jdk8, but looks unrelated and passed locally.)
The javac warning is expected, see original comment 
[here|https://issues.apache.org/jira/browse/HADOOP-9150?focusedCommentId=13547479&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13547479].

> Unnecessary DNS resolution attempts when creating NameNodeProxies
> -
>
> Key: HDFS-9364
> URL: https://issues.apache.org/jira/browse/HDFS-9364
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, performance
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9364.001.patch, HDFS-9364.002.patch, 
> HDFS-9364.003.patch, HDFS-9364.004.patch
>
>
> When creating NameNodeProxies, we always try to DNS-resolve namenode URIs. 
> This is unnecessary if the URI is logical, and may be significantly slow if 
> the DNS is having problems. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9401) Fix the findbug in o.a.h.h.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()

2015-11-09 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997919#comment-14997919
 ] 

Brahma Reddy Battula commented on HDFS-9401:


[~cnauroth] did not seen the exclude.xml.Thanks for pointing the same..But it 
is not taking effect since class name given as datanode,hence I am removing 
from exclude.xml...[~walter.k.su] and [~cnauroth] kindly take a look at latest 
patch..

> Fix the findbug in 
> o.a.h.h.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()
> ---
>
> Key: HDFS-9401
> URL: https://issues.apache.org/jira/browse/HDFS-9401
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9401-002.patch, HDFS-9401.patch
>
>
> Noticed following findbug in HDFS-9400 
> {noformat}
> Call to 
> org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration.equals(org.apache.hadoop.hdfs.protocol.DatanodeInfo)
>  in 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()
> Bug type EC_UNRELATED_TYPES (click for details) 
> In class 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous
> In method 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()
> Actual type org.apache.hadoop.hdfs.protocol.DatanodeInfo
> Expected org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration
> Value loaded from id
> Value loaded from bpReg
> org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration.equals(Object) 
> used to determine equality
> At BlockRecoveryWorker.java:[line 116]
> {noformat}
> https://builds.apache.org/job/PreCommit-HDFS-Build/13433/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9401) Fix the findbug in o.a.h.h.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()

2015-11-09 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-9401:
---
Attachment: HDFS-9401-002.patch

> Fix the findbug in 
> o.a.h.h.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()
> ---
>
> Key: HDFS-9401
> URL: https://issues.apache.org/jira/browse/HDFS-9401
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9401-002.patch, HDFS-9401.patch
>
>
> Noticed following findbug in HDFS-9400 
> {noformat}
> Call to 
> org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration.equals(org.apache.hadoop.hdfs.protocol.DatanodeInfo)
>  in 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()
> Bug type EC_UNRELATED_TYPES (click for details) 
> In class 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous
> In method 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()
> Actual type org.apache.hadoop.hdfs.protocol.DatanodeInfo
> Expected org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration
> Value loaded from id
> Value loaded from bpReg
> org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration.equals(Object) 
> used to determine equality
> At BlockRecoveryWorker.java:[line 116]
> {noformat}
> https://builds.apache.org/job/PreCommit-HDFS-Build/13433/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9249) NPE is thrown if an IOException is thrown in NameNode constructor

2015-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997898#comment-14997898
 ] 

Hudson commented on HDFS-9249:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #589 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/589/])
HDFS-9249. NPE is thrown if an IOException is thrown in NameNode (yzhang: rev 
2741a2109b98d0febb463cb318018ecbd3995102)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestBackupNode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/BackupNode.java


> NPE is thrown if an IOException is thrown in NameNode constructor
> -
>
> Key: HDFS-9249
> URL: https://issues.apache.org/jira/browse/HDFS-9249
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HDFS-9249.001.patch, HDFS-9249.002.patch, 
> HDFS-9249.003.patch, HDFS-9249.004.patch, HDFS-9249.005.patch, 
> HDFS-9249.006.patch
>
>
> This issue was found when running test case 
> TestBackupNode.testCheckpointNode, but upon closer look, the problem is not 
> due to the test case.
> Looks like an IOException was thrown in
> try {
>   initializeGenericKeys(conf, nsId, namenodeId);
>   initialize(conf);
>   try {
> haContext.writeLock();
> state.prepareToEnterState(haContext);
> state.enterState(haContext);
>   } finally {
> haContext.writeUnlock();
>   }
> causing the namenode to stop, but the namesystem was not yet properly 
> instantiated, causing NPE.
> I tried to reproduce locally, but to no avail.
> Because I could not reproduce the bug, and the log does not indicate what 
> caused the IOException, I suggest make this a supportability JIRA to log the 
> exception for future improvement.
> Stacktrace
> java.lang.NullPointerException: null
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.getFSImage(NameNode.java:906)
> at org.apache.hadoop.hdfs.server.namenode.BackupNode.stop(BackupNode.java:210)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:827)
> at 
> org.apache.hadoop.hdfs.server.namenode.BackupNode.(BackupNode.java:89)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1474)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestBackupNode.startBackupNode(TestBackupNode.java:102)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpoint(TestBackupNode.java:298)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpointNode(TestBackupNode.java:130)
> The last few lines of log:
> 2015-10-14 19:45:07,807 INFO namenode.NameNode 
> (NameNode.java:createNameNode(1422)) - createNameNode [-checkpoint]
> 2015-10-14 19:45:07,807 INFO impl.MetricsSystemImpl 
> (MetricsSystemImpl.java:init(158)) - CheckpointNode metrics system started 
> (again)
> 2015-10-14 19:45:07,808 INFO namenode.NameNode 
> (NameNode.java:setClientNamenodeAddress(402)) - fs.defaultFS is 
> hdfs://localhost:37835
> 2015-10-14 19:45:07,808 INFO namenode.NameNode 
> (NameNode.java:setClientNamenodeAddress(422)) - Clients are to use 
> localhost:37835 to access this namenode/service.
> 2015-10-14 19:45:07,810 INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1708)) - Shutting down the Mini HDFS Cluster
> 2015-10-14 19:45:07,810 INFO namenode.FSNamesystem 
> (FSNamesystem.java:stopActiveServices(1298)) - Stopping services started for 
> active state
> 2015-10-14 19:45:07,811 INFO namenode.FSEditLog 
> (FSEditLog.java:endCurrentLogSegment(1228)) - Ending log segment 1
> 2015-10-14 19:45:07,811 INFO namenode.FSNamesystem 
> (FSNamesystem.java:run(5306)) - NameNodeEditLogRoller was interrupted, exiting
> 2015-10-14 19:45:07,811 INFO namenode.FSEditLog 
> (FSEditLog.java:printStatistics(703)) - Number of transactions: 3 Total time 
> for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of 
> syncs: 4 SyncTimes(ms): 2 1 
> 2015-10-14 19:45:07,811 INFO namenode.FSNamesystem 
> (FSNamesystem.java:run(5373)) - LazyPersistFileScrubber was interrupted, 
> exiting
> 2015-10-14 19:45:07,822 INFO namenode.FileJournalManager 
> (FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file 
> /data/jenkins/workspace/CDH5.5.0-Hadoop-HDFS-2.6.0/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1/current/edits_inprogress_001
>  -> 
> /data/jenkins/workspace/CDH5.5.0-Had

[jira] [Commented] (HDFS-2261) AOP unit tests are not getting compiled or run

2015-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997895#comment-14997895
 ] 

Hudson commented on HDFS-2261:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #1383 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1383/])
HDFS-2261. AOP unit tests are not getting compiled or run. Contributed (wheat9: 
rev 94a1833638df0e23155f5ae61b81416627486a15)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/PipelinesTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiPipelineClose.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/namenode/RenameAspects.aj
* 
hadoop-common-project/hadoop-common/src/test/aop/org/apache/hadoop/fi/ProbabilityModel.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/TestFiHFlush.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/PipelineTest.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiDataTransferProtocol2.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fs/TestFiRename.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/DataTransferTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/FSDatasetAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiDataTransferProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/namenode/FileDataServletAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fs/TestFiListPath.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/namenode/ListPathAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/BlockReceiverAspects.aj
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/FiConfig.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/Pipeline.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/ProbabilityModel.java
* hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/FiHFlushTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/DataTransferProtocolAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/TestFiPipelines.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/HFlushAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/protocol/ClientProtocolAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/DFSClientAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/FiTestUtil.java
* 
hadoop-common-project/hadoop-common/src/test/aop/org/apache/hadoop/fi/FiConfig.java


> AOP unit tests are not getting compiled or run 
> ---
>
> Key: HDFS-2261
> URL: https://issues.apache.org/jira/browse/HDFS-2261
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0-alpha, 2.0.4-alpha
> Environment: 
> https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/834/console
> -compile-fault-inject ant target 
>Reporter: Giridharan Kesavan
>Assignee: Haohui Mai
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-2261.000.patch, hdfs-2261.patch
>
>
> The tests in src/test/aop are not getting compiled or run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9103) Retry reads on DN failure

2015-11-09 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997890#comment-14997890
 ] 

James Clampffer commented on HDFS-9103:
---

Thanks for taking a look!  I was hoping for some feedback like this.

"Let's remove the default argument and make it explicit for the caller."
Sounds good to me.  I'll get another patch up tomorrow.

"There is no need to do this. You can sort the lists by expiration time so that 
the fast path will always return in the first iteration."
The thinking here was it could be possible to have a handful of datanodes in 
the cluster that wouldn't be touched often, so the check in 
BadDataNodeTracker::IsBadNode would never be able to remove them from the map.  
I think realistically it'd only save 10s of KB in the absolute worst case 
(dozens seldom accessed nodes that had failed * ~100 bytes per map entry).  If 
you don't anticipate this situation being much of an issue in a typical HDFS 
cluster I'd be happy to remove it.

> Retry reads on DN failure
> -
>
> Key: HDFS-9103
> URL: https://issues.apache.org/jira/browse/HDFS-9103
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: James Clampffer
> Fix For: HDFS-8707
>
> Attachments: HDFS-9103.1.patch, HDFS-9103.2.patch, 
> HDFS-9103.HDFS-8707.006.patch, HDFS-9103.HDFS-8707.007.patch, 
> HDFS-9103.HDFS-8707.008.patch, HDFS-9103.HDFS-8707.3.patch, 
> HDFS-9103.HDFS-8707.4.patch, HDFS-9103.HDFS-8707.5.patch
>
>
> When AsyncPreadSome fails, add the failed DataNode to the excluded list and 
> try again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9364) Unnecessary DNS resolution attempts when creating NameNodeProxies

2015-11-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997889#comment-14997889
 ] 

Hadoop QA commented on HDFS-9364:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 8s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 17s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
30s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 18s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 48s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 47s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 15s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 9m 14s {color} 
| {color:red} hadoop-hdfs-project-jdk1.8.0_66 with JDK v1.8.0_66 generated 9 
new issues (was 29, now 30). {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 10m 21s 
{color} | {color:red} hadoop-hdfs-project-jdk1.7.0_79 with JDK v1.7.0_79 
generated 9 new issues (was 29, now 30). {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 53s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 39s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 27s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 7s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 53s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 3s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 21s 
{color} | {color:red} Patch generated 56 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 185m 49s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | 
hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.TestWriteReadStripedFile |
| 

[jira] [Commented] (HDFS-2261) AOP unit tests are not getting compiled or run

2015-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997884#comment-14997884
 ] 

Hudson commented on HDFS-2261:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #649 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/649/])
HDFS-2261. AOP unit tests are not getting compiled or run. Contributed (wheat9: 
rev 94a1833638df0e23155f5ae61b81416627486a15)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/namenode/RenameAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/DFSClientAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiPipelineClose.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/FSDatasetAspects.aj
* hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/BlockReceiverAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/ProbabilityModel.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/HFlushAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiDataTransferProtocol2.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/namenode/FileDataServletAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiDataTransferProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fs/TestFiListPath.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/DataTransferTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/PipelinesTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/FiTestUtil.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/PipelineTest.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fs/TestFiRename.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/TestFiPipelines.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/FiConfig.java
* 
hadoop-common-project/hadoop-common/src/test/aop/org/apache/hadoop/fi/ProbabilityModel.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/Pipeline.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/TestFiHFlush.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/namenode/ListPathAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/FiHFlushTestUtil.java
* 
hadoop-common-project/hadoop-common/src/test/aop/org/apache/hadoop/fi/FiConfig.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/protocol/ClientProtocolAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/DataTransferProtocolAspects.aj


> AOP unit tests are not getting compiled or run 
> ---
>
> Key: HDFS-2261
> URL: https://issues.apache.org/jira/browse/HDFS-2261
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0-alpha, 2.0.4-alpha
> Environment: 
> https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/834/console
> -compile-fault-inject ant target 
>Reporter: Giridharan Kesavan
>Assignee: Haohui Mai
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-2261.000.patch, hdfs-2261.patch
>
>
> The tests in src/test/aop are not getting compiled or run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-2261) AOP unit tests are not getting compiled or run

2015-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997870#comment-14997870
 ] 

Hudson commented on HDFS-2261:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #660 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/660/])
HDFS-2261. AOP unit tests are not getting compiled or run. Contributed (wheat9: 
rev 94a1833638df0e23155f5ae61b81416627486a15)
* 
hadoop-common-project/hadoop-common/src/test/aop/org/apache/hadoop/fi/ProbabilityModel.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/namenode/RenameAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/TestFiPipelines.java
* hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/namenode/ListPathAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiPipelineClose.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fs/TestFiListPath.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/PipelineTest.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/FiTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/Pipeline.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/DataTransferTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/protocol/ClientProtocolAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/BlockReceiverAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/TestFiHFlush.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiDataTransferProtocol2.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/FiHFlushTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/DFSClientAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/FSDatasetAspects.aj
* 
hadoop-common-project/hadoop-common/src/test/aop/org/apache/hadoop/fi/FiConfig.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/PipelinesTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/namenode/FileDataServletAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/ProbabilityModel.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/HFlushAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/DataTransferProtocolAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/FiConfig.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fs/TestFiRename.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiDataTransferProtocol.java


> AOP unit tests are not getting compiled or run 
> ---
>
> Key: HDFS-2261
> URL: https://issues.apache.org/jira/browse/HDFS-2261
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0-alpha, 2.0.4-alpha
> Environment: 
> https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/834/console
> -compile-fault-inject ant target 
>Reporter: Giridharan Kesavan
>Assignee: Haohui Mai
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-2261.000.patch, hdfs-2261.patch
>
>
> The tests in src/test/aop are not getting compiled or run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9400) TestRollingUpgradeRollback fails on branch-2.

2015-11-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997871#comment-14997871
 ] 

Hadoop QA commented on HDFS-9400:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
50s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} branch-2 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 19s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in branch-2 has 5 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 13s 
{color} | {color:red} Patch generated 1 new checkstyle issues in 
hadoop-hdfs-project/hadoop-hdfs-client (total was 57, now 57). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 16s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 11s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
31s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 6s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-10 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12771477/HDFS-9400-branch-2.001.patch
 |
| JIRA Issue | HDFS-9400 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile  |
| uname | Linux 5c640cedc021 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/apache-yet

[jira] [Commented] (HDFS-9249) NPE is thrown if an IOException is thrown in NameNode constructor

2015-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997867#comment-14997867
 ] 

Hudson commented on HDFS-9249:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2589 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2589/])
HDFS-9249. NPE is thrown if an IOException is thrown in NameNode (yzhang: rev 
2741a2109b98d0febb463cb318018ecbd3995102)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestBackupNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/BackupNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> NPE is thrown if an IOException is thrown in NameNode constructor
> -
>
> Key: HDFS-9249
> URL: https://issues.apache.org/jira/browse/HDFS-9249
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HDFS-9249.001.patch, HDFS-9249.002.patch, 
> HDFS-9249.003.patch, HDFS-9249.004.patch, HDFS-9249.005.patch, 
> HDFS-9249.006.patch
>
>
> This issue was found when running test case 
> TestBackupNode.testCheckpointNode, but upon closer look, the problem is not 
> due to the test case.
> Looks like an IOException was thrown in
> try {
>   initializeGenericKeys(conf, nsId, namenodeId);
>   initialize(conf);
>   try {
> haContext.writeLock();
> state.prepareToEnterState(haContext);
> state.enterState(haContext);
>   } finally {
> haContext.writeUnlock();
>   }
> causing the namenode to stop, but the namesystem was not yet properly 
> instantiated, causing NPE.
> I tried to reproduce locally, but to no avail.
> Because I could not reproduce the bug, and the log does not indicate what 
> caused the IOException, I suggest make this a supportability JIRA to log the 
> exception for future improvement.
> Stacktrace
> java.lang.NullPointerException: null
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.getFSImage(NameNode.java:906)
> at org.apache.hadoop.hdfs.server.namenode.BackupNode.stop(BackupNode.java:210)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:827)
> at 
> org.apache.hadoop.hdfs.server.namenode.BackupNode.(BackupNode.java:89)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1474)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestBackupNode.startBackupNode(TestBackupNode.java:102)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpoint(TestBackupNode.java:298)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpointNode(TestBackupNode.java:130)
> The last few lines of log:
> 2015-10-14 19:45:07,807 INFO namenode.NameNode 
> (NameNode.java:createNameNode(1422)) - createNameNode [-checkpoint]
> 2015-10-14 19:45:07,807 INFO impl.MetricsSystemImpl 
> (MetricsSystemImpl.java:init(158)) - CheckpointNode metrics system started 
> (again)
> 2015-10-14 19:45:07,808 INFO namenode.NameNode 
> (NameNode.java:setClientNamenodeAddress(402)) - fs.defaultFS is 
> hdfs://localhost:37835
> 2015-10-14 19:45:07,808 INFO namenode.NameNode 
> (NameNode.java:setClientNamenodeAddress(422)) - Clients are to use 
> localhost:37835 to access this namenode/service.
> 2015-10-14 19:45:07,810 INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1708)) - Shutting down the Mini HDFS Cluster
> 2015-10-14 19:45:07,810 INFO namenode.FSNamesystem 
> (FSNamesystem.java:stopActiveServices(1298)) - Stopping services started for 
> active state
> 2015-10-14 19:45:07,811 INFO namenode.FSEditLog 
> (FSEditLog.java:endCurrentLogSegment(1228)) - Ending log segment 1
> 2015-10-14 19:45:07,811 INFO namenode.FSNamesystem 
> (FSNamesystem.java:run(5306)) - NameNodeEditLogRoller was interrupted, exiting
> 2015-10-14 19:45:07,811 INFO namenode.FSEditLog 
> (FSEditLog.java:printStatistics(703)) - Number of transactions: 3 Total time 
> for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of 
> syncs: 4 SyncTimes(ms): 2 1 
> 2015-10-14 19:45:07,811 INFO namenode.FSNamesystem 
> (FSNamesystem.java:run(5373)) - LazyPersistFileScrubber was interrupted, 
> exiting
> 2015-10-14 19:45:07,822 INFO namenode.FileJournalManager 
> (FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file 
> /data/jenkins/workspace/CDH5.5.0-Hadoop-HDFS-2.6.0/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1/current/edits_inprogress_001
>  -> 
> /data/jenkins/workspace/CDH5.5.0-Had

[jira] [Commented] (HDFS-2261) AOP unit tests are not getting compiled or run

2015-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997868#comment-14997868
 ] 

Hudson commented on HDFS-2261:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2589 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2589/])
HDFS-2261. AOP unit tests are not getting compiled or run. Contributed (wheat9: 
rev 94a1833638df0e23155f5ae61b81416627486a15)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/DataTransferProtocolAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/TestFiPipelines.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/TestFiHFlush.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/Pipeline.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/ProbabilityModel.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/namenode/ListPathAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/FiTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiDataTransferProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/PipelinesTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/FiHFlushTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/BlockReceiverAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/PipelineTest.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/DFSClientAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fs/TestFiRename.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiDataTransferProtocol2.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/protocol/ClientProtocolAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/namenode/FileDataServletAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fs/TestFiListPath.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/namenode/RenameAspects.aj
* hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
hadoop-common-project/hadoop-common/src/test/aop/org/apache/hadoop/fi/FiConfig.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/FiConfig.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiPipelineClose.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/FSDatasetAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/DataTransferTestUtil.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/HFlushAspects.aj
* 
hadoop-common-project/hadoop-common/src/test/aop/org/apache/hadoop/fi/ProbabilityModel.java


> AOP unit tests are not getting compiled or run 
> ---
>
> Key: HDFS-2261
> URL: https://issues.apache.org/jira/browse/HDFS-2261
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0-alpha, 2.0.4-alpha
> Environment: 
> https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/834/console
> -compile-fault-inject ant target 
>Reporter: Giridharan Kesavan
>Assignee: Haohui Mai
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-2261.000.patch, hdfs-2261.patch
>
>
> The tests in src/test/aop are not getting compiled or run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9400) TestRollingUpgradeRollback fails on branch-2.

2015-11-09 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997837#comment-14997837
 ] 

Mingliang Liu commented on HDFS-9400:
-

By the way, [~brahmareddy], feel free to consolidate the v1 patch to a refined 
one. I did not fully debug the failing test and may miss some context.

> TestRollingUpgradeRollback fails on branch-2.
> -
>
> Key: HDFS-9400
> URL: https://issues.apache.org/jira/browse/HDFS-9400
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chris Nauroth
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Attachments: HDFS-9400-branch-2.001.patch, HDFS-9400-branch-2.patch
>
>
> During a Jenkins pre-commit run on branch-2 for the HDFS-9394 patch, we 
> noticed a pre-existing failure in {{TestRollingUpgradeRollback}}.  I have 
> confirmed that this test is failing in branch-2 only.  It passes in trunk, 
> and it passes in branch-2.7.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9400) TestRollingUpgradeRollback fails on branch-2.

2015-11-09 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997838#comment-14997838
 ] 

Brahma Reddy Battula commented on HDFS-9400:


bq.The fix may be to simply revert the unnecessary checkOpen() method call 
brought by HDFS-8979.

Yes, patch looks fine to me..[~cnauroth] do you think same..?

> TestRollingUpgradeRollback fails on branch-2.
> -
>
> Key: HDFS-9400
> URL: https://issues.apache.org/jira/browse/HDFS-9400
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chris Nauroth
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Attachments: HDFS-9400-branch-2.001.patch, HDFS-9400-branch-2.patch
>
>
> During a Jenkins pre-commit run on branch-2 for the HDFS-9394 patch, we 
> noticed a pre-existing failure in {{TestRollingUpgradeRollback}}.  I have 
> confirmed that this test is failing in branch-2 only.  It passes in trunk, 
> and it passes in branch-2.7.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8979) Clean up checkstyle warnings in hadoop-hdfs-client module

2015-11-09 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997819#comment-14997819
 ] 

Brahma Reddy Battula commented on HDFS-8979:


bq.perhaps we can keep the commit and fix failures like HDFS-9400 separately. 
What your opinion?
Yes, this should be fine since this is bigpatch..I will post the patch in 
HDFS-9400.

> Clean up checkstyle warnings in hadoop-hdfs-client module
> -
>
> Key: HDFS-8979
> URL: https://issues.apache.org/jira/browse/HDFS-8979
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-8979.000.patch, HDFS-8979.001.patch, 
> HDFS-8979.002.patch
>
>
> This jira tracks the effort of cleaning up checkstyle warnings in 
> {{hadoop-hdfs-client}} module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9117) Config file reader / options classes for libhdfs++

2015-11-09 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997815#comment-14997815
 ] 

Haohui Mai commented on HDFS-9117:
--

bq. I can strip it down to the API you provided, but I wonder what use case it 
will be serving then.

Please do.

Many users use Hadoop in a controlled environment. They know where the 
configuration is and has preferences on not depending on environment variables 
as they can be changed easily. Cloud deployment is one example.

It's relatively starightforward to add these functionality to another layer but 
it's hard to take it out when it's coupled with the core layer.

> Config file reader / options classes for libhdfs++
> --
>
> Key: HDFS-9117
> URL: https://issues.apache.org/jira/browse/HDFS-9117
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: HDFS-8707
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-9117.HDFS-8707.001.patch, 
> HDFS-9117.HDFS-8707.002.patch, HDFS-9117.HDFS-8707.003.patch, 
> HDFS-9117.HDFS-8707.004.patch, HDFS-9117.HDFS-8707.005.patch, 
> HDFS-9117.HDFS-8707.006.patch, HDFS-9117.HDFS-8707.008.patch, 
> HDFS-9117.HDFS-8707.009.patch, HDFS-9117.HDFS-8707.010.patch, 
> HDFS-9117.HDFS-8707.011.patch, HDFS-9117.HDFS-8707.012.patch, 
> HDFS-9117.HDFS-9288.007.patch
>
>
> For environmental compatability with HDFS installations, libhdfs++ should be 
> able to read the configurations from Hadoop XML files and behave in line with 
> the Java implementation.
> Most notably, machine names and ports should be readable from Hadoop XML 
> configuration files.
> Similarly, an internal Options architecture for libhdfs++ should be developed 
> to efficiently transport the configuration information within the system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9400) TestRollingUpgradeRollback fails on branch-2.

2015-11-09 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9400:

Attachment: HDFS-9400-branch-2.001.patch

Thank you [~cnauroth] for narrowing down the cause of this failing test. The 
[HDFS-8979] was not supposed to contain logic change. Unfortunately it brought 
some changes which was actually added by [HDFS-8332] in trunk.

Thank you [~brahmareddy] for working on this and investigating the root cause. 
I totally agree with you that the root cause is calling {{checkOpen()}} in 
{{DFSClient}} class, brought by [HDFS-8979] which obviously was not aware of 
the revert of [HDFS-8332] from {{branch-2}} when committing.

The fix may be to simply revert the unnecessary {{checkOpen()}} method call 
brought by [HDFS-8979]. I tested the v1 patch locally on my Gentoo Linux and 
Mac, and it seems to work.

> TestRollingUpgradeRollback fails on branch-2.
> -
>
> Key: HDFS-9400
> URL: https://issues.apache.org/jira/browse/HDFS-9400
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chris Nauroth
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Attachments: HDFS-9400-branch-2.001.patch, HDFS-9400-branch-2.patch
>
>
> During a Jenkins pre-commit run on branch-2 for the HDFS-9394 patch, we 
> noticed a pre-existing failure in {{TestRollingUpgradeRollback}}.  I have 
> confirmed that this test is failing in branch-2 only.  It passes in trunk, 
> and it passes in branch-2.7.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9249) NPE is thrown if an IOException is thrown in NameNode constructor

2015-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997804#comment-14997804
 ] 

Hudson commented on HDFS-9249:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2528 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2528/])
HDFS-9249. NPE is thrown if an IOException is thrown in NameNode (yzhang: rev 
2741a2109b98d0febb463cb318018ecbd3995102)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestBackupNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/BackupNode.java


> NPE is thrown if an IOException is thrown in NameNode constructor
> -
>
> Key: HDFS-9249
> URL: https://issues.apache.org/jira/browse/HDFS-9249
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HDFS-9249.001.patch, HDFS-9249.002.patch, 
> HDFS-9249.003.patch, HDFS-9249.004.patch, HDFS-9249.005.patch, 
> HDFS-9249.006.patch
>
>
> This issue was found when running test case 
> TestBackupNode.testCheckpointNode, but upon closer look, the problem is not 
> due to the test case.
> Looks like an IOException was thrown in
> try {
>   initializeGenericKeys(conf, nsId, namenodeId);
>   initialize(conf);
>   try {
> haContext.writeLock();
> state.prepareToEnterState(haContext);
> state.enterState(haContext);
>   } finally {
> haContext.writeUnlock();
>   }
> causing the namenode to stop, but the namesystem was not yet properly 
> instantiated, causing NPE.
> I tried to reproduce locally, but to no avail.
> Because I could not reproduce the bug, and the log does not indicate what 
> caused the IOException, I suggest make this a supportability JIRA to log the 
> exception for future improvement.
> Stacktrace
> java.lang.NullPointerException: null
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.getFSImage(NameNode.java:906)
> at org.apache.hadoop.hdfs.server.namenode.BackupNode.stop(BackupNode.java:210)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:827)
> at 
> org.apache.hadoop.hdfs.server.namenode.BackupNode.(BackupNode.java:89)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1474)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestBackupNode.startBackupNode(TestBackupNode.java:102)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpoint(TestBackupNode.java:298)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpointNode(TestBackupNode.java:130)
> The last few lines of log:
> 2015-10-14 19:45:07,807 INFO namenode.NameNode 
> (NameNode.java:createNameNode(1422)) - createNameNode [-checkpoint]
> 2015-10-14 19:45:07,807 INFO impl.MetricsSystemImpl 
> (MetricsSystemImpl.java:init(158)) - CheckpointNode metrics system started 
> (again)
> 2015-10-14 19:45:07,808 INFO namenode.NameNode 
> (NameNode.java:setClientNamenodeAddress(402)) - fs.defaultFS is 
> hdfs://localhost:37835
> 2015-10-14 19:45:07,808 INFO namenode.NameNode 
> (NameNode.java:setClientNamenodeAddress(422)) - Clients are to use 
> localhost:37835 to access this namenode/service.
> 2015-10-14 19:45:07,810 INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1708)) - Shutting down the Mini HDFS Cluster
> 2015-10-14 19:45:07,810 INFO namenode.FSNamesystem 
> (FSNamesystem.java:stopActiveServices(1298)) - Stopping services started for 
> active state
> 2015-10-14 19:45:07,811 INFO namenode.FSEditLog 
> (FSEditLog.java:endCurrentLogSegment(1228)) - Ending log segment 1
> 2015-10-14 19:45:07,811 INFO namenode.FSNamesystem 
> (FSNamesystem.java:run(5306)) - NameNodeEditLogRoller was interrupted, exiting
> 2015-10-14 19:45:07,811 INFO namenode.FSEditLog 
> (FSEditLog.java:printStatistics(703)) - Number of transactions: 3 Total time 
> for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of 
> syncs: 4 SyncTimes(ms): 2 1 
> 2015-10-14 19:45:07,811 INFO namenode.FSNamesystem 
> (FSNamesystem.java:run(5373)) - LazyPersistFileScrubber was interrupted, 
> exiting
> 2015-10-14 19:45:07,822 INFO namenode.FileJournalManager 
> (FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file 
> /data/jenkins/workspace/CDH5.5.0-Hadoop-HDFS-2.6.0/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1/current/edits_inprogress_001
>  -> 
> /data/jenkins/workspace/CDH5.5.0-Hadoop-HDFS-2

[jira] [Commented] (HDFS-7553) fix the TestDFSUpgradeWithHA due to BindException

2015-11-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997795#comment-14997795
 ] 

Hadoop QA commented on HDFS-7553:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 5s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
7s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 53s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 48s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 46s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 54m 21s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 50m 17s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 22s 
{color} | {color:red} Patch generated 58 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 124m 17s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_60 Failed junit tests | 
hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
|   | hadoop.hdfs.TestWriteReadStripedFile |
| JDK v1.7.0_79 Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 |
|   | hadoop.hdfs.TestRecoverStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-09 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12771461/HDFS-7553.002.patch |
| JIRA Issue | HDFS-7553 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile  |
| uname | Linux 95b91f4157c7 3.13.0-36-lowlatency #63-Ubuntu SMP

[jira] [Commented] (HDFS-8979) Clean up checkstyle warnings in hadoop-hdfs-client module

2015-11-09 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997788#comment-14997788
 ] 

Mingliang Liu commented on HDFS-8979:
-

Thanks for your kind information. I had a look at the patch, and you're right, 
it brought {{checkOpen()}} to {{branch-2}} mistakenly. The reason was that I 
was not aware of the [HDFS-8332] was reverted in {{branch-2}} but not in 
{{trunk}}. I revisited the patch and found these are the only logic changes in 
the patch. As its commit contains changes to many evolving classes in the 
{{hdfs-client}} module, perhaps we can keep the commit and fix failures like 
[HDFS-9400] separately. What your opinion?

> Clean up checkstyle warnings in hadoop-hdfs-client module
> -
>
> Key: HDFS-8979
> URL: https://issues.apache.org/jira/browse/HDFS-8979
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-8979.000.patch, HDFS-8979.001.patch, 
> HDFS-8979.002.patch
>
>
> This jira tracks the effort of cleaning up checkstyle warnings in 
> {{hadoop-hdfs-client}} module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9234) WebHdfs : getContentSummary() should give quota for storage types

2015-11-09 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997787#comment-14997787
 ] 

Surendra Singh Lilhore commented on HDFS-9234:
--

Thanks [~xyao]

> WebHdfs : getContentSummary() should give quota for storage types
> -
>
> Key: HDFS-9234
> URL: https://issues.apache.org/jira/browse/HDFS-9234
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 2.8.0
>
> Attachments: HDFS-9234-001.patch, HDFS-9234-002.patch, 
> HDFS-9234-003.patch, HDFS-9234-004.patch, HDFS-9234-005.patch, 
> HDFS-9234-006.patch, HDFS-9234-007.patch
>
>
> Currently webhdfs API for ContentSummary give only namequota and spacequota 
> but it will not give storage types quota.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9249) NPE is thrown if an IOException is thrown in NameNode constructor

2015-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997762#comment-14997762
 ] 

Hudson commented on HDFS-9249:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #1382 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1382/])
HDFS-9249. NPE is thrown if an IOException is thrown in NameNode (yzhang: rev 
2741a2109b98d0febb463cb318018ecbd3995102)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/BackupNode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestBackupNode.java


> NPE is thrown if an IOException is thrown in NameNode constructor
> -
>
> Key: HDFS-9249
> URL: https://issues.apache.org/jira/browse/HDFS-9249
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HDFS-9249.001.patch, HDFS-9249.002.patch, 
> HDFS-9249.003.patch, HDFS-9249.004.patch, HDFS-9249.005.patch, 
> HDFS-9249.006.patch
>
>
> This issue was found when running test case 
> TestBackupNode.testCheckpointNode, but upon closer look, the problem is not 
> due to the test case.
> Looks like an IOException was thrown in
> try {
>   initializeGenericKeys(conf, nsId, namenodeId);
>   initialize(conf);
>   try {
> haContext.writeLock();
> state.prepareToEnterState(haContext);
> state.enterState(haContext);
>   } finally {
> haContext.writeUnlock();
>   }
> causing the namenode to stop, but the namesystem was not yet properly 
> instantiated, causing NPE.
> I tried to reproduce locally, but to no avail.
> Because I could not reproduce the bug, and the log does not indicate what 
> caused the IOException, I suggest make this a supportability JIRA to log the 
> exception for future improvement.
> Stacktrace
> java.lang.NullPointerException: null
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.getFSImage(NameNode.java:906)
> at org.apache.hadoop.hdfs.server.namenode.BackupNode.stop(BackupNode.java:210)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:827)
> at 
> org.apache.hadoop.hdfs.server.namenode.BackupNode.(BackupNode.java:89)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1474)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestBackupNode.startBackupNode(TestBackupNode.java:102)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpoint(TestBackupNode.java:298)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpointNode(TestBackupNode.java:130)
> The last few lines of log:
> 2015-10-14 19:45:07,807 INFO namenode.NameNode 
> (NameNode.java:createNameNode(1422)) - createNameNode [-checkpoint]
> 2015-10-14 19:45:07,807 INFO impl.MetricsSystemImpl 
> (MetricsSystemImpl.java:init(158)) - CheckpointNode metrics system started 
> (again)
> 2015-10-14 19:45:07,808 INFO namenode.NameNode 
> (NameNode.java:setClientNamenodeAddress(402)) - fs.defaultFS is 
> hdfs://localhost:37835
> 2015-10-14 19:45:07,808 INFO namenode.NameNode 
> (NameNode.java:setClientNamenodeAddress(422)) - Clients are to use 
> localhost:37835 to access this namenode/service.
> 2015-10-14 19:45:07,810 INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1708)) - Shutting down the Mini HDFS Cluster
> 2015-10-14 19:45:07,810 INFO namenode.FSNamesystem 
> (FSNamesystem.java:stopActiveServices(1298)) - Stopping services started for 
> active state
> 2015-10-14 19:45:07,811 INFO namenode.FSEditLog 
> (FSEditLog.java:endCurrentLogSegment(1228)) - Ending log segment 1
> 2015-10-14 19:45:07,811 INFO namenode.FSNamesystem 
> (FSNamesystem.java:run(5306)) - NameNodeEditLogRoller was interrupted, exiting
> 2015-10-14 19:45:07,811 INFO namenode.FSEditLog 
> (FSEditLog.java:printStatistics(703)) - Number of transactions: 3 Total time 
> for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of 
> syncs: 4 SyncTimes(ms): 2 1 
> 2015-10-14 19:45:07,811 INFO namenode.FSNamesystem 
> (FSNamesystem.java:run(5373)) - LazyPersistFileScrubber was interrupted, 
> exiting
> 2015-10-14 19:45:07,822 INFO namenode.FileJournalManager 
> (FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file 
> /data/jenkins/workspace/CDH5.5.0-Hadoop-HDFS-2.6.0/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1/current/edits_inprogress_001
>  -> 
> /data/jenkins/workspace/CDH5.5.0-Hadoop-HDFS-2

[jira] [Commented] (HDFS-9079) Erasure coding: preallocate multiple generation stamps and serialize updates from data streamers

2015-11-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997754#comment-14997754
 ] 

Hadoop QA commented on HDFS-9079:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 6s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
3s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 0s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 39s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 25s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 33s 
{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 23s 
{color} | {color:red} Patch generated 97 new checkstyle issues in 
hadoop-hdfs-project (total was 365, now 454). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 11s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client introduced 2 new 
FindBugs issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 42s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 25s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 30s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 58s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 19s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 0s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 19s 
{color} | {color:red} Patch generated 56 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 150m 35s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  Exception is caught when Exception is not thrown in 
org.apache.hadoop.hdfs.DFSStripedOutputStream.closeImpl()  At 
DFSStripedOutputStream.java:is not thrown in 
org.apache.hadoop.hdfs.DFSStripedOutputStream.closeImpl()  At 
DFSStripedOutputStream.java:[line 612] |
|  |  org.apache.hadoop.hdfs.DFSStripedOutputStream.closeImpl() calls 
Thread.sle

[jira] [Commented] (HDFS-9387) Parse namenodeUri parameter only once in NNThroughputBenchmark$OperationStatsBase#verifyOpArgument()

2015-11-09 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997743#comment-14997743
 ] 

Mingliang Liu commented on HDFS-9387:
-

Thanks for your further view, [~xyao].

The {{TestNNThroughputBenchmark}} is considered a driver to runt he benchmark, 
instead of unit testing the benchmark itself. There is no effort of unit 
testing the code largely because it's implemented in a way that uses too many 
nested classes and few public APIs. For example, the {{run()}} method contains 
the whole logic to drive the test and it's hard to unit test each step. In this 
case, one needs to start a real name node server and passes the URI to 
{{-namenode}} arguments in order to run this benchmark. As a result, it is hard 
to unit test the piece of code that parses arguments like {{-namenode}}.

Perhaps we can address the unit testing issue separately.

> Parse namenodeUri parameter only once in 
> NNThroughputBenchmark$OperationStatsBase#verifyOpArgument()
> 
>
> Key: HDFS-9387
> URL: https://issues.apache.org/jira/browse/HDFS-9387
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9387.000.patch
>
>
> In {{NNThroughputBenchmark$OperationStatsBase#verifyOpArgument()}}, the   
> {{namenodeUri}} is always parsed from {{-namenode}} argument. This works just 
> fine if the {{-op}} parameter is not {{all}}, as the single benchmark will 
> need to parse the {{namenodeUri}} from args anyway.
> When the {{-op}} is {{all}}, namely all sub-benchmark will run, multiple 
> sub-benchmark will call the {{verifyOpArgument()}} method. In this case, the 
> first sub-benchmark reads the {{namenode}} argument and removes it from args. 
> The other sub-benchmarks will thereafter read {{null}} value since the 
> argument is removed. This contradicts the intension of providing {{namenode}} 
> for all sub-benchmarks.
> {code:title=current code}
>   try {
> namenodeUri = StringUtils.popOptionWithArgument("-namenode", args);
>   } catch (IllegalArgumentException iae) {
> printUsage();
>   }
> {code}
> The fix is to parse the {{namenodeUri}}, which is shared by all 
> sub-benchmarks, from {{-namenode}} argument only once. This follows the 
> convention of parsing other global arguments in 
> {{OperationStatsBase#verifyOpArgument()}}.
> {code:title=simple fix}
>   if (args.indexOf("-namenode") >= 0) {
> try {
>   namenodeUri = StringUtils.popOptionWithArgument("-namenode", args);
> } catch (IllegalArgumentException iae) {
>   printUsage();
> }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9249) NPE is thrown if an IOException is thrown in NameNode constructor

2015-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997727#comment-14997727
 ] 

Hudson commented on HDFS-9249:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #659 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/659/])
HDFS-9249. NPE is thrown if an IOException is thrown in NameNode (yzhang: rev 
2741a2109b98d0febb463cb318018ecbd3995102)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/BackupNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestBackupNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> NPE is thrown if an IOException is thrown in NameNode constructor
> -
>
> Key: HDFS-9249
> URL: https://issues.apache.org/jira/browse/HDFS-9249
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HDFS-9249.001.patch, HDFS-9249.002.patch, 
> HDFS-9249.003.patch, HDFS-9249.004.patch, HDFS-9249.005.patch, 
> HDFS-9249.006.patch
>
>
> This issue was found when running test case 
> TestBackupNode.testCheckpointNode, but upon closer look, the problem is not 
> due to the test case.
> Looks like an IOException was thrown in
> try {
>   initializeGenericKeys(conf, nsId, namenodeId);
>   initialize(conf);
>   try {
> haContext.writeLock();
> state.prepareToEnterState(haContext);
> state.enterState(haContext);
>   } finally {
> haContext.writeUnlock();
>   }
> causing the namenode to stop, but the namesystem was not yet properly 
> instantiated, causing NPE.
> I tried to reproduce locally, but to no avail.
> Because I could not reproduce the bug, and the log does not indicate what 
> caused the IOException, I suggest make this a supportability JIRA to log the 
> exception for future improvement.
> Stacktrace
> java.lang.NullPointerException: null
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.getFSImage(NameNode.java:906)
> at org.apache.hadoop.hdfs.server.namenode.BackupNode.stop(BackupNode.java:210)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:827)
> at 
> org.apache.hadoop.hdfs.server.namenode.BackupNode.(BackupNode.java:89)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1474)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestBackupNode.startBackupNode(TestBackupNode.java:102)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpoint(TestBackupNode.java:298)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpointNode(TestBackupNode.java:130)
> The last few lines of log:
> 2015-10-14 19:45:07,807 INFO namenode.NameNode 
> (NameNode.java:createNameNode(1422)) - createNameNode [-checkpoint]
> 2015-10-14 19:45:07,807 INFO impl.MetricsSystemImpl 
> (MetricsSystemImpl.java:init(158)) - CheckpointNode metrics system started 
> (again)
> 2015-10-14 19:45:07,808 INFO namenode.NameNode 
> (NameNode.java:setClientNamenodeAddress(402)) - fs.defaultFS is 
> hdfs://localhost:37835
> 2015-10-14 19:45:07,808 INFO namenode.NameNode 
> (NameNode.java:setClientNamenodeAddress(422)) - Clients are to use 
> localhost:37835 to access this namenode/service.
> 2015-10-14 19:45:07,810 INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1708)) - Shutting down the Mini HDFS Cluster
> 2015-10-14 19:45:07,810 INFO namenode.FSNamesystem 
> (FSNamesystem.java:stopActiveServices(1298)) - Stopping services started for 
> active state
> 2015-10-14 19:45:07,811 INFO namenode.FSEditLog 
> (FSEditLog.java:endCurrentLogSegment(1228)) - Ending log segment 1
> 2015-10-14 19:45:07,811 INFO namenode.FSNamesystem 
> (FSNamesystem.java:run(5306)) - NameNodeEditLogRoller was interrupted, exiting
> 2015-10-14 19:45:07,811 INFO namenode.FSEditLog 
> (FSEditLog.java:printStatistics(703)) - Number of transactions: 3 Total time 
> for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of 
> syncs: 4 SyncTimes(ms): 2 1 
> 2015-10-14 19:45:07,811 INFO namenode.FSNamesystem 
> (FSNamesystem.java:run(5373)) - LazyPersistFileScrubber was interrupted, 
> exiting
> 2015-10-14 19:45:07,822 INFO namenode.FileJournalManager 
> (FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file 
> /data/jenkins/workspace/CDH5.5.0-Hadoop-HDFS-2.6.0/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1/current/edits_inprogress_001
>  -> 
> /data/jenkins/workspace/CDH5.5.0-Had

[jira] [Commented] (HDFS-9249) NPE is thrown if an IOException is thrown in NameNode constructor

2015-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997723#comment-14997723
 ] 

Hudson commented on HDFS-9249:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #648 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/648/])
HDFS-9249. NPE is thrown if an IOException is thrown in NameNode (yzhang: rev 
2741a2109b98d0febb463cb318018ecbd3995102)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/BackupNode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestBackupNode.java


> NPE is thrown if an IOException is thrown in NameNode constructor
> -
>
> Key: HDFS-9249
> URL: https://issues.apache.org/jira/browse/HDFS-9249
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HDFS-9249.001.patch, HDFS-9249.002.patch, 
> HDFS-9249.003.patch, HDFS-9249.004.patch, HDFS-9249.005.patch, 
> HDFS-9249.006.patch
>
>
> This issue was found when running test case 
> TestBackupNode.testCheckpointNode, but upon closer look, the problem is not 
> due to the test case.
> Looks like an IOException was thrown in
> try {
>   initializeGenericKeys(conf, nsId, namenodeId);
>   initialize(conf);
>   try {
> haContext.writeLock();
> state.prepareToEnterState(haContext);
> state.enterState(haContext);
>   } finally {
> haContext.writeUnlock();
>   }
> causing the namenode to stop, but the namesystem was not yet properly 
> instantiated, causing NPE.
> I tried to reproduce locally, but to no avail.
> Because I could not reproduce the bug, and the log does not indicate what 
> caused the IOException, I suggest make this a supportability JIRA to log the 
> exception for future improvement.
> Stacktrace
> java.lang.NullPointerException: null
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.getFSImage(NameNode.java:906)
> at org.apache.hadoop.hdfs.server.namenode.BackupNode.stop(BackupNode.java:210)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:827)
> at 
> org.apache.hadoop.hdfs.server.namenode.BackupNode.(BackupNode.java:89)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1474)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestBackupNode.startBackupNode(TestBackupNode.java:102)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpoint(TestBackupNode.java:298)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpointNode(TestBackupNode.java:130)
> The last few lines of log:
> 2015-10-14 19:45:07,807 INFO namenode.NameNode 
> (NameNode.java:createNameNode(1422)) - createNameNode [-checkpoint]
> 2015-10-14 19:45:07,807 INFO impl.MetricsSystemImpl 
> (MetricsSystemImpl.java:init(158)) - CheckpointNode metrics system started 
> (again)
> 2015-10-14 19:45:07,808 INFO namenode.NameNode 
> (NameNode.java:setClientNamenodeAddress(402)) - fs.defaultFS is 
> hdfs://localhost:37835
> 2015-10-14 19:45:07,808 INFO namenode.NameNode 
> (NameNode.java:setClientNamenodeAddress(422)) - Clients are to use 
> localhost:37835 to access this namenode/service.
> 2015-10-14 19:45:07,810 INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1708)) - Shutting down the Mini HDFS Cluster
> 2015-10-14 19:45:07,810 INFO namenode.FSNamesystem 
> (FSNamesystem.java:stopActiveServices(1298)) - Stopping services started for 
> active state
> 2015-10-14 19:45:07,811 INFO namenode.FSEditLog 
> (FSEditLog.java:endCurrentLogSegment(1228)) - Ending log segment 1
> 2015-10-14 19:45:07,811 INFO namenode.FSNamesystem 
> (FSNamesystem.java:run(5306)) - NameNodeEditLogRoller was interrupted, exiting
> 2015-10-14 19:45:07,811 INFO namenode.FSEditLog 
> (FSEditLog.java:printStatistics(703)) - Number of transactions: 3 Total time 
> for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of 
> syncs: 4 SyncTimes(ms): 2 1 
> 2015-10-14 19:45:07,811 INFO namenode.FSNamesystem 
> (FSNamesystem.java:run(5373)) - LazyPersistFileScrubber was interrupted, 
> exiting
> 2015-10-14 19:45:07,822 INFO namenode.FileJournalManager 
> (FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file 
> /data/jenkins/workspace/CDH5.5.0-Hadoop-HDFS-2.6.0/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1/current/edits_inprogress_001
>  -> 
> /data/jenkins/workspace/CD

[jira] [Commented] (HDFS-9405) When starting a file, NameNode should generate EDEK in a separate thread

2015-11-09 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997719#comment-14997719
 ] 

Andrew Wang commented on HDFS-9405:
---

+1, if we have a background service for synchronous KMS operations, these would 
be good things to tackle too.

I think the hardest part of all this is going to be error handling though. 
Right now we get a stack trace on the client after blocking for a while, which 
is pretty clear. If it's async, we'll need some new NN metrics, and also make 
sure the client still has reasonable behavior and useful messages too. 
RetryStartFileException is related.

> When starting a file, NameNode should generate EDEK in a separate thread
> 
>
> Key: HDFS-9405
> URL: https://issues.apache.org/jira/browse/HDFS-9405
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption, namenode
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>
> {{generateEncryptedDataEncryptionKey}} involves a non-trivial I/O operation 
> to the key provider, which could be slow or cause timeout. It should be done 
> as a separate thread so as to return a proper error message to the RPC caller.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9405) When starting a file, NameNode should generate EDEK in a separate thread

2015-11-09 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997709#comment-14997709
 ] 

Arun Suresh commented on HDFS-9405:
---

Makes sense.. IIRC, we were planning on cycling through all EZ keys and calling 
warmUp on each key on NN startup / failover, don't think it was done though. 
Having an asnyc thread do this at startup might also make sense ? 

> When starting a file, NameNode should generate EDEK in a separate thread
> 
>
> Key: HDFS-9405
> URL: https://issues.apache.org/jira/browse/HDFS-9405
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption, namenode
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>
> {{generateEncryptedDataEncryptionKey}} involves a non-trivial I/O operation 
> to the key provider, which could be slow or cause timeout. It should be done 
> as a separate thread so as to return a proper error message to the RPC caller.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9405) When starting a file, NameNode should generate EDEK in a separate thread

2015-11-09 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997674#comment-14997674
 ] 

Andrew Wang commented on HDFS-9405:
---

Yup good points Arun. As you noted though, we do the initial cache warmup 
synchronously, which happens when we create an encryption zone. I'd like to 
move this to a background thread so it's not blocking an RPC handler if the KMS 
is down. I think this same issue of blocking an RPC handler can happen in 
startFile after a NN cold start or failover.

> When starting a file, NameNode should generate EDEK in a separate thread
> 
>
> Key: HDFS-9405
> URL: https://issues.apache.org/jira/browse/HDFS-9405
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption, namenode
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>
> {{generateEncryptedDataEncryptionKey}} involves a non-trivial I/O operation 
> to the key provider, which could be slow or cause timeout. It should be done 
> as a separate thread so as to return a proper error message to the RPC caller.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9103) Retry reads on DN failure

2015-11-09 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997662#comment-14997662
 ] 

Haohui Mai commented on HDFS-9103:
--

Just take a quick skim.

{code}
   template 
   void AsyncPreadSome(size_t offset, const MutableBufferSequence &buffers,
-  const std::set &excluded_datanodes,
-  const Handler &handler);
+  const Handler &handler,
+  std::shared_ptr 
optional_rule_override = nullptr);

151   std::shared_ptr rule = optional_exclude_rule != 
nullptr ?
152 optional_exclude_rule : 
bad_node_tracker_;
{code}

Let's remove the default argument and make it explicit for the caller.

{code}
+  /* prune orphaned DNs from list periodically */
+  if(remove_counter_++ % 1024 == 0 && remove_counter_ != 0) {
+RemoveAllExpired();
+remove_counter_ = 0;
+  }
{code}

There is no need to do this. You can sort the lists by expiration time so that 
the fast path will always return in the first iteration.

> Retry reads on DN failure
> -
>
> Key: HDFS-9103
> URL: https://issues.apache.org/jira/browse/HDFS-9103
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: James Clampffer
> Fix For: HDFS-8707
>
> Attachments: HDFS-9103.1.patch, HDFS-9103.2.patch, 
> HDFS-9103.HDFS-8707.006.patch, HDFS-9103.HDFS-8707.007.patch, 
> HDFS-9103.HDFS-8707.008.patch, HDFS-9103.HDFS-8707.3.patch, 
> HDFS-9103.HDFS-8707.4.patch, HDFS-9103.HDFS-8707.5.patch
>
>
> When AsyncPreadSome fails, add the failed DataNode to the excluded list and 
> try again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-2261) AOP unit tests are not getting compiled or run

2015-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997658#comment-14997658
 ] 

Hudson commented on HDFS-2261:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8782 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8782/])
HDFS-2261. AOP unit tests are not getting compiled or run. Contributed (wheat9: 
rev 94a1833638df0e23155f5ae61b81416627486a15)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/FiConfig.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/HFlushAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/BlockReceiverAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/ProbabilityModel.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/protocol/ClientProtocolAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/PipelineTest.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/DataTransferProtocolAspects.aj
* 
hadoop-common-project/hadoop-common/src/test/aop/org/apache/hadoop/fi/FiConfig.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/FiHFlushTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiPipelineClose.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/FiTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/TestFiPipelines.java
* 
hadoop-common-project/hadoop-common/src/test/aop/org/apache/hadoop/fi/ProbabilityModel.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/PipelinesTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/FSDatasetAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/TestFiHFlush.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiDataTransferProtocol.java
* hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/DFSClientAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/namenode/FileDataServletAspects.aj
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/namenode/RenameAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/namenode/ListPathAspects.aj
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/DataTransferTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/hdfs/server/datanode/TestFiDataTransferProtocol2.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fs/TestFiRename.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fs/TestFiListPath.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/aop/org/apache/hadoop/fi/Pipeline.java


> AOP unit tests are not getting compiled or run 
> ---
>
> Key: HDFS-2261
> URL: https://issues.apache.org/jira/browse/HDFS-2261
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0-alpha, 2.0.4-alpha
> Environment: 
> https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/834/console
> -compile-fault-inject ant target 
>Reporter: Giridharan Kesavan
>Assignee: Haohui Mai
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-2261.000.patch, hdfs-2261.patch
>
>
> The tests in src/test/aop are not getting compiled or run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-2261) AOP unit tests are not getting compiled or run

2015-11-09 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-2261:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2.

> AOP unit tests are not getting compiled or run 
> ---
>
> Key: HDFS-2261
> URL: https://issues.apache.org/jira/browse/HDFS-2261
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0-alpha, 2.0.4-alpha
> Environment: 
> https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/834/console
> -compile-fault-inject ant target 
>Reporter: Giridharan Kesavan
>Assignee: Haohui Mai
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-2261.000.patch, hdfs-2261.patch
>
>
> The tests in src/test/aop are not getting compiled or run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9405) When starting a file, NameNode should generate EDEK in a separate thread

2015-11-09 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997642#comment-14997642
 ] 

Arun Suresh commented on HDFS-9405:
---

I believe the *generateEncryptedDataEncryptionKey* actually calls the 
configured key provider's *generateEncryptedKey* method. If encryption is 
enabled, this would generally be the *KMSClientProvider*.
The *KMSClientProvider* actually caches a bunch of EDEK when an encryption zone 
is created, so for other than the first EDEK (for which is call and response 
happens in the same thread) the rest of the EDEKs are actually picked from the 
cache.

> When starting a file, NameNode should generate EDEK in a separate thread
> 
>
> Key: HDFS-9405
> URL: https://issues.apache.org/jira/browse/HDFS-9405
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption, namenode
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>
> {{generateEncryptedDataEncryptionKey}} involves a non-trivial I/O operation 
> to the key provider, which could be slow or cause timeout. It should be done 
> as a separate thread so as to return a proper error message to the RPC caller.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-2261) AOP unit tests are not getting compiled or run

2015-11-09 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai reassigned HDFS-2261:


Assignee: Haohui Mai

> AOP unit tests are not getting compiled or run 
> ---
>
> Key: HDFS-2261
> URL: https://issues.apache.org/jira/browse/HDFS-2261
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0-alpha, 2.0.4-alpha
> Environment: 
> https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/834/console
> -compile-fault-inject ant target 
>Reporter: Giridharan Kesavan
>Assignee: Haohui Mai
>Priority: Minor
> Attachments: HDFS-2261.000.patch, hdfs-2261.patch
>
>
> The tests in src/test/aop are not getting compiled or run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7553) fix the TestDFSUpgradeWithHA due to BindException

2015-11-09 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-7553:

Attachment: HDFS-7553.002.patch

> fix the TestDFSUpgradeWithHA due to BindException
> -
>
> Key: HDFS-7553
> URL: https://issues.apache.org/jira/browse/HDFS-7553
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Liang Xie
>Assignee: Liang Xie
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7553-001.txt, HDFS-7553.002.patch
>
>
> see 
> https://builds.apache.org/job/PreCommit-HDFS-Build/9092//testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestDFSUpgradeWithHA/testNfsUpgrade/
>  :
> Error Message
> Port in use: localhost:57896
> Stacktrace
> java.net.BindException: Port in use: localhost:57896
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:444)
>   at sun.nio.ch.Net.bind(Net.java:436)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at 
> org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
>   at 
> org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:868)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:809)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:704)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:591)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:763)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:747)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1443)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1815)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1796)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA.testNfsUpgrade(TestDFSUpgradeWithHA.java:285)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7553) fix the TestDFSUpgradeWithHA due to BindException

2015-11-09 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997637#comment-14997637
 ] 

Xiao Chen commented on HDFS-7553:
-

I came across the same error:

*Error Message*
Port in use: localhost:36908
*Stacktrace*
{noformat}
java.net.BindException: Port in use: localhost:36908
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at 
org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
at 
org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:939)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:880)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:754)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:643)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:818)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:797)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1493)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1820)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1801)
at 
org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA.testRollbackWithNfs(TestDFSUpgradeWithHA.java:593)
{noformat}

And I agree with [~cnauroth]'s speculation regarding the race condition. Patch 
002 added the {{join}} method. I'm having a hard time reproducing the issue 
though, mainly because of the difficulty of reaching the BindException in 
jetty, so no unit test added.

As a side note, setting {{DFS_NAMENODE_HTTP_ADDRESS_KEY}} at the beginning 
would not work, because in {{NameNodeHttpServer#start}}, we {{conf.set}} it to 
the address (including port) of the open connection. So when later in the test 
where {{cluster.restartNameNode}}, this configuration is read and will be used 
to start the http server. I feel no change is needed in this test since the RC 
is on the connection side anyways.

> fix the TestDFSUpgradeWithHA due to BindException
> -
>
> Key: HDFS-7553
> URL: https://issues.apache.org/jira/browse/HDFS-7553
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Liang Xie
>Assignee: Liang Xie
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7553-001.txt
>
>
> see 
> https://builds.apache.org/job/PreCommit-HDFS-Build/9092//testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestDFSUpgradeWithHA/testNfsUpgrade/
>  :
> Error Message
> Port in use: localhost:57896
> Stacktrace
> java.net.BindException: Port in use: localhost:57896
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:444)
>   at sun.nio.ch.Net.bind(Net.java:436)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at 
> org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
>   at 
> org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:868)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:809)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:704)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:591)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:763)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:747)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1443)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1815)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1796)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA.testNfsUpgrade(TestDFSUpgradeWithHA.java:285)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9234) WebHdfs : getContentSummary() should give quota for storage types

2015-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997627#comment-14997627
 ] 

Hudson commented on HDFS-9234:
--

ABORTED: Integrated in Hadoop-Hdfs-trunk-Java8 #588 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/588/])
HDFS-9234. WebHdfs: getContentSummary() should give quota for storage (xyao: 
rev 41d3f8899d8b96568f56331eaf598bb356ecdae0)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java


> WebHdfs : getContentSummary() should give quota for storage types
> -
>
> Key: HDFS-9234
> URL: https://issues.apache.org/jira/browse/HDFS-9234
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 2.8.0
>
> Attachments: HDFS-9234-001.patch, HDFS-9234-002.patch, 
> HDFS-9234-003.patch, HDFS-9234-004.patch, HDFS-9234-005.patch, 
> HDFS-9234-006.patch, HDFS-9234-007.patch
>
>
> Currently webhdfs API for ContentSummary give only namequota and spacequota 
> but it will not give storage types quota.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9405) When starting a file, NameNode should generate EDEK in a separate thread

2015-11-09 Thread Zhe Zhang (JIRA)
Zhe Zhang created HDFS-9405:
---

 Summary: When starting a file, NameNode should generate EDEK in a 
separate thread
 Key: HDFS-9405
 URL: https://issues.apache.org/jira/browse/HDFS-9405
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: encryption, namenode
Affects Versions: 2.7.1
Reporter: Zhe Zhang


{{generateEncryptedDataEncryptionKey}} involves a non-trivial I/O operation to 
the key provider, which could be slow or cause timeout. It should be done as a 
separate thread so as to return a proper error message to the RPC caller.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9249) NPE is thrown if an IOException is thrown in NameNode constructor

2015-11-09 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997620#comment-14997620
 ] 

Yongjun Zhang commented on HDFS-9249:
-

BTW, I created HDFS-9404 for the findbugs issue (not introduced by the patch 
here), and it turned out to be a duplicate of HDFS-9401.

> NPE is thrown if an IOException is thrown in NameNode constructor
> -
>
> Key: HDFS-9249
> URL: https://issues.apache.org/jira/browse/HDFS-9249
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HDFS-9249.001.patch, HDFS-9249.002.patch, 
> HDFS-9249.003.patch, HDFS-9249.004.patch, HDFS-9249.005.patch, 
> HDFS-9249.006.patch
>
>
> This issue was found when running test case 
> TestBackupNode.testCheckpointNode, but upon closer look, the problem is not 
> due to the test case.
> Looks like an IOException was thrown in
> try {
>   initializeGenericKeys(conf, nsId, namenodeId);
>   initialize(conf);
>   try {
> haContext.writeLock();
> state.prepareToEnterState(haContext);
> state.enterState(haContext);
>   } finally {
> haContext.writeUnlock();
>   }
> causing the namenode to stop, but the namesystem was not yet properly 
> instantiated, causing NPE.
> I tried to reproduce locally, but to no avail.
> Because I could not reproduce the bug, and the log does not indicate what 
> caused the IOException, I suggest make this a supportability JIRA to log the 
> exception for future improvement.
> Stacktrace
> java.lang.NullPointerException: null
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.getFSImage(NameNode.java:906)
> at org.apache.hadoop.hdfs.server.namenode.BackupNode.stop(BackupNode.java:210)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:827)
> at 
> org.apache.hadoop.hdfs.server.namenode.BackupNode.(BackupNode.java:89)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1474)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestBackupNode.startBackupNode(TestBackupNode.java:102)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpoint(TestBackupNode.java:298)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpointNode(TestBackupNode.java:130)
> The last few lines of log:
> 2015-10-14 19:45:07,807 INFO namenode.NameNode 
> (NameNode.java:createNameNode(1422)) - createNameNode [-checkpoint]
> 2015-10-14 19:45:07,807 INFO impl.MetricsSystemImpl 
> (MetricsSystemImpl.java:init(158)) - CheckpointNode metrics system started 
> (again)
> 2015-10-14 19:45:07,808 INFO namenode.NameNode 
> (NameNode.java:setClientNamenodeAddress(402)) - fs.defaultFS is 
> hdfs://localhost:37835
> 2015-10-14 19:45:07,808 INFO namenode.NameNode 
> (NameNode.java:setClientNamenodeAddress(422)) - Clients are to use 
> localhost:37835 to access this namenode/service.
> 2015-10-14 19:45:07,810 INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1708)) - Shutting down the Mini HDFS Cluster
> 2015-10-14 19:45:07,810 INFO namenode.FSNamesystem 
> (FSNamesystem.java:stopActiveServices(1298)) - Stopping services started for 
> active state
> 2015-10-14 19:45:07,811 INFO namenode.FSEditLog 
> (FSEditLog.java:endCurrentLogSegment(1228)) - Ending log segment 1
> 2015-10-14 19:45:07,811 INFO namenode.FSNamesystem 
> (FSNamesystem.java:run(5306)) - NameNodeEditLogRoller was interrupted, exiting
> 2015-10-14 19:45:07,811 INFO namenode.FSEditLog 
> (FSEditLog.java:printStatistics(703)) - Number of transactions: 3 Total time 
> for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of 
> syncs: 4 SyncTimes(ms): 2 1 
> 2015-10-14 19:45:07,811 INFO namenode.FSNamesystem 
> (FSNamesystem.java:run(5373)) - LazyPersistFileScrubber was interrupted, 
> exiting
> 2015-10-14 19:45:07,822 INFO namenode.FileJournalManager 
> (FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file 
> /data/jenkins/workspace/CDH5.5.0-Hadoop-HDFS-2.6.0/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1/current/edits_inprogress_001
>  -> 
> /data/jenkins/workspace/CDH5.5.0-Hadoop-HDFS-2.6.0/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1/current/edits_001-003
> 2015-10-14 19:45:07,835 INFO namenode.FileJournalManager 
> (FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file 
> /data/jenkins/workspace/CDH5.5.0-Hadoop-HDFS-2.6.0/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name2/current/edits_inprogress_001
>  -> 
> /data/jenkins/workspac

[jira] [Updated] (HDFS-9249) NPE is thrown if an IOException is thrown in NameNode constructor

2015-11-09 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-9249:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2.

Thanks [~jojochuang] for the contribution and [~ste...@apache.org] for the 
review.


> NPE is thrown if an IOException is thrown in NameNode constructor
> -
>
> Key: HDFS-9249
> URL: https://issues.apache.org/jira/browse/HDFS-9249
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HDFS-9249.001.patch, HDFS-9249.002.patch, 
> HDFS-9249.003.patch, HDFS-9249.004.patch, HDFS-9249.005.patch, 
> HDFS-9249.006.patch
>
>
> This issue was found when running test case 
> TestBackupNode.testCheckpointNode, but upon closer look, the problem is not 
> due to the test case.
> Looks like an IOException was thrown in
> try {
>   initializeGenericKeys(conf, nsId, namenodeId);
>   initialize(conf);
>   try {
> haContext.writeLock();
> state.prepareToEnterState(haContext);
> state.enterState(haContext);
>   } finally {
> haContext.writeUnlock();
>   }
> causing the namenode to stop, but the namesystem was not yet properly 
> instantiated, causing NPE.
> I tried to reproduce locally, but to no avail.
> Because I could not reproduce the bug, and the log does not indicate what 
> caused the IOException, I suggest make this a supportability JIRA to log the 
> exception for future improvement.
> Stacktrace
> java.lang.NullPointerException: null
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.getFSImage(NameNode.java:906)
> at org.apache.hadoop.hdfs.server.namenode.BackupNode.stop(BackupNode.java:210)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:827)
> at 
> org.apache.hadoop.hdfs.server.namenode.BackupNode.(BackupNode.java:89)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1474)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestBackupNode.startBackupNode(TestBackupNode.java:102)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpoint(TestBackupNode.java:298)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpointNode(TestBackupNode.java:130)
> The last few lines of log:
> 2015-10-14 19:45:07,807 INFO namenode.NameNode 
> (NameNode.java:createNameNode(1422)) - createNameNode [-checkpoint]
> 2015-10-14 19:45:07,807 INFO impl.MetricsSystemImpl 
> (MetricsSystemImpl.java:init(158)) - CheckpointNode metrics system started 
> (again)
> 2015-10-14 19:45:07,808 INFO namenode.NameNode 
> (NameNode.java:setClientNamenodeAddress(402)) - fs.defaultFS is 
> hdfs://localhost:37835
> 2015-10-14 19:45:07,808 INFO namenode.NameNode 
> (NameNode.java:setClientNamenodeAddress(422)) - Clients are to use 
> localhost:37835 to access this namenode/service.
> 2015-10-14 19:45:07,810 INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1708)) - Shutting down the Mini HDFS Cluster
> 2015-10-14 19:45:07,810 INFO namenode.FSNamesystem 
> (FSNamesystem.java:stopActiveServices(1298)) - Stopping services started for 
> active state
> 2015-10-14 19:45:07,811 INFO namenode.FSEditLog 
> (FSEditLog.java:endCurrentLogSegment(1228)) - Ending log segment 1
> 2015-10-14 19:45:07,811 INFO namenode.FSNamesystem 
> (FSNamesystem.java:run(5306)) - NameNodeEditLogRoller was interrupted, exiting
> 2015-10-14 19:45:07,811 INFO namenode.FSEditLog 
> (FSEditLog.java:printStatistics(703)) - Number of transactions: 3 Total time 
> for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of 
> syncs: 4 SyncTimes(ms): 2 1 
> 2015-10-14 19:45:07,811 INFO namenode.FSNamesystem 
> (FSNamesystem.java:run(5373)) - LazyPersistFileScrubber was interrupted, 
> exiting
> 2015-10-14 19:45:07,822 INFO namenode.FileJournalManager 
> (FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file 
> /data/jenkins/workspace/CDH5.5.0-Hadoop-HDFS-2.6.0/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1/current/edits_inprogress_001
>  -> 
> /data/jenkins/workspace/CDH5.5.0-Hadoop-HDFS-2.6.0/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1/current/edits_001-003
> 2015-10-14 19:45:07,835 INFO namenode.FileJournalManager 
> (FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file 
> /data/jenkins/workspace/CDH5.5.0-Hadoop-HDFS-2.6.0/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name2/current/edits_inpro

[jira] [Commented] (HDFS-9364) Unnecessary DNS resolution attempts when creating NameNodeProxies

2015-11-09 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997575#comment-14997575
 ] 

Zhe Zhang commented on HDFS-9364:
-

Thanks Xiao. +1 pending Jenkins. I'm not sure why Jenkins didn't start on v4 
patch. I just manually triggered.

> Unnecessary DNS resolution attempts when creating NameNodeProxies
> -
>
> Key: HDFS-9364
> URL: https://issues.apache.org/jira/browse/HDFS-9364
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, performance
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9364.001.patch, HDFS-9364.002.patch, 
> HDFS-9364.003.patch, HDFS-9364.004.patch
>
>
> When creating NameNodeProxies, we always try to DNS-resolve namenode URIs. 
> This is unnecessary if the URI is logical, and may be significantly slow if 
> the DNS is having problems. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8425) [umbrella] Performance tuning, investigation and optimization for erasure coding

2015-11-09 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997520#comment-14997520
 ] 

Zhe Zhang commented on HDFS-8425:
-

bq. Per local test, it's 2.5x slower than repl. We need a faster codec
HADOOP-11887 has just shipped. It's much faster than the current coder.

I think it's very important to isolate different factors impacting performance. 
For this purpose, [~lirui] is working on HDFS-9345. With that we should be able 
to figure out any potential performance issue with the output logic (non-codec).

Rui is also working on HDFS-8968, which is a more comprehensive benchmark for 
EC I/O. 

> [umbrella] Performance tuning, investigation and optimization for erasure 
> coding
> 
>
> Key: HDFS-8425
> URL: https://issues.apache.org/jira/browse/HDFS-8425
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: GAO Rui
> Attachments: testClientWriteReadFile_v1.pdf, 
> testdfsio-read-mbsec.png, testdfsio-write-mbsec.png
>
>
> This {{umbrella}} jira aims to track performance tuning, investigation and 
> optimization for erasure coding.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9249) NPE is thrown if an IOException is thrown in NameNode constructor

2015-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997517#comment-14997517
 ] 

Hudson commented on HDFS-9249:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8781 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8781/])
HDFS-9249. NPE is thrown if an IOException is thrown in NameNode (yzhang: rev 
2741a2109b98d0febb463cb318018ecbd3995102)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/BackupNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestBackupNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java


> NPE is thrown if an IOException is thrown in NameNode constructor
> -
>
> Key: HDFS-9249
> URL: https://issues.apache.org/jira/browse/HDFS-9249
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: supportability
> Attachments: HDFS-9249.001.patch, HDFS-9249.002.patch, 
> HDFS-9249.003.patch, HDFS-9249.004.patch, HDFS-9249.005.patch, 
> HDFS-9249.006.patch
>
>
> This issue was found when running test case 
> TestBackupNode.testCheckpointNode, but upon closer look, the problem is not 
> due to the test case.
> Looks like an IOException was thrown in
> try {
>   initializeGenericKeys(conf, nsId, namenodeId);
>   initialize(conf);
>   try {
> haContext.writeLock();
> state.prepareToEnterState(haContext);
> state.enterState(haContext);
>   } finally {
> haContext.writeUnlock();
>   }
> causing the namenode to stop, but the namesystem was not yet properly 
> instantiated, causing NPE.
> I tried to reproduce locally, but to no avail.
> Because I could not reproduce the bug, and the log does not indicate what 
> caused the IOException, I suggest make this a supportability JIRA to log the 
> exception for future improvement.
> Stacktrace
> java.lang.NullPointerException: null
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.getFSImage(NameNode.java:906)
> at org.apache.hadoop.hdfs.server.namenode.BackupNode.stop(BackupNode.java:210)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:827)
> at 
> org.apache.hadoop.hdfs.server.namenode.BackupNode.(BackupNode.java:89)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1474)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestBackupNode.startBackupNode(TestBackupNode.java:102)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpoint(TestBackupNode.java:298)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpointNode(TestBackupNode.java:130)
> The last few lines of log:
> 2015-10-14 19:45:07,807 INFO namenode.NameNode 
> (NameNode.java:createNameNode(1422)) - createNameNode [-checkpoint]
> 2015-10-14 19:45:07,807 INFO impl.MetricsSystemImpl 
> (MetricsSystemImpl.java:init(158)) - CheckpointNode metrics system started 
> (again)
> 2015-10-14 19:45:07,808 INFO namenode.NameNode 
> (NameNode.java:setClientNamenodeAddress(402)) - fs.defaultFS is 
> hdfs://localhost:37835
> 2015-10-14 19:45:07,808 INFO namenode.NameNode 
> (NameNode.java:setClientNamenodeAddress(422)) - Clients are to use 
> localhost:37835 to access this namenode/service.
> 2015-10-14 19:45:07,810 INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1708)) - Shutting down the Mini HDFS Cluster
> 2015-10-14 19:45:07,810 INFO namenode.FSNamesystem 
> (FSNamesystem.java:stopActiveServices(1298)) - Stopping services started for 
> active state
> 2015-10-14 19:45:07,811 INFO namenode.FSEditLog 
> (FSEditLog.java:endCurrentLogSegment(1228)) - Ending log segment 1
> 2015-10-14 19:45:07,811 INFO namenode.FSNamesystem 
> (FSNamesystem.java:run(5306)) - NameNodeEditLogRoller was interrupted, exiting
> 2015-10-14 19:45:07,811 INFO namenode.FSEditLog 
> (FSEditLog.java:printStatistics(703)) - Number of transactions: 3 Total time 
> for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of 
> syncs: 4 SyncTimes(ms): 2 1 
> 2015-10-14 19:45:07,811 INFO namenode.FSNamesystem 
> (FSNamesystem.java:run(5373)) - LazyPersistFileScrubber was interrupted, 
> exiting
> 2015-10-14 19:45:07,822 INFO namenode.FileJournalManager 
> (FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file 
> /data/jenkins/workspace/CDH5.5.0-Hadoop-HDFS-2.6.0/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1/current/edits_inprogress_001
>  -> 
> /data/jenkins/workspace/CDH5.5.0-Hadoop-HDFS-2.6.0/hadoop-hdfs-project/ha

[jira] [Commented] (HDFS-9252) Change TestFileTruncate to use FsDatasetTestUtils to get block file size and genstamp.

2015-11-09 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997495#comment-14997495
 ] 

Lei (Eddy) Xu commented on HDFS-9252:
-

The failed tests are not relevant. All tests passed on my local machine. 

> Change TestFileTruncate to use FsDatasetTestUtils to get block file size and 
> genstamp.
> --
>
> Key: HDFS-9252
> URL: https://issues.apache.org/jira/browse/HDFS-9252
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-9252.00.patch, HDFS-9252.01.patch, 
> HDFS-9252.02.patch, HDFS-9252.03.patch, HDFS-9252.04.patch
>
>
> {{TestFileTruncate}} verifies block size and genstamp by directly accessing 
> the  local filesystem, e.g.:
> {code}
> assertTrue(cluster.getBlockMetadataFile(dn0,
>newBlock.getBlock()).getName().endsWith(
>newBlock.getBlock().getGenerationStamp() + ".meta"));
> {code}
> Lets abstract the fsdataset-special logic behind FsDatasetTestUtils.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9234) WebHdfs : getContentSummary() should give quota for storage types

2015-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997484#comment-14997484
 ] 

Hudson commented on HDFS-9234:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2527 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2527/])
HDFS-9234. WebHdfs: getContentSummary() should give quota for storage (xyao: 
rev 41d3f8899d8b96568f56331eaf598bb356ecdae0)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> WebHdfs : getContentSummary() should give quota for storage types
> -
>
> Key: HDFS-9234
> URL: https://issues.apache.org/jira/browse/HDFS-9234
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 2.8.0
>
> Attachments: HDFS-9234-001.patch, HDFS-9234-002.patch, 
> HDFS-9234-003.patch, HDFS-9234-004.patch, HDFS-9234-005.patch, 
> HDFS-9234-006.patch, HDFS-9234-007.patch
>
>
> Currently webhdfs API for ContentSummary give only namequota and spacequota 
> but it will not give storage types quota.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9079) Erasure coding: preallocate multiple generation stamps and serialize updates from data streamers

2015-11-09 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-9079:

Attachment: HDFS-9079.09.patch

Fixed test failures from last Jenkins run, by addressing the following corner 
cases:
# In some test cases a streamer doesn't have any byte to write. Should properly 
handle the status of such streamers in the coordinator
# {{setExternalError}} should wait until the streamer is in {{DATA_STREAMING}} 
stage (i.e. {{blockStream}} is not null)

> Erasure coding: preallocate multiple generation stamps and serialize updates 
> from data streamers
> 
>
> Key: HDFS-9079
> URL: https://issues.apache.org/jira/browse/HDFS-9079
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: HDFS-7285
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-9079-HDFS-7285.00.patch, HDFS-9079.01.patch, 
> HDFS-9079.02.patch, HDFS-9079.03.patch, HDFS-9079.04.patch, 
> HDFS-9079.05.patch, HDFS-9079.06.patch, HDFS-9079.07.patch, 
> HDFS-9079.08.patch, HDFS-9079.09.patch
>
>
> A non-striped DataStreamer goes through the following steps in error handling:
> {code}
> 1) Finds error => 2) Asks NN for new GS => 3) Gets new GS from NN => 4) 
> Applies new GS to DN (createBlockOutputStream) => 5) Ack from DN => 6) 
> Updates block on NN
> {code}
> With multiple streamer threads run in parallel, we need to correctly handle a 
> large number of possible combinations of interleaved thread events. For 
> example, {{streamer_B}} starts step 2 in between events {{streamer_A.2}} and 
> {{streamer_A.3}}.
> HDFS-9040 moves steps 1, 2, 3, 6 from streamer to {{DFSStripedOutputStream}}. 
> This JIRA proposes some further optimizations based on HDFS-9040:
> # We can preallocate GS when NN creates a new striped block group 
> ({{FSN#createNewBlock}}). For each new striped block group we can reserve 
> {{NUM_PARITY_BLOCKS}} GS's. If more than {{NUM_PARITY_BLOCKS}} errors have 
> happened we shouldn't try to further recover anyway.
> # We can use a dedicated event processor to offload the error handling logic 
> from {{DFSStripedOutputStream}}, which is not a long running daemon.
> # We can limit the lifespan of a streamer to be a single block. A streamer 
> ends either after finishing the current block or when encountering a DN 
> failure.
> With the proposed change, a {{StripedDataStreamer}}'s flow becomes:
> {code}
> 1) Finds DN error => 2) Notify coordinator (async, not waiting for response) 
> => terminates
> 1) Finds external error => 2) Applies new GS to DN (createBlockOutputStream) 
> => 3) Ack from DN => 4) Notify coordinator (async, not waiting for response)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9404) Findbugs issue reported in BlockRecoveryWorker$RecoveryTaskContiguous.recover()

2015-11-09 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997473#comment-14997473
 ] 

Yongjun Zhang commented on HDFS-9404:
-

Thanks [~cnauroth]! 


> Findbugs issue reported in 
> BlockRecoveryWorker$RecoveryTaskContiguous.recover()
> ---
>
> Key: HDFS-9404
> URL: https://issues.apache.org/jira/browse/HDFS-9404
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Reporter: Yongjun Zhang
>
> HDFS-9249 precommit jenkins run reported the following issue. The issue was 
> not introduced by HDFS-9249 patch. Filing this jira to report it.
> https://builds.apache.org/job/PreCommit-HDFS-Build/13431/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
> Code  Warning
> ECCall to 
> org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration.equals(org.apache.hadoop.hdfs.protocol.DatanodeInfo)
>  in 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()
> Details
> EC_UNRELATED_TYPES: Call to equals() comparing different types
> This method calls equals(Object) on two references of different class types 
> and analysis suggests they will be to objects of different classes at 
> runtime. Further, examination of the equals methods that would be invoked 
> suggest that either this call will always return false, or else the equals 
> method is not be symmetric (which is a property required by the contract for 
> equals in class Object).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9404) Findbugs issue reported in BlockRecoveryWorker$RecoveryTaskContiguous.recover()

2015-11-09 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-9404:

Description: 
HDFS-9249 precommit jenkins run reported the following issue. The issue was not 
introduced by HDFS-9249 patch. Filing this jira to report it.

https://builds.apache.org/job/PreCommit-HDFS-Build/13431/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html

CodeWarning
EC  Call to 
org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration.equals(org.apache.hadoop.hdfs.protocol.DatanodeInfo)
 in 
org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()

Details

EC_UNRELATED_TYPES: Call to equals() comparing different types

This method calls equals(Object) on two references of different class types and 
analysis suggests they will be to objects of different classes at runtime. 
Further, examination of the equals methods that would be invoked suggest that 
either this call will always return false, or else the equals method is not be 
symmetric (which is a property required by the contract for equals in class 
Object).


  was:
https://builds.apache.org/job/PreCommit-HDFS-Build/13431/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html

Reported:

CodeWarning
EC  Call to 
org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration.equals(org.apache.hadoop.hdfs.protocol.DatanodeInfo)
 in 
org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()

Details

EC_UNRELATED_TYPES: Call to equals() comparing different types

This method calls equals(Object) on two references of different class types and 
analysis suggests they will be to objects of different classes at runtime. 
Further, examination of the equals methods that would be invoked suggest that 
either this call will always return false, or else the equals method is not be 
symmetric (which is a property required by the contract for equals in class 
Object).



> Findbugs issue reported in 
> BlockRecoveryWorker$RecoveryTaskContiguous.recover()
> ---
>
> Key: HDFS-9404
> URL: https://issues.apache.org/jira/browse/HDFS-9404
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Reporter: Yongjun Zhang
>
> HDFS-9249 precommit jenkins run reported the following issue. The issue was 
> not introduced by HDFS-9249 patch. Filing this jira to report it.
> https://builds.apache.org/job/PreCommit-HDFS-Build/13431/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
> Code  Warning
> ECCall to 
> org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration.equals(org.apache.hadoop.hdfs.protocol.DatanodeInfo)
>  in 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()
> Details
> EC_UNRELATED_TYPES: Call to equals() comparing different types
> This method calls equals(Object) on two references of different class types 
> and analysis suggests they will be to objects of different classes at 
> runtime. Further, examination of the equals methods that would be invoked 
> suggest that either this call will always return false, or else the equals 
> method is not be symmetric (which is a property required by the contract for 
> equals in class Object).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-9404) Findbugs issue reported in BlockRecoveryWorker$RecoveryTaskContiguous.recover()

2015-11-09 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HDFS-9404.
-
Resolution: Duplicate

Hi [~yzhangal].  This is tracked in HDFS-9401.  Thanks!

> Findbugs issue reported in 
> BlockRecoveryWorker$RecoveryTaskContiguous.recover()
> ---
>
> Key: HDFS-9404
> URL: https://issues.apache.org/jira/browse/HDFS-9404
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Reporter: Yongjun Zhang
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/13431/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
> Reported:
> Code  Warning
> ECCall to 
> org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration.equals(org.apache.hadoop.hdfs.protocol.DatanodeInfo)
>  in 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()
> Details
> EC_UNRELATED_TYPES: Call to equals() comparing different types
> This method calls equals(Object) on two references of different class types 
> and analysis suggests they will be to objects of different classes at 
> runtime. Further, examination of the equals methods that would be invoked 
> suggest that either this call will always return false, or else the equals 
> method is not be symmetric (which is a property required by the contract for 
> equals in class Object).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9328) Formalize coding standards for libhdfs++ and put them in a README.txt

2015-11-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997466#comment-14997466
 ] 

Hadoop QA commented on HDFS-9328:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 45s 
{color} | {color:red} Patch generated 425 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 1m 3s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-09 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12771442/HDFS-9328.HDFS-8707.002.patch
 |
| JIRA Issue | HDFS-9328 |
| Optional Tests |  asflicense  site  |
| uname | Linux 1c3aa42f6250 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/apache-yetus-ee5baeb/precommit/personality/hadoop.sh
 |
| git revision | HDFS-8707 / 3ce4230 |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_60 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_79 |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13444/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Max memory used | 30MB |
| Powered by | Apache Yetus   http://yetus.apache.org |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13444/console |


This message was automatically generated.



> Formalize coding standards for libhdfs++ and put them in a README.txt
> -
>
> Key: HDFS-9328
> URL: https://issues.apache.org/jira/browse/HDFS-9328
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Blocker
> Attachments: HDFS-9328.HDFS-8707.000.patch, 
> HDFS-9328.HDFS-8707.001.patch, HDFS-9328.HDFS-8707.002.patch
>
>
> We have 2-3 people working on this project full time and hopefully more 
> people will start contributing.  In order to efficiently scale we need a 
> single, easy to find, place where developers can check to make sure they are 
> following the coding standards of this project to both save their time and 
> save the time of people doing code reviews.
> The most practical place to do this seems like a README file in libhdfspp/. 
> The foundation of the standards is google's C++ guide found here: 
> https://google-styleguide.googlecode.com/svn/trunk/cppguide.html
> Any exceptions to google's standards or additional restrictions need to be 
> explicitly enumerated so there is one single point of reference for all 
> libhdfs++ code standards.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9404) Findbugs issue reported in BlockRecoveryWorker$RecoveryTaskContiguous.recover()

2015-11-09 Thread Yongjun Zhang (JIRA)
Yongjun Zhang created HDFS-9404:
---

 Summary: Findbugs issue reported in 
BlockRecoveryWorker$RecoveryTaskContiguous.recover()
 Key: HDFS-9404
 URL: https://issues.apache.org/jira/browse/HDFS-9404
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Reporter: Yongjun Zhang


https://builds.apache.org/job/PreCommit-HDFS-Build/13431/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html

Reported:

CodeWarning
EC  Call to 
org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration.equals(org.apache.hadoop.hdfs.protocol.DatanodeInfo)
 in 
org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()

Details

EC_UNRELATED_TYPES: Call to equals() comparing different types

This method calls equals(Object) on two references of different class types and 
analysis suggests they will be to objects of different classes at runtime. 
Further, examination of the equals methods that would be invoked suggest that 
either this call will always return false, or else the equals method is not be 
symmetric (which is a property required by the contract for equals in class 
Object).




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9103) Retry reads on DN failure

2015-11-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997452#comment-14997452
 ] 

Hadoop QA commented on HDFS-9103:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 49s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
37s {color} | {color:green} HDFS-8707 passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 11s 
{color} | {color:red} hadoop-hdfs-native-client in HDFS-8707 failed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 13s 
{color} | {color:red} hadoop-hdfs-native-client in HDFS-8707 failed with JDK 
v1.7.0_85. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 11s 
{color} | {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 11s {color} | 
{color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_66. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 11s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_66. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 13s 
{color} | {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.7.0_85. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 13s {color} | 
{color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_85. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 13s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_85. 
{color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 11s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_66. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 12s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_85. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 22s 
{color} | {color:red} Patch generated 425 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 19s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-09 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12771437/HDFS-9103.HDFS-8707.008.patch
 |
| JIRA Issue | HDFS-9103 |
| Optional Tests |  asflicense  cc  unit  javac  compile  |
| uname | Linux 4b65a06243bd 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/apache-yetus-ee5baeb/precommit/personality/hadoop.sh
 |
| git revision | HDFS-8707 / 3ce4230 |
| Default Java | 1.7.0_85 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_66 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_85 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13443/artifact/patchprocess/branch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_66.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13443/artifact/patchprocess/branch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.7.0_85.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13443/artifact/patchprocess/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_66.txt
 |
| cc | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13443/artifact/patc

[jira] [Updated] (HDFS-9249) NPE is thrown if an IOException is thrown in NameNode constructor

2015-11-09 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-9249:

Summary: NPE is thrown if an IOException is thrown in NameNode constructor  
(was: NPE thrown if an IOException is thrown in NameNode.)

> NPE is thrown if an IOException is thrown in NameNode constructor
> -
>
> Key: HDFS-9249
> URL: https://issues.apache.org/jira/browse/HDFS-9249
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: supportability
> Attachments: HDFS-9249.001.patch, HDFS-9249.002.patch, 
> HDFS-9249.003.patch, HDFS-9249.004.patch, HDFS-9249.005.patch, 
> HDFS-9249.006.patch
>
>
> This issue was found when running test case 
> TestBackupNode.testCheckpointNode, but upon closer look, the problem is not 
> due to the test case.
> Looks like an IOException was thrown in
> try {
>   initializeGenericKeys(conf, nsId, namenodeId);
>   initialize(conf);
>   try {
> haContext.writeLock();
> state.prepareToEnterState(haContext);
> state.enterState(haContext);
>   } finally {
> haContext.writeUnlock();
>   }
> causing the namenode to stop, but the namesystem was not yet properly 
> instantiated, causing NPE.
> I tried to reproduce locally, but to no avail.
> Because I could not reproduce the bug, and the log does not indicate what 
> caused the IOException, I suggest make this a supportability JIRA to log the 
> exception for future improvement.
> Stacktrace
> java.lang.NullPointerException: null
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.getFSImage(NameNode.java:906)
> at org.apache.hadoop.hdfs.server.namenode.BackupNode.stop(BackupNode.java:210)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:827)
> at 
> org.apache.hadoop.hdfs.server.namenode.BackupNode.(BackupNode.java:89)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1474)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestBackupNode.startBackupNode(TestBackupNode.java:102)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpoint(TestBackupNode.java:298)
> at 
> org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpointNode(TestBackupNode.java:130)
> The last few lines of log:
> 2015-10-14 19:45:07,807 INFO namenode.NameNode 
> (NameNode.java:createNameNode(1422)) - createNameNode [-checkpoint]
> 2015-10-14 19:45:07,807 INFO impl.MetricsSystemImpl 
> (MetricsSystemImpl.java:init(158)) - CheckpointNode metrics system started 
> (again)
> 2015-10-14 19:45:07,808 INFO namenode.NameNode 
> (NameNode.java:setClientNamenodeAddress(402)) - fs.defaultFS is 
> hdfs://localhost:37835
> 2015-10-14 19:45:07,808 INFO namenode.NameNode 
> (NameNode.java:setClientNamenodeAddress(422)) - Clients are to use 
> localhost:37835 to access this namenode/service.
> 2015-10-14 19:45:07,810 INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1708)) - Shutting down the Mini HDFS Cluster
> 2015-10-14 19:45:07,810 INFO namenode.FSNamesystem 
> (FSNamesystem.java:stopActiveServices(1298)) - Stopping services started for 
> active state
> 2015-10-14 19:45:07,811 INFO namenode.FSEditLog 
> (FSEditLog.java:endCurrentLogSegment(1228)) - Ending log segment 1
> 2015-10-14 19:45:07,811 INFO namenode.FSNamesystem 
> (FSNamesystem.java:run(5306)) - NameNodeEditLogRoller was interrupted, exiting
> 2015-10-14 19:45:07,811 INFO namenode.FSEditLog 
> (FSEditLog.java:printStatistics(703)) - Number of transactions: 3 Total time 
> for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of 
> syncs: 4 SyncTimes(ms): 2 1 
> 2015-10-14 19:45:07,811 INFO namenode.FSNamesystem 
> (FSNamesystem.java:run(5373)) - LazyPersistFileScrubber was interrupted, 
> exiting
> 2015-10-14 19:45:07,822 INFO namenode.FileJournalManager 
> (FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file 
> /data/jenkins/workspace/CDH5.5.0-Hadoop-HDFS-2.6.0/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1/current/edits_inprogress_001
>  -> 
> /data/jenkins/workspace/CDH5.5.0-Hadoop-HDFS-2.6.0/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1/current/edits_001-003
> 2015-10-14 19:45:07,835 INFO namenode.FileJournalManager 
> (FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file 
> /data/jenkins/workspace/CDH5.5.0-Hadoop-HDFS-2.6.0/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name2/current/edits_inprogress_001
>  -> 
> /data/jenkins/workspace/CDH5.5.0-Hadoop-HDFS-2.6.0/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/na

[jira] [Commented] (HDFS-9252) Change TestFileTruncate to use FsDatasetTestUtils to get block file size and genstamp.

2015-11-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997446#comment-14997446
 ] 

Hadoop QA commented on HDFS-9252:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 8s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 2s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 3s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 3s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 8s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 56m 27s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 22s 
{color} | {color:red} Patch generated 56 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 139m 7s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_60 Failed junit tests | 
hadoop.hdfs.server.namenode.snapshot.TestGetContentSummaryWithSnapshot |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
| JDK v1.7.0_79 Failed junit tests | 
hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotReplication |
|   | hadoop.hdfs.TestFileCreation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-09 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12771411/HDFS-9252.04.patch |
| JIRA Issue | HDFS-9252 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile  |
| uname | Linux 0cda52020b95 3.13.0-36-lowlatency #63-Ubuntu SMP

[jira] [Updated] (HDFS-9328) Formalize coding standards for libhdfs++ and put them in a README.txt

2015-11-09 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-9328:
--
Attachment: HDFS-9328.HDFS-8707.002.patch

Get rid of trailing whitespace

> Formalize coding standards for libhdfs++ and put them in a README.txt
> -
>
> Key: HDFS-9328
> URL: https://issues.apache.org/jira/browse/HDFS-9328
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Blocker
> Attachments: HDFS-9328.HDFS-8707.000.patch, 
> HDFS-9328.HDFS-8707.001.patch, HDFS-9328.HDFS-8707.002.patch
>
>
> We have 2-3 people working on this project full time and hopefully more 
> people will start contributing.  In order to efficiently scale we need a 
> single, easy to find, place where developers can check to make sure they are 
> following the coding standards of this project to both save their time and 
> save the time of people doing code reviews.
> The most practical place to do this seems like a README file in libhdfspp/. 
> The foundation of the standards is google's C++ guide found here: 
> https://google-styleguide.googlecode.com/svn/trunk/cppguide.html
> Any exceptions to google's standards or additional restrictions need to be 
> explicitly enumerated so there is one single point of reference for all 
> libhdfs++ code standards.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9144) Refactor libhdfs into stateful/ephemeral objects

2015-11-09 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997424#comment-14997424
 ] 

James Clampffer commented on HDFS-9144:
---

There's a few things I like about this:
-std::function for callbacks makes it a lot easier to figure out what the API 
expects.
-Polymorphism makes things a lot more flexible than templates (and I expect the 
performance impact to be negligible).
-Separating stateful and stateless components makes the interfaces clear and I 
think will reduce the chances of introducing bugs that assume things won't 
change in certain places.
-Merging FileSystem/InputStream and HadoopFileSystem/FileHandle is a major 
improvement for maintainability.

Waiting on this is starting to block or at least complicate other work so I'd 
like it to get in soon, it seems like a solid improvement to me.

+1

> Refactor libhdfs into stateful/ephemeral objects
> 
>
> Key: HDFS-9144
> URL: https://issues.apache.org/jira/browse/HDFS-9144
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: HDFS-8707
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-9144.HDFS-8707.001.patch, 
> HDFS-9144.HDFS-8707.002.patch
>
>
> In discussion for other efforts, we decided that we should separate several 
> concerns:
> * A posix-like FileSystem/FileHandle object (stream-based, positional reads)
> * An ephemeral ReadOperation object that holds the state for 
> reads-in-progress, which consumes
> * An immutable FileInfo object which holds the block map and file size (and 
> other metadata about the file that we assume will not change over the life of 
> the file)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9245) Fix findbugs warnings in hdfs-nfs/WriteCtx

2015-11-09 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997438#comment-14997438
 ] 

Xiaoyu Yao commented on HDFS-9245:
--

cc: [~aw] who may have an answer to this. It looks like a Infra issue unrelated 
to this change. 
I will commit it later today if I have not heard any additional comments by EOD 
today. 

> Fix findbugs warnings in hdfs-nfs/WriteCtx
> --
>
> Key: HDFS-9245
> URL: https://issues.apache.org/jira/browse/HDFS-9245
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9245.000.patch, HDFS-9245.001.patch, 
> HDFS-9245.002.patch
>
>
> There are findbugs warnings as follows, brought by [HDFS-9092].
> It seems fine to ignore them by write a filter rule in the 
> {{findbugsExcludeFile.xml}} file. 
> {code:xml}
>  instanceHash="592511935f7cb9e5f97ef4c99a6c46c2" instanceOccurrenceNum="0" 
> priority="2" abbrev="IS" type="IS2_INCONSISTENT_SYNC" cweid="366" 
> instanceOccurrenceMax="0">
> Inconsistent synchronization
> 
> Inconsistent synchronization of 
> org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.offset; locked 75% of time
> 
> 
>  sourcepath="org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java" 
> sourcefile="WriteCtx.java" end="314">
> At WriteCtx.java:[lines 40-314]
> 
> In class org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx
> 
> {code}
> and
> {code:xml}
>  instanceHash="4f3daa339eb819220f26c998369b02fe" instanceOccurrenceNum="0" 
> priority="2" abbrev="IS" type="IS2_INCONSISTENT_SYNC" cweid="366" 
> instanceOccurrenceMax="0">
> Inconsistent synchronization
> 
> Inconsistent synchronization of 
> org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.originalCount; locked 50% of time
> 
> 
>  sourcepath="org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java" 
> sourcefile="WriteCtx.java" end="314">
> At WriteCtx.java:[lines 40-314]
> 
> In class org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx
> 
>  name="originalCount" primary="true" signature="I">
>  sourcepath="org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java" 
> sourcefile="WriteCtx.java">
> In WriteCtx.java
> 
> 
> Field org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.originalCount
> 
> 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9245) Fix findbugs warnings in hdfs-nfs/WriteCtx

2015-11-09 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997416#comment-14997416
 ] 

Mingliang Liu commented on HDFS-9245:
-

Thanks for the review, [~xyao]. I validated locally too and found the findbugs 
issue was gone with this patch.

Though there is findbugs warnings in the pre-patch findbugs warnings, the 
comment table says:
{quote}
The patch does not introduce any new Findbugs (version 3.0.0) warnings, and 
fixes 2 pre-existing warnings.
{quote}
It should be fine?

> Fix findbugs warnings in hdfs-nfs/WriteCtx
> --
>
> Key: HDFS-9245
> URL: https://issues.apache.org/jira/browse/HDFS-9245
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9245.000.patch, HDFS-9245.001.patch, 
> HDFS-9245.002.patch
>
>
> There are findbugs warnings as follows, brought by [HDFS-9092].
> It seems fine to ignore them by write a filter rule in the 
> {{findbugsExcludeFile.xml}} file. 
> {code:xml}
>  instanceHash="592511935f7cb9e5f97ef4c99a6c46c2" instanceOccurrenceNum="0" 
> priority="2" abbrev="IS" type="IS2_INCONSISTENT_SYNC" cweid="366" 
> instanceOccurrenceMax="0">
> Inconsistent synchronization
> 
> Inconsistent synchronization of 
> org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.offset; locked 75% of time
> 
> 
>  sourcepath="org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java" 
> sourcefile="WriteCtx.java" end="314">
> At WriteCtx.java:[lines 40-314]
> 
> In class org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx
> 
> {code}
> and
> {code:xml}
>  instanceHash="4f3daa339eb819220f26c998369b02fe" instanceOccurrenceNum="0" 
> priority="2" abbrev="IS" type="IS2_INCONSISTENT_SYNC" cweid="366" 
> instanceOccurrenceMax="0">
> Inconsistent synchronization
> 
> Inconsistent synchronization of 
> org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.originalCount; locked 50% of time
> 
> 
>  sourcepath="org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java" 
> sourcefile="WriteCtx.java" end="314">
> At WriteCtx.java:[lines 40-314]
> 
> In class org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx
> 
>  name="originalCount" primary="true" signature="I">
>  sourcepath="org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java" 
> sourcefile="WriteCtx.java">
> In WriteCtx.java
> 
> 
> Field org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.originalCount
> 
> 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9103) Retry reads on DN failure

2015-11-09 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-9103:
--
Attachment: HDFS-9103.HDFS-8707.008.patch

I still need to get rid of some test duplication and write a couple good tests 
for AsyncPreadSome with an override but wanted to post this in case anyone was 
curious.

-Got rid of explicitly passing around the BadDataNodeTracker.  FileSystem and 
InputStream now keep shared_ptrs to the BadDataNodeTracker.  The tracker is 
used by default for methods like PositionRead.  

-I've added an abstraction, NodeExclusionRule with a uuid->bool virtual method 
for testing bad nodes so that the tracker can be overridden if the user want to 
in AsyncPreadSome.  Added a wrapper for std::set that inherits from this to 
make provide an easy way to pass in a set of nodes to exclude.

-Added unit tests for BadDataNodeTracker.  Added a method that can be used in 
tests to move time forward to make sure that nodes get kicked out after enough 
time has elapsed.

> Retry reads on DN failure
> -
>
> Key: HDFS-9103
> URL: https://issues.apache.org/jira/browse/HDFS-9103
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: James Clampffer
> Fix For: HDFS-8707
>
> Attachments: HDFS-9103.1.patch, HDFS-9103.2.patch, 
> HDFS-9103.HDFS-8707.006.patch, HDFS-9103.HDFS-8707.007.patch, 
> HDFS-9103.HDFS-8707.008.patch, HDFS-9103.HDFS-8707.3.patch, 
> HDFS-9103.HDFS-8707.4.patch, HDFS-9103.HDFS-8707.5.patch
>
>
> When AsyncPreadSome fails, add the failed DataNode to the excluded list and 
> try again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6101) TestReplaceDatanodeOnFailure fails occasionally

2015-11-09 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997377#comment-14997377
 ] 

Wei-Chiu Chuang commented on HDFS-6101:
---

The findbugs warning is being resolved in HDFS-9401 and unrelated to this patch.
Other test case failures are not related to this patch, because this patch only 
modified test code.

> TestReplaceDatanodeOnFailure fails occasionally
> ---
>
> Key: HDFS-6101
> URL: https://issues.apache.org/jira/browse/HDFS-6101
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-6101.001.patch, HDFS-6101.002.patch, 
> HDFS-6101.003.patch, HDFS-6101.004.patch, TestReplaceDatanodeOnFailure.log
>
>
> Exception details in a comment below.
> The failure repros on both OS X and Linux if I run the test ~10 times in a 
> loop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9234) WebHdfs : getContentSummary() should give quota for storage types

2015-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997340#comment-14997340
 ] 

Hudson commented on HDFS-9234:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2587 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2587/])
HDFS-9234. WebHdfs: getContentSummary() should give quota for storage (xyao: 
rev 41d3f8899d8b96568f56331eaf598bb356ecdae0)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> WebHdfs : getContentSummary() should give quota for storage types
> -
>
> Key: HDFS-9234
> URL: https://issues.apache.org/jira/browse/HDFS-9234
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 2.8.0
>
> Attachments: HDFS-9234-001.patch, HDFS-9234-002.patch, 
> HDFS-9234-003.patch, HDFS-9234-004.patch, HDFS-9234-005.patch, 
> HDFS-9234-006.patch, HDFS-9234-007.patch
>
>
> Currently webhdfs API for ContentSummary give only namequota and spacequota 
> but it will not give storage types quota.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9234) WebHdfs : getContentSummary() should give quota for storage types

2015-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997306#comment-14997306
 ] 

Hudson commented on HDFS-9234:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1380 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1380/])
HDFS-9234. WebHdfs: getContentSummary() should give quota for storage (xyao: 
rev 41d3f8899d8b96568f56331eaf598bb356ecdae0)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java


> WebHdfs : getContentSummary() should give quota for storage types
> -
>
> Key: HDFS-9234
> URL: https://issues.apache.org/jira/browse/HDFS-9234
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 2.8.0
>
> Attachments: HDFS-9234-001.patch, HDFS-9234-002.patch, 
> HDFS-9234-003.patch, HDFS-9234-004.patch, HDFS-9234-005.patch, 
> HDFS-9234-006.patch, HDFS-9234-007.patch
>
>
> Currently webhdfs API for ContentSummary give only namequota and spacequota 
> but it will not give storage types quota.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6101) TestReplaceDatanodeOnFailure fails occasionally

2015-11-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997251#comment-14997251
 ] 

Hadoop QA commented on HDFS-6101:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 5s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
4s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 58s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 55s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 55s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 46s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 49s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 20s 
{color} | {color:red} Patch generated 58 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 132m 10s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_60 Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
|   | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
|   | hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.hdfs.server.namenode.ha.TestDNFencing |
| JDK v1.7.0_79 Failed junit tests | hadoop.hdfs.TestDatanodeRegistration |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica |
|   | hadoop.hdfs.TestHDFSFileSystemContract |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-09 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12771371/HDFS-6101.004.patch |
| JIRA Issue | HDFS-6101 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile  |
| uname | Linux c789

[jira] [Commented] (HDFS-9245) Fix findbugs warnings in hdfs-nfs/WriteCtx

2015-11-09 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997239#comment-14997239
 ] 

Xiaoyu Yao commented on HDFS-9245:
--

Patch v002 LGTM. +1. I validated locally that it fixed the two findbugs issue 
on IS2_INCONSISTENT_SYNC. 

Not sure why Jenkins still reported the same issue with the latest patch. 
https://builds.apache.org/job/PreCommit-HDFS-Build/13201/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs-nfs.html

> Fix findbugs warnings in hdfs-nfs/WriteCtx
> --
>
> Key: HDFS-9245
> URL: https://issues.apache.org/jira/browse/HDFS-9245
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9245.000.patch, HDFS-9245.001.patch, 
> HDFS-9245.002.patch
>
>
> There are findbugs warnings as follows, brought by [HDFS-9092].
> It seems fine to ignore them by write a filter rule in the 
> {{findbugsExcludeFile.xml}} file. 
> {code:xml}
>  instanceHash="592511935f7cb9e5f97ef4c99a6c46c2" instanceOccurrenceNum="0" 
> priority="2" abbrev="IS" type="IS2_INCONSISTENT_SYNC" cweid="366" 
> instanceOccurrenceMax="0">
> Inconsistent synchronization
> 
> Inconsistent synchronization of 
> org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.offset; locked 75% of time
> 
> 
>  sourcepath="org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java" 
> sourcefile="WriteCtx.java" end="314">
> At WriteCtx.java:[lines 40-314]
> 
> In class org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx
> 
> {code}
> and
> {code:xml}
>  instanceHash="4f3daa339eb819220f26c998369b02fe" instanceOccurrenceNum="0" 
> priority="2" abbrev="IS" type="IS2_INCONSISTENT_SYNC" cweid="366" 
> instanceOccurrenceMax="0">
> Inconsistent synchronization
> 
> Inconsistent synchronization of 
> org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.originalCount; locked 50% of time
> 
> 
>  sourcepath="org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java" 
> sourcefile="WriteCtx.java" end="314">
> At WriteCtx.java:[lines 40-314]
> 
> In class org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx
> 
>  name="originalCount" primary="true" signature="I">
>  sourcepath="org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java" 
> sourcefile="WriteCtx.java">
> In WriteCtx.java
> 
> 
> Field org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.originalCount
> 
> 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9252) Change TestFileTruncate to use FsDatasetTestUtils to get block file size and genstamp.

2015-11-09 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-9252:

Attachment: HDFS-9252.04.patch

Rebase to trunk and trigger a new jenkins 

> Change TestFileTruncate to use FsDatasetTestUtils to get block file size and 
> genstamp.
> --
>
> Key: HDFS-9252
> URL: https://issues.apache.org/jira/browse/HDFS-9252
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-9252.00.patch, HDFS-9252.01.patch, 
> HDFS-9252.02.patch, HDFS-9252.03.patch, HDFS-9252.04.patch
>
>
> {{TestFileTruncate}} verifies block size and genstamp by directly accessing 
> the  local filesystem, e.g.:
> {code}
> assertTrue(cluster.getBlockMetadataFile(dn0,
>newBlock.getBlock()).getName().endsWith(
>newBlock.getBlock().getGenerationStamp() + ".meta"));
> {code}
> Lets abstract the fsdataset-special logic behind FsDatasetTestUtils.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9387) Parse namenodeUri parameter only once in NNThroughputBenchmark$OperationStatsBase#verifyOpArgument()

2015-11-09 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997194#comment-14997194
 ] 

Xiaoyu Yao commented on HDFS-9387:
--

[~liuml07], Thanks for the explanation. It looks fine to me. Can you add a unit 
test to validate the patch? Fro example, 
TestNNThroughputBenchmark#testNNThroughput can be easily modified with a 
additional -namenode parameters to validate the fix. 

{code}
String[] args = new String[] {"-op", "all"};  -add -namenode uri
NNThroughputBenchmark.runBenchmark(conf, Arrays.asList(args));
{code}

> Parse namenodeUri parameter only once in 
> NNThroughputBenchmark$OperationStatsBase#verifyOpArgument()
> 
>
> Key: HDFS-9387
> URL: https://issues.apache.org/jira/browse/HDFS-9387
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9387.000.patch
>
>
> In {{NNThroughputBenchmark$OperationStatsBase#verifyOpArgument()}}, the   
> {{namenodeUri}} is always parsed from {{-namenode}} argument. This works just 
> fine if the {{-op}} parameter is not {{all}}, as the single benchmark will 
> need to parse the {{namenodeUri}} from args anyway.
> When the {{-op}} is {{all}}, namely all sub-benchmark will run, multiple 
> sub-benchmark will call the {{verifyOpArgument()}} method. In this case, the 
> first sub-benchmark reads the {{namenode}} argument and removes it from args. 
> The other sub-benchmarks will thereafter read {{null}} value since the 
> argument is removed. This contradicts the intension of providing {{namenode}} 
> for all sub-benchmarks.
> {code:title=current code}
>   try {
> namenodeUri = StringUtils.popOptionWithArgument("-namenode", args);
>   } catch (IllegalArgumentException iae) {
> printUsage();
>   }
> {code}
> The fix is to parse the {{namenodeUri}}, which is shared by all 
> sub-benchmarks, from {{-namenode}} argument only once. This follows the 
> convention of parsing other global arguments in 
> {{OperationStatsBase#verifyOpArgument()}}.
> {code:title=simple fix}
>   if (args.indexOf("-namenode") >= 0) {
> try {
>   namenodeUri = StringUtils.popOptionWithArgument("-namenode", args);
> } catch (IllegalArgumentException iae) {
>   printUsage();
> }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9387) Parse namenodeUri parameter only once in NNThroughputBenchmark$OperationStatsBase#verifyOpArgument()

2015-11-09 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997164#comment-14997164
 ] 

Mingliang Liu commented on HDFS-9387:
-

Thanks for your review [~xyao].

Yes the implementation to parse the {{namenode}} argument is different from 
parsing other parameters. When parsing {{-namenode}}, It calls the 
{{StringUtils.popOptionWithArgument}}. In that helper method, if there is no 
following argument, it will throw an IllegalArgumentException. The 
{{verifyOpArgument}} catches the exception and calls {{printUsage()}} to exit. 
I think it should work just fine?

> Parse namenodeUri parameter only once in 
> NNThroughputBenchmark$OperationStatsBase#verifyOpArgument()
> 
>
> Key: HDFS-9387
> URL: https://issues.apache.org/jira/browse/HDFS-9387
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9387.000.patch
>
>
> In {{NNThroughputBenchmark$OperationStatsBase#verifyOpArgument()}}, the   
> {{namenodeUri}} is always parsed from {{-namenode}} argument. This works just 
> fine if the {{-op}} parameter is not {{all}}, as the single benchmark will 
> need to parse the {{namenodeUri}} from args anyway.
> When the {{-op}} is {{all}}, namely all sub-benchmark will run, multiple 
> sub-benchmark will call the {{verifyOpArgument()}} method. In this case, the 
> first sub-benchmark reads the {{namenode}} argument and removes it from args. 
> The other sub-benchmarks will thereafter read {{null}} value since the 
> argument is removed. This contradicts the intension of providing {{namenode}} 
> for all sub-benchmarks.
> {code:title=current code}
>   try {
> namenodeUri = StringUtils.popOptionWithArgument("-namenode", args);
>   } catch (IllegalArgumentException iae) {
> printUsage();
>   }
> {code}
> The fix is to parse the {{namenodeUri}}, which is shared by all 
> sub-benchmarks, from {{-namenode}} argument only once. This follows the 
> convention of parsing other global arguments in 
> {{OperationStatsBase#verifyOpArgument()}}.
> {code:title=simple fix}
>   if (args.indexOf("-namenode") >= 0) {
> try {
>   namenodeUri = StringUtils.popOptionWithArgument("-namenode", args);
> } catch (IllegalArgumentException iae) {
>   printUsage();
> }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9403) Erasure coding: some EC tests are missing timeout

2015-11-09 Thread Zhe Zhang (JIRA)
Zhe Zhang created HDFS-9403:
---

 Summary: Erasure coding: some EC tests are missing timeout
 Key: HDFS-9403
 URL: https://issues.apache.org/jira/browse/HDFS-9403
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: erasure-coding, test
Affects Versions: 3.0.0
Reporter: Zhe Zhang
Priority: Minor


EC data writing pipeline is still being worked on, and bugs could introduce 
program hang. We should add a timeout for all tests involving striped writing. 
I see at least the following:

* {{TestErasureCodingPolicies}}
* {{TestFileStatusWithECPolicy}}
* {{TestDFSStripedOutputStream}}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9347) Invariant assumption in TestQuorumJournalManager.shutdown() is wrong

2015-11-09 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9347:
--
Attachment: HDFS-9347.001.patch

Rev1: After calling IOUtils.cleanup(), periodically check and wait for all ipc 
client threads to finish. This patch does not require the fix for HADOOP-12532.

> Invariant assumption in TestQuorumJournalManager.shutdown() is wrong
> 
>
> Key: HDFS-9347
> URL: https://issues.apache.org/jira/browse/HDFS-9347
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-9347.001.patch
>
>
> The code
> {code:title=TestTestQuorumJournalManager.java|borderStyle=solid}
> @After
>   public void shutdown() throws IOException {
> IOUtils.cleanup(LOG, toClose.toArray(new Closeable[0]));
> 
> // Should not leak clients between tests -- this can cause flaky tests.
> // (See HDFS-4643)
> GenericTestUtils.assertNoThreadsMatching(".*IPC Client.*");
> 
> if (cluster != null) {
>   cluster.shutdown();
> }
>   }
> {code}
> implicitly assumes when the call returns from IOUtils.cleanup() (which calls 
> close() on QuorumJournalManager object), all IPC client connection threads 
> are terminated. However, there is no internal implementation that enforces 
> this assumption. Even if the bug reported in HADOOP-12532 is fixed, the 
> internal code still only ensures IPC connections are terminated, but not the 
> thread.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9380) HDFS-8707 builds are failing with protobuf directories as undef

2015-11-09 Thread Bob Hansen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997130#comment-14997130
 ] 

Bob Hansen commented on HDFS-9380:
--

Groovy.  Looking forward to it.  Thanks for the help.

> HDFS-8707 builds are failing with protobuf directories as undef
> ---
>
> Key: HDFS-9380
> URL: https://issues.apache.org/jira/browse/HDFS-9380
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Haohui Mai
>
> See recent builds in HDFS-9320 and HDFS-9103.
> {code}
>  [exec] CMake Error: The following variables are used in this project, 
> but they are set to NOTFOUND.
>  [exec] Please set them or make sure they are set and tested correctly in 
> the CMake files:
>  [exec] PROTOBUF_LIBRARY (ADVANCED)
>  [exec] linked by target "protoc-gen-hrpc" in directory 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/proto
>  [exec] linked by target "inputstream_test" in directory 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests
>  [exec] linked by target "remote_block_reader_test" in directory 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests
>  [exec] linked by target "rpc_engine_test" in directory 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests
>  [exec] PROTOBUF_PROTOC_LIBRARY (ADVANCED)
>  [exec] linked by target "protoc-gen-hrpc" in directory 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/proto
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9380) HDFS-8707 builds are failing with protobuf directories as undef

2015-11-09 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997128#comment-14997128
 ] 

Allen Wittenauer commented on HDFS-9380:


Probably this week, given there are two sets of patches... one to clean up 
Yetus Docker support and one to pretty much rewrite Hadoop's setup dev env bits.

> HDFS-8707 builds are failing with protobuf directories as undef
> ---
>
> Key: HDFS-9380
> URL: https://issues.apache.org/jira/browse/HDFS-9380
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Haohui Mai
>
> See recent builds in HDFS-9320 and HDFS-9103.
> {code}
>  [exec] CMake Error: The following variables are used in this project, 
> but they are set to NOTFOUND.
>  [exec] Please set them or make sure they are set and tested correctly in 
> the CMake files:
>  [exec] PROTOBUF_LIBRARY (ADVANCED)
>  [exec] linked by target "protoc-gen-hrpc" in directory 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/proto
>  [exec] linked by target "inputstream_test" in directory 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests
>  [exec] linked by target "remote_block_reader_test" in directory 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests
>  [exec] linked by target "rpc_engine_test" in directory 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests
>  [exec] PROTOBUF_PROTOC_LIBRARY (ADVANCED)
>  [exec] linked by target "protoc-gen-hrpc" in directory 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/proto
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9380) HDFS-8707 builds are failing with protobuf directories as undef

2015-11-09 Thread Bob Hansen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997124#comment-14997124
 ] 

Bob Hansen commented on HDFS-9380:
--

[~aw] - Thanks for the heads-up.  Is there an ETA for that work?  Hours? Days? 
Weeks?

> HDFS-8707 builds are failing with protobuf directories as undef
> ---
>
> Key: HDFS-9380
> URL: https://issues.apache.org/jira/browse/HDFS-9380
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Haohui Mai
>
> See recent builds in HDFS-9320 and HDFS-9103.
> {code}
>  [exec] CMake Error: The following variables are used in this project, 
> but they are set to NOTFOUND.
>  [exec] Please set them or make sure they are set and tested correctly in 
> the CMake files:
>  [exec] PROTOBUF_LIBRARY (ADVANCED)
>  [exec] linked by target "protoc-gen-hrpc" in directory 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/proto
>  [exec] linked by target "inputstream_test" in directory 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests
>  [exec] linked by target "remote_block_reader_test" in directory 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests
>  [exec] linked by target "rpc_engine_test" in directory 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests
>  [exec] PROTOBUF_PROTOC_LIBRARY (ADVANCED)
>  [exec] linked by target "protoc-gen-hrpc" in directory 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/proto
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-9380) HDFS-8707 builds are failing with protobuf directories as undef

2015-11-09 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997111#comment-14997111
 ] 

Allen Wittenauer edited comment on HDFS-9380 at 11/9/15 6:46 PM:
-

We basically need to add protobuf-c support to the dockerfile. I'm planning on 
revamping Hadoop's current dockerfile so that Yetus can use it rather than the 
currently bundled one.


was (Author: aw):
We basically need to add protoc support to the dockerfile. I'm planning on 
revamping Hadoop's current dockerfile so that Yetus can use it rather than the 
currently bundled one.

> HDFS-8707 builds are failing with protobuf directories as undef
> ---
>
> Key: HDFS-9380
> URL: https://issues.apache.org/jira/browse/HDFS-9380
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Haohui Mai
>
> See recent builds in HDFS-9320 and HDFS-9103.
> {code}
>  [exec] CMake Error: The following variables are used in this project, 
> but they are set to NOTFOUND.
>  [exec] Please set them or make sure they are set and tested correctly in 
> the CMake files:
>  [exec] PROTOBUF_LIBRARY (ADVANCED)
>  [exec] linked by target "protoc-gen-hrpc" in directory 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/proto
>  [exec] linked by target "inputstream_test" in directory 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests
>  [exec] linked by target "remote_block_reader_test" in directory 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests
>  [exec] linked by target "rpc_engine_test" in directory 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests
>  [exec] PROTOBUF_PROTOC_LIBRARY (ADVANCED)
>  [exec] linked by target "protoc-gen-hrpc" in directory 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/proto
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9387) Parse namenodeUri parameter only once in NNThroughputBenchmark$OperationStatsBase#verifyOpArgument()

2015-11-09 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997114#comment-14997114
 ] 

Xiaoyu Yao commented on HDFS-9387:
--

Thanks [~liuml07] for reporting the issue and posting the fix. The fix looks 
good to me overall. I just have one comment: Can you add additional check to 
ensure -namenode argument does come with a Uri and printUsage() otherwise like 
the handling of other parameters "-logLevel, etc."



> Parse namenodeUri parameter only once in 
> NNThroughputBenchmark$OperationStatsBase#verifyOpArgument()
> 
>
> Key: HDFS-9387
> URL: https://issues.apache.org/jira/browse/HDFS-9387
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9387.000.patch
>
>
> In {{NNThroughputBenchmark$OperationStatsBase#verifyOpArgument()}}, the   
> {{namenodeUri}} is always parsed from {{-namenode}} argument. This works just 
> fine if the {{-op}} parameter is not {{all}}, as the single benchmark will 
> need to parse the {{namenodeUri}} from args anyway.
> When the {{-op}} is {{all}}, namely all sub-benchmark will run, multiple 
> sub-benchmark will call the {{verifyOpArgument()}} method. In this case, the 
> first sub-benchmark reads the {{namenode}} argument and removes it from args. 
> The other sub-benchmarks will thereafter read {{null}} value since the 
> argument is removed. This contradicts the intension of providing {{namenode}} 
> for all sub-benchmarks.
> {code:title=current code}
>   try {
> namenodeUri = StringUtils.popOptionWithArgument("-namenode", args);
>   } catch (IllegalArgumentException iae) {
> printUsage();
>   }
> {code}
> The fix is to parse the {{namenodeUri}}, which is shared by all 
> sub-benchmarks, from {{-namenode}} argument only once. This follows the 
> convention of parsing other global arguments in 
> {{OperationStatsBase#verifyOpArgument()}}.
> {code:title=simple fix}
>   if (args.indexOf("-namenode") >= 0) {
> try {
>   namenodeUri = StringUtils.popOptionWithArgument("-namenode", args);
> } catch (IllegalArgumentException iae) {
>   printUsage();
> }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9380) HDFS-8707 builds are failing with protobuf directories as undef

2015-11-09 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997111#comment-14997111
 ] 

Allen Wittenauer commented on HDFS-9380:


We basically need to add protoc support to the Dockerfile. I'm planning to 
revamp Hadoop's current Dockerfile so that Yetus can use it rather than the 
currently bundled one.

> HDFS-8707 builds are failing with protobuf directories as undef
> ---
>
> Key: HDFS-9380
> URL: https://issues.apache.org/jira/browse/HDFS-9380
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Haohui Mai
>
> See recent builds in HDFS-9320 and HDFS-9103.
> {code}
>  [exec] CMake Error: The following variables are used in this project, 
> but they are set to NOTFOUND.
>  [exec] Please set them or make sure they are set and tested correctly in 
> the CMake files:
>  [exec] PROTOBUF_LIBRARY (ADVANCED)
>  [exec] linked by target "protoc-gen-hrpc" in directory 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/proto
>  [exec] linked by target "inputstream_test" in directory 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests
>  [exec] linked by target "remote_block_reader_test" in directory 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests
>  [exec] linked by target "rpc_engine_test" in directory 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests
>  [exec] PROTOBUF_PROTOC_LIBRARY (ADVANCED)
>  [exec] linked by target "protoc-gen-hrpc" in directory 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/proto
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9234) WebHdfs : getContentSummary() should give quota for storage types

2015-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997106#comment-14997106
 ] 

Hudson commented on HDFS-9234:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #657 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/657/])
HDFS-9234. WebHdfs: getContentSummary() should give quota for storage (xyao: 
rev 41d3f8899d8b96568f56331eaf598bb356ecdae0)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> WebHdfs : getContentSummary() should give quota for storage types
> -
>
> Key: HDFS-9234
> URL: https://issues.apache.org/jira/browse/HDFS-9234
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 2.8.0
>
> Attachments: HDFS-9234-001.patch, HDFS-9234-002.patch, 
> HDFS-9234-003.patch, HDFS-9234-004.patch, HDFS-9234-005.patch, 
> HDFS-9234-006.patch, HDFS-9234-007.patch
>
>
> Currently the WebHDFS API for ContentSummary gives only the name quota and 
> space quota; it does not give the storage type quotas.
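
As an illustration, a minimal client-side sketch of reading a storage type 
quota from a {{ContentSummary}} fetched over WebHDFS; the namenode URI and 
path below are hypothetical:

{code:title=sketch: reading storage type quota over WebHDFS (hypothetical URI/path)}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.ContentSummary;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.StorageType;

public class StorageTypeQuotaExample {
  public static void main(String[] args) throws Exception {
    // Go through the webhdfs:// scheme so the JSON REST API is exercised.
    FileSystem fs = FileSystem.get(
        URI.create("webhdfs://namenode:50070"), new Configuration());
    ContentSummary cs = fs.getContentSummary(new Path("/user/example"));
    // Name and space quota were already exposed; the per-storage-type
    // quota (e.g. SSD) is what this change adds to the JSON response.
    System.out.println("SSD quota:    " + cs.getTypeQuota(StorageType.SSD));
    System.out.println("SSD consumed: " + cs.getTypeConsumed(StorageType.SSD));
  }
}
{code}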



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9234) WebHdfs : getContentSummary() should give quota for storage types

2015-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997076#comment-14997076
 ] 

Hudson commented on HDFS-9234:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #646 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/646/])
HDFS-9234. WebHdfs: getContentSummary() should give quota for storage (xyao: 
rev 41d3f8899d8b96568f56331eaf598bb356ecdae0)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java


> WebHdfs : getContentSummary() should give quota for storage types
> -
>
> Key: HDFS-9234
> URL: https://issues.apache.org/jira/browse/HDFS-9234
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 2.8.0
>
> Attachments: HDFS-9234-001.patch, HDFS-9234-002.patch, 
> HDFS-9234-003.patch, HDFS-9234-004.patch, HDFS-9234-005.patch, 
> HDFS-9234-006.patch, HDFS-9234-007.patch
>
>
> Currently the WebHDFS API for ContentSummary gives only the name quota and 
> space quota; it does not give the storage type quotas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9234) WebHdfs : getContentSummary() should give quota for storage types

2015-11-09 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-9234:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Thanks [~surendrasingh] for the contribution. I've validated that the unit 
test failures are not related to this change, and committed it to trunk and 
branch-2. 

> WebHdfs : getContentSummary() should give quota for storage types
> -
>
> Key: HDFS-9234
> URL: https://issues.apache.org/jira/browse/HDFS-9234
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 2.8.0
>
> Attachments: HDFS-9234-001.patch, HDFS-9234-002.patch, 
> HDFS-9234-003.patch, HDFS-9234-004.patch, HDFS-9234-005.patch, 
> HDFS-9234-006.patch, HDFS-9234-007.patch
>
>
> Currently the WebHDFS API for ContentSummary gives only the name quota and 
> space quota; it does not give the storage type quotas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9234) WebHdfs : getContentSummary() should give quota for storage types

2015-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997025#comment-14997025
 ] 

Hudson commented on HDFS-9234:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8779 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8779/])
HDFS-9234. WebHdfs: getContentSummary() should give quota for storage (xyao: 
rev 41d3f8899d8b96568f56331eaf598bb356ecdae0)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java


> WebHdfs : getContentSummary() should give quota for storage types
> -
>
> Key: HDFS-9234
> URL: https://issues.apache.org/jira/browse/HDFS-9234
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 2.8.0
>
> Attachments: HDFS-9234-001.patch, HDFS-9234-002.patch, 
> HDFS-9234-003.patch, HDFS-9234-004.patch, HDFS-9234-005.patch, 
> HDFS-9234-006.patch, HDFS-9234-007.patch
>
>
> Currently the WebHDFS API for ContentSummary gives only the name quota and 
> space quota; it does not give the storage type quotas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9255) Consolidate block recovery related implementation into a single class

2015-11-09 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997015#comment-14997015
 ] 

Zhe Zhang commented on HDFS-9255:
-

Good catch. Thanks Chris!

> Consolidate block recovery related implementation into a single class
> -
>
> Key: HDFS-9255
> URL: https://issues.apache.org/jira/browse/HDFS-9255
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Walter Su
>Assignee: Walter Su
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9255-branch-2.06.patch, HDFS-9255.01.patch, 
> HDFS-9255.02.patch, HDFS-9255.03.patch, HDFS-9255.04.patch, 
> HDFS-9255.05.patch, HDFS-9255.06.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9255) Consolidate block recovery related implementation into a single class

2015-11-09 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997012#comment-14997012
 ] 

Chris Nauroth commented on HDFS-9255:
-

This patch introduced a Findbugs warning, which is tracked in HDFS-9401.

> Consolidate block recovery related implementation into a single class
> -
>
> Key: HDFS-9255
> URL: https://issues.apache.org/jira/browse/HDFS-9255
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Walter Su
>Assignee: Walter Su
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9255-branch-2.06.patch, HDFS-9255.01.patch, 
> HDFS-9255.02.patch, HDFS-9255.03.patch, HDFS-9255.04.patch, 
> HDFS-9255.05.patch, HDFS-9255.06.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9401) Fix the findbug in o.a.h.h.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()

2015-11-09 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997010#comment-14997010
 ] 

Chris Nauroth commented on HDFS-9401:
-

It appears this was introduced by HDFS-9255.  Prior to HDFS-9255, the relevant 
code was in {{DataNode#recoverBlock}}:

{code}
  /** Recover a block */
  private void recoverBlock(RecoveringBlock rBlock) throws IOException {
    ExtendedBlock block = rBlock.getBlock();
    String blookPoolId = block.getBlockPoolId();
    DatanodeID[] datanodeids = rBlock.getLocations();
    List<BlockRecord> syncList = new ArrayList<BlockRecord>(datanodeids.length);
    int errorCount = 0;

    //check generation stamps
    for(DatanodeID id : datanodeids) {
      try {
        BPOfferService bpos = blockPoolManager.get(blookPoolId);
        DatanodeRegistration bpReg = bpos.bpRegistration;
        InterDatanodeProtocol datanode = bpReg.equals(id)?
            this: DataNode.createInterDataNodeProtocolProxy(id, getConf(),
                dnConf.socketTimeout, dnConf.connectToDnViaHostname);
{code}

There is a Findbugs suppression defined on that method:

{code}
     <Match>
       <Class name="org.apache.hadoop.hdfs.server.datanode.DataNode" />
       <Method name="recoverBlock" />
       <Bug pattern="EC_UNRELATED_TYPES" />
     </Match>
{code}

HDFS-3837 describes the rationale for that suppression.

The HDFS-9255 refactoring moved this logic to {{BlockRecoveryWorker}}, but it 
did not update the Findbugs suppression.
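
One plausible fix, sketched here rather than taken from the actual patch, is 
to point the suppression at the refactored class and method:

{code}
     <Match>
       <Class name="org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous" />
       <Method name="recover" />
       <Bug pattern="EC_UNRELATED_TYPES" />
     </Match>
{code}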

[~walter.k.su], would you please help review and fix this?  Thanks!

> Fix the findbug in 
> o.a.h.h.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()
> ---
>
> Key: HDFS-9401
> URL: https://issues.apache.org/jira/browse/HDFS-9401
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9401.patch
>
>
> Noticed following findbug in HDFS-9400 
> {noformat}
> Call to 
> org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration.equals(org.apache.hadoop.hdfs.protocol.DatanodeInfo)
>  in 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()
> Bug type EC_UNRELATED_TYPES (click for details) 
> In class 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous
> In method 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover()
> Actual type org.apache.hadoop.hdfs.protocol.DatanodeInfo
> Expected org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration
> Value loaded from id
> Value loaded from bpReg
> org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration.equals(Object) 
> used to determine equality
> At BlockRecoveryWorker.java:[line 116]
> {noformat}
> https://builds.apache.org/job/PreCommit-HDFS-Build/13433/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
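
For background, EC_UNRELATED_TYPES flags an {{equals()}} call whose argument 
type can never be equal to the receiver. A minimal, self-contained 
illustration with hypothetical classes (deliberately not the HDFS types):

{code:title=illustration of EC_UNRELATED_TYPES (hypothetical classes)}
public class EcUnrelatedTypesDemo {
  static class Registration {
    final String id;
    Registration(String id) { this.id = id; }
    @Override public boolean equals(Object o) {
      // Only another Registration can ever compare equal.
      return o instanceof Registration && id.equals(((Registration) o).id);
    }
    @Override public int hashCode() { return id.hashCode(); }
  }

  static class Info {
    final String id;
    Info(String id) { this.id = id; }
  }

  public static void main(String[] args) {
    Registration reg = new Registration("dn-1");
    Info info = new Info("dn-1");
    // Findbugs flags this call: Registration and Info are unrelated types,
    // so equals() is always false here even though the ids match.
    System.out.println(reg.equals(info)); // prints: false
  }
}
{code}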



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9396) Total files and directories on jmx and web UI on standby is uninitialized

2015-11-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14996987#comment-14996987
 ] 

Hadoop QA commented on HDFS-9396:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 7s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 3s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 15s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 0s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 0s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 29s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 30m 31s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 112m 36s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_60 Failed junit tests | hadoop.hdfs.TestDFSShell |
|   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.server.namenode.TestCacheDirectives |
| JDK v1.7.0_79 Failed junit tests | hadoop.hdfs.server.namenode.TestStartup |
|   | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-09 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12771121/HDFS-9396.patch |
| JIRA Issue | HDFS-9396 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile  |
| uname | Linux 72a4dd029485 3.13.0-36-lowlatency #63-Ubuntu SMP 

[jira] [Commented] (HDFS-9328) Formalize coding standards for libhdfs++ and put them in a README.txt

2015-11-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14996985#comment-14996985
 ] 

Hadoop QA commented on HDFS-9328:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 27s 
{color} | {color:red} Patch generated 425 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 0m 50s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.0 Server=1.7.0 
Image:test-patch-base-hadoop-date2015-11-09 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12771114/HDFS-9328.HDFS-8707.001.patch
 |
| JIRA Issue | HDFS-9328 |
| Optional Tests |  asflicense  site  |
| uname | Linux 2eaeb5354019 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/apache-yetus-ee5baeb/precommit/personality/hadoop.sh
 |
| git revision | HDFS-8707 / 3ce4230 |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_66 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_79 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13439/artifact/patchprocess/whitespace-eol.txt
 |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13439/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Max memory used | 30MB |
| Powered by | Apache Yetus   http://yetus.apache.org |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13439/console |


This message was automatically generated.



> Formalize coding standards for libhdfs++ and put them in a README.txt
> -
>
> Key: HDFS-9328
> URL: https://issues.apache.org/jira/browse/HDFS-9328
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Blocker
> Attachments: HDFS-9328.HDFS-8707.000.patch, 
> HDFS-9328.HDFS-8707.001.patch
>
>
> We have 2-3 people working on this project full time, and hopefully more 
> people will start contributing.  In order to scale efficiently, we need a 
> single, easy-to-find place where developers can check that they are 
> following the project's coding standards, both to save their own time and 
> to save the time of people doing code reviews.
> The most practical place for this seems to be a README file in libhdfspp/. 
> The foundation of the standards is Google's C++ style guide, found here: 
> https://google-styleguide.googlecode.com/svn/trunk/cppguide.html
> Any exceptions to Google's standards, and any additional restrictions, need 
> to be explicitly enumerated so there is a single point of reference for all 
> libhdfs++ code standards.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9383) TestByteArrayManager#testByteArrayManager fails

2015-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14996954#comment-14996954
 ] 

Hudson commented on HDFS-9383:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #645 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/645/])
HDFS-9383. TestByteArrayManager#testByteArrayManager fails. Contributed 
(kihwal: rev ef926b2e3824475581454c1e17a0d7c94529efde)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/util/TestByteArrayManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> TestByteArrayManager#testByteArrayManager fails
> ---
>
> Key: HDFS-9383
> URL: https://issues.apache.org/jira/browse/HDFS-9383
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Tsz Wo Nicholas Sze
> Fix For: 3.0.0, 2.7.3
>
> Attachments: h9383_20151107.patch, hdfs-9383.log
>
>
> This was seen in the trunk builds
> https://builds.apache.org/job/Hadoop-Hdfs-trunk
> {noformat}
> Running org.apache.hadoop.hdfs.util.TestByteArrayManager
> Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 6.539 sec <<< 
> FAILURE!
>  - in org.apache.hadoop.hdfs.util.TestByteArrayManager
> testByteArrayManager(org.apache.hadoop.hdfs.util.TestByteArrayManager)  Time 
> elapsed: 5.409 sec  <<< FAILURE!
> java.lang.AssertionError: expected null, but was:<[32: 2/64, free=5]>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotNull(Assert.java:664)
>   at org.junit.Assert.assertNull(Assert.java:646)
>   at org.junit.Assert.assertNull(Assert.java:656)
>   at 
> org.apache.hadoop.hdfs.util.TestByteArrayManager.testByteArrayManager(TestByteArrayManager.java:384)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9383) TestByteArrayManager#testByteArrayManager fails

2015-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14996957#comment-14996957
 ] 

Hudson commented on HDFS-9383:
--

ABORTED: Integrated in Hadoop-Hdfs-trunk #2526 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2526/])
HDFS-9383. TestByteArrayManager#testByteArrayManager fails. Contributed 
(kihwal: rev ef926b2e3824475581454c1e17a0d7c94529efde)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/util/TestByteArrayManager.java


> TestByteArrayManager#testByteArrayManager fails
> ---
>
> Key: HDFS-9383
> URL: https://issues.apache.org/jira/browse/HDFS-9383
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Tsz Wo Nicholas Sze
> Fix For: 3.0.0, 2.7.3
>
> Attachments: h9383_20151107.patch, hdfs-9383.log
>
>
> This was seen in the trunk builds
> https://builds.apache.org/job/Hadoop-Hdfs-trunk
> {noformat}
> Running org.apache.hadoop.hdfs.util.TestByteArrayManager
> Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 6.539 sec <<< 
> FAILURE!
>  - in org.apache.hadoop.hdfs.util.TestByteArrayManager
> testByteArrayManager(org.apache.hadoop.hdfs.util.TestByteArrayManager)  Time 
> elapsed: 5.409 sec  <<< FAILURE!
> java.lang.AssertionError: expected null, but was:<[32: 2/64, free=5]>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotNull(Assert.java:664)
>   at org.junit.Assert.assertNull(Assert.java:646)
>   at org.junit.Assert.assertNull(Assert.java:656)
>   at 
> org.apache.hadoop.hdfs.util.TestByteArrayManager.testByteArrayManager(TestByteArrayManager.java:384)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9383) TestByteArrayManager#testByteArrayManager fails

2015-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14996955#comment-14996955
 ] 

Hudson commented on HDFS-9383:
--

ABORTED: Integrated in Hadoop-Mapreduce-trunk #2586 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2586/])
HDFS-9383. TestByteArrayManager#testByteArrayManager fails. Contributed 
(kihwal: rev ef926b2e3824475581454c1e17a0d7c94529efde)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/util/TestByteArrayManager.java


> TestByteArrayManager#testByteArrayManager fails
> ---
>
> Key: HDFS-9383
> URL: https://issues.apache.org/jira/browse/HDFS-9383
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Tsz Wo Nicholas Sze
> Fix For: 3.0.0, 2.7.3
>
> Attachments: h9383_20151107.patch, hdfs-9383.log
>
>
> This was seen in the trunk builds
> https://builds.apache.org/job/Hadoop-Hdfs-trunk
> {noformat}
> Running org.apache.hadoop.hdfs.util.TestByteArrayManager
> Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 6.539 sec <<< 
> FAILURE!
>  - in org.apache.hadoop.hdfs.util.TestByteArrayManager
> testByteArrayManager(org.apache.hadoop.hdfs.util.TestByteArrayManager)  Time 
> elapsed: 5.409 sec  <<< FAILURE!
> java.lang.AssertionError: expected null, but was:<[32: 2/64, free=5]>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotNull(Assert.java:664)
>   at org.junit.Assert.assertNull(Assert.java:646)
>   at org.junit.Assert.assertNull(Assert.java:656)
>   at 
> org.apache.hadoop.hdfs.util.TestByteArrayManager.testByteArrayManager(TestByteArrayManager.java:384)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9383) TestByteArrayManager#testByteArrayManager fails

2015-11-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14996958#comment-14996958
 ] 

Hudson commented on HDFS-9383:
--

ABORTED: Integrated in Hadoop-Yarn-trunk-Java8 #656 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/656/])
HDFS-9383. TestByteArrayManager#testByteArrayManager fails. Contributed 
(kihwal: rev ef926b2e3824475581454c1e17a0d7c94529efde)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/util/TestByteArrayManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> TestByteArrayManager#testByteArrayManager fails
> ---
>
> Key: HDFS-9383
> URL: https://issues.apache.org/jira/browse/HDFS-9383
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Tsz Wo Nicholas Sze
> Fix For: 3.0.0, 2.7.3
>
> Attachments: h9383_20151107.patch, hdfs-9383.log
>
>
> This was seen in the trunk builds
> https://builds.apache.org/job/Hadoop-Hdfs-trunk
> {noformat}
> Running org.apache.hadoop.hdfs.util.TestByteArrayManager
> Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 6.539 sec <<< 
> FAILURE!
>  - in org.apache.hadoop.hdfs.util.TestByteArrayManager
> testByteArrayManager(org.apache.hadoop.hdfs.util.TestByteArrayManager)  Time 
> elapsed: 5.409 sec  <<< FAILURE!
> java.lang.AssertionError: expected null, but was:<[32: 2/64, free=5]>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotNull(Assert.java:664)
>   at org.junit.Assert.assertNull(Assert.java:646)
>   at org.junit.Assert.assertNull(Assert.java:656)
>   at 
> org.apache.hadoop.hdfs.util.TestByteArrayManager.testByteArrayManager(TestByteArrayManager.java:384)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

