[jira] [Commented] (HDFS-8630) WebHDFS : Support get/setStoragePolicy

2015-10-07 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14948168#comment-14948168
 ] 

Vinayakumar B commented on HDFS-8630:
-

Thanks [~surendrasingh] for working on this.

I have a few comments:
1. Need to rebase the patch with the latest trunk code.
2. In some places the formatting is not proper, so please format the code for 
the changed lines.
3. In WebHdfsFileSystem.java, {{XAttrEncodingParam}} is not required to be passed.
{code}
+return new FsPathResponseRunner>(
+    op, null, new XAttrEncodingParam(XAttrCodec.HEX)) {
{code}

4. In the tests, in {{testGetAllStoragePolicy}}, I think you are just doing 
duplicate work: WebHDFS also uses HTTP to get storage policies, and you are 
additionally querying via an explicit PUT request. I think it would be better to 
get the expected values via DFS, and the actual values via both WebHDFS and the 
direct PUT query.

5. Same as #4: in {{testGetandSetStoragePolicy}}, each operation can be 
cross-verified again by using DFS.

> WebHDFS : Support get/setStoragePolicy 
> ---
>
> Key: HDFS-8630
> URL: https://issues.apache.org/jira/browse/HDFS-8630
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-8630.001.patch, HDFS-8630.patch
>
>
> User can set and get the storage policy from the filesystem object. The same 
> operation can be allowed through the REST API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8941) DistributedFileSystem listCorruptFileBlocks API should resolve relative path

2015-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14948164#comment-14948164
 ] 

Hadoop QA commented on HDFS-8941:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  30m 14s | Pre-patch trunk has 748 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |  11m 26s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  15m 47s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 36s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   4m 21s | The applied patch generated 1 
new checkstyle issue (total was 21, now 21). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   2m 27s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 48s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   6m 56s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   4m 43s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 112m 53s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 42s | Tests passed in 
hadoop-hdfs-client. |
| | | 190m 59s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.TestClusterId |
| Timed out tests | 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshot |
|   | org.apache.hadoop.hdfs.server.namenode.TestProcessCorruptBlocks |
|   | org.apache.hadoop.hdfs.TestEncryptedTransfer |
|   | org.apache.hadoop.hdfs.server.namenode.TestNNStorageRetentionFunctional |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765509/HDFS-8941-04.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 35affec |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12849/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs-client.html
 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12849/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12849/artifact/patchprocess/diffcheckstylehadoop-hdfs-client.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12849/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12849/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12849/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12849/console |


This message was automatically generated.

> DistributedFileSystem listCorruptFileBlocks API should resolve relative path
> 
>
> Key: HDFS-8941
> URL: https://issues.apache.org/jira/browse/HDFS-8941
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-8941-00.patch, HDFS-8941-01.patch, 
> HDFS-8941-02.patch, HDFS-8941-03.patch, HDFS-8941-04.patch
>
>
> Presently {{DFS#listCorruptFileBlocks(path)}} API is not resolving the given 
> path relative to the workingDir. This jira is to discuss and provide the 
> implementation of the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8164) cTime is 0 in VERSION file for newly formatted NameNode.

2015-10-07 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-8164:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

> cTime is 0 in VERSION file for newly formatted NameNode.
> 
>
> Key: HDFS-8164
> URL: https://issues.apache.org/jira/browse/HDFS-8164
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.3-alpha
>Reporter: Chris Nauroth
>Assignee: Xiao Chen
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-8164.001.patch, HDFS-8164.002.patch, 
> HDFS-8164.003.patch, HDFS-8164.004.patch, HDFS-8164.005.patch, 
> HDFS-8164.006.patch, HDFS-8164.007.patch
>
>
> After formatting a NameNode and inspecting its VERSION file, the cTime 
> property shows 0.  The value does get updated to current time during an 
> upgrade, but I believe this is intended to be the creation time of the 
> cluster, and therefore the initial value of 0 before an upgrade can cause 
> confusion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8164) cTime is 0 in VERSION file for newly formatted NameNode.

2015-10-07 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14948159#comment-14948159
 ] 

Yongjun Zhang commented on HDFS-8164:
-

Committed to trunk and branch-2. Thanks Xiao for the contribution.

Thanks [~cnauroth] for reporting the issue and [~vinayrpet] for the review.





> cTime is 0 in VERSION file for newly formatted NameNode.
> 
>
> Key: HDFS-8164
> URL: https://issues.apache.org/jira/browse/HDFS-8164
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.3-alpha
>Reporter: Chris Nauroth
>Assignee: Xiao Chen
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-8164.001.patch, HDFS-8164.002.patch, 
> HDFS-8164.003.patch, HDFS-8164.004.patch, HDFS-8164.005.patch, 
> HDFS-8164.006.patch, HDFS-8164.007.patch
>
>
> After formatting a NameNode and inspecting its VERSION file, the cTime 
> property shows 0.  The value does get updated to current time during an 
> upgrade, but I believe this is intended to be the creation time of the 
> cluster, and therefore the initial value of 0 before an upgrade can cause 
> confusion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8164) cTime is 0 in VERSION file for newly formatted NameNode.

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14948156#comment-14948156
 ] 

Hudson commented on HDFS-8164:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8593 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8593/])
HDFS-8164. cTime is 0 in VERSION file for newly formatted NameNode. (yzhang: 
rev 1107bd399c790467b22e55291c2611fd1c16e156)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


> cTime is 0 in VERSION file for newly formatted NameNode.
> 
>
> Key: HDFS-8164
> URL: https://issues.apache.org/jira/browse/HDFS-8164
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.3-alpha
>Reporter: Chris Nauroth
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: HDFS-8164.001.patch, HDFS-8164.002.patch, 
> HDFS-8164.003.patch, HDFS-8164.004.patch, HDFS-8164.005.patch, 
> HDFS-8164.006.patch, HDFS-8164.007.patch
>
>
> After formatting a NameNode and inspecting its VERSION file, the cTime 
> property shows 0.  The value does get updated to current time during an 
> upgrade, but I believe this is intended to be the creation time of the 
> cluster, and therefore the initial value of 0 before an upgrade can cause 
> confusion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9167) Update pom.xml in other modules to depend on hdfs-client instead of hdfs

2015-10-07 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9167:

Attachment: HDFS-9167.001.patch

> Update pom.xml in other modules to depend on hdfs-client instead of hdfs
> 
>
> Key: HDFS-9167
> URL: https://issues.apache.org/jira/browse/HDFS-9167
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Haohui Mai
>Assignee: Mingliang Liu
> Attachments: HDFS-9167.000.patch, HDFS-9167.001.patch
>
>
> Since now the implementation of the client has been moved to the 
> hadoop-hdfs-client, we should update the poms of other modules in hadoop to 
> use hdfs-client instead of hdfs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9034) "StorageTypeStats" Metric should not count failed storage.

2015-10-07 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14948125#comment-14948125
 ] 

Surendra Singh Lilhore commented on HDFS-9034:
--

The failed tests and the release audit warning are unrelated. I will fix the 
checkstyle issues.
Please review.

> "StorageTypeStats" Metric should not count failed storage.
> --
>
> Key: HDFS-9034
> URL: https://issues.apache.org/jira/browse/HDFS-9034
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Archana T
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-9034.01.patch, HDFS-9034.02.patch, 
> dfsStorage_NN_UI2.png
>
>
> When we remove one storage type from all the DNs, the NN UI still shows an 
> entry for that storage type --
> Ex: for ARCHIVE
> Steps --
> 1. ARCHIVE storage type was added for all DNs
> 2. Stopped DNs
> 3. Removed ARCHIVE storages from all DNs
> 4. Restarted DNs
> NN UI shows the below --
> DFS Storage Types
> Storage Type Configured Capacity Capacity Used Capacity Remaining 
> ARCHIVE   57.18 GB64 KB (0%)  39.82 GB (69.64%)   64 KB   
> 1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-4167) Add support for restoring/rolling back to a snapshot

2015-10-07 Thread Ajith S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajith S updated HDFS-4167:
--
Status: Patch Available  (was: Open)

Addressed the related test case failure and checkstyle errors. Please review.

> Add support for restoring/rolling back to a snapshot
> 
>
> Key: HDFS-4167
> URL: https://issues.apache.org/jira/browse/HDFS-4167
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Suresh Srinivas
>Assignee: Ajith S
> Attachments: HDFS-4167.000.patch, HDFS-4167.001.patch, 
> HDFS-4167.002.patch, HDFS-4167.003.patch, HDFS-4167.004.patch, 
> HDFS-4167.05.patch, HDFS-4167.06.patch, HDFS-4167.07.patch, HDFS-4167.08.patch
>
>
> This jira tracks work related to restoring a directory/file to a snapshot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-4167) Add support for restoring/rolling back to a snapshot

2015-10-07 Thread Ajith S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajith S updated HDFS-4167:
--
Attachment: HDFS-4167.08.patch

> Add support for restoring/rolling back to a snapshot
> 
>
> Key: HDFS-4167
> URL: https://issues.apache.org/jira/browse/HDFS-4167
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Suresh Srinivas
>Assignee: Ajith S
> Attachments: HDFS-4167.000.patch, HDFS-4167.001.patch, 
> HDFS-4167.002.patch, HDFS-4167.003.patch, HDFS-4167.004.patch, 
> HDFS-4167.05.patch, HDFS-4167.06.patch, HDFS-4167.07.patch, HDFS-4167.08.patch
>
>
> This jira tracks work related to restoring a directory/file to a snapshot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-4167) Add support for restoring/rolling back to a snapshot

2015-10-07 Thread Ajith S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajith S updated HDFS-4167:
--
Status: Open  (was: Patch Available)

> Add support for restoring/rolling back to a snapshot
> 
>
> Key: HDFS-4167
> URL: https://issues.apache.org/jira/browse/HDFS-4167
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Suresh Srinivas
>Assignee: Ajith S
> Attachments: HDFS-4167.000.patch, HDFS-4167.001.patch, 
> HDFS-4167.002.patch, HDFS-4167.003.patch, HDFS-4167.004.patch, 
> HDFS-4167.05.patch, HDFS-4167.06.patch, HDFS-4167.07.patch, HDFS-4167.08.patch
>
>
> This jira tracks work related to restoring a directory/file to a snapshot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9209) Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14948118#comment-14948118
 ] 

Hudson commented on HDFS-9209:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #469 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/469/])
HDFS-9209. Erasure coding: Add apache license header in (jing9: rev 
fde729feeb67af18f7d9b1cd156750ec9e8d3304)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatusWithECPolicy.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java
> 
>
> Key: HDFS-9209
> URL: https://issues.apache.org/jira/browse/HDFS-9209
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 3.0.0
>
> Attachments: HDFS-9209.patch
>
>
> Release audit warnings
> https://builds.apache.org/job/PreCommit-HDFS-Build/12834/artifact/patchprocess/patchReleaseAuditProblems.txt
> {noformat}
> !? 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatusWithECPolicy.java
> Lines that start with ? in the release audit  report indicate files that 
> do not have an Apache license header.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8632) Add InterfaceAudience annotation to the erasure coding classes

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14948117#comment-14948117
 ] 

Hudson commented on HDFS-8632:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #469 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/469/])
HDFS-8632. Add InterfaceAudience annotation to the erasure coding (wang: rev 
66e2cfa1a0285f2b4f62a4ffb4d5c1ee54f76156)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/XORErasureCodec.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/RSUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCodingStep.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECSchema.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XORRawEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/AbstractErasureCodec.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XORRawErasureCoderFactory.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoderFactory.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECBlock.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCoder.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/erasurecode/ECCli.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/RSErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XORRawDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureDecoder.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/ErasureCodingWorker.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/GaloisField.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureEncodingStep.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECBlockGroup.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoder.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/XORErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/grouper/BlockGrouper.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawErasureCoderFactory.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/CodecUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ErasureCodingPolicy.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureEncoder.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicies.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureDecodingStep.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/RSErasureCodec.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCodingStep.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/StripedDataStreamer.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/RSErasureDecoder.java
* 
hadoop-

[jira] [Commented] (HDFS-9139) Enable parallel JUnit tests for HDFS Pre-commit

2015-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14948115#comment-14948115
 ] 

Hadoop QA commented on HDFS-9139:
-

(!) A patch to the files used for the QA process has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HDFS-Build/12853/console in case of 
problems.

> Enable parallel JUnit tests for HDFS Pre-commit 
> 
>
> Key: HDFS-9139
> URL: https://issues.apache.org/jira/browse/HDFS-9139
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-9139.01.patch, HDFS-9139.02.patch, 
> HDFS-9139.03.patch
>
>
> Forked from HADOOP-11984.
> With the initial and significant work from [~cnauroth], this Jira is to track 
> and support parallel test runs for HDFS pre-commit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9167) Update pom.xml in other modules to depend on hdfs-client instead of hdfs

2015-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14948113#comment-14948113
 ] 

Hadoop QA commented on HDFS-9167:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  23m 54s | Findbugs (version 3.0.0) 
appears to be broken on trunk. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   8m  8s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 29s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 22s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   4m 41s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 37s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   0m 14s | Post-patch findbugs 
hadoop-client compilation is broken. |
| {color:red}-1{color} | findbugs |   0m 28s | Post-patch findbugs hadoop-dist 
compilation is broken. |
| {color:red}-1{color} | findbugs |   0m 43s | Post-patch findbugs 
hadoop-hdfs-project/hadoop-hdfs-nfs compilation is broken. |
| {color:red}-1{color} | findbugs |   0m 58s | Post-patch findbugs 
hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal compilation is broken. |
| {color:red}-1{color} | findbugs |   1m 13s | Post-patch findbugs 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs 
compilation is broken. |
| {color:red}-1{color} | findbugs |   1m 28s | Post-patch findbugs 
hadoop-mapreduce-project/hadoop-mapreduce-examples compilation is broken. |
| {color:red}-1{color} | findbugs |   1m 42s | Post-patch findbugs 
hadoop-tools/hadoop-archives compilation is broken. |
| {color:red}-1{color} | findbugs |   1m 58s | Post-patch findbugs 
hadoop-tools/hadoop-datajoin compilation is broken. |
| {color:red}-1{color} | findbugs |   2m 13s | Post-patch findbugs 
hadoop-tools/hadoop-distcp compilation is broken. |
| {color:red}-1{color} | findbugs |   2m 28s | Post-patch findbugs 
hadoop-tools/hadoop-extras compilation is broken. |
| {color:red}-1{color} | findbugs |   2m 42s | Post-patch findbugs 
hadoop-tools/hadoop-gridmix compilation is broken. |
| {color:red}-1{color} | findbugs |   2m 57s | Post-patch findbugs 
hadoop-tools/hadoop-rumen compilation is broken. |
| {color:red}-1{color} | findbugs |   3m 11s | Post-patch findbugs 
hadoop-tools/hadoop-streaming compilation is broken. |
| {color:green}+1{color} | findbugs |   3m 11s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   0m 10s | Pre-build of native portion |
| {color:green}+1{color} | client tests |   0m 13s | Tests passed in 
hadoop-client. |
| {color:green}+1{color} | dist tests |   0m 13s | Tests passed in hadoop-dist. 
|
| {color:green}+1{color} | mapreduce tests |   6m  6s | Tests passed in 
hadoop-mapreduce-client-hs. |
| {color:green}+1{color} | mapreduce tests |   0m 38s | Tests passed in 
hadoop-mapreduce-examples. |
| {color:red}-1{color} | tools/hadoop tests |   0m 14s | Tests failed in 
hadoop-archives. |
| {color:red}-1{color} | tools/hadoop tests |   0m 15s | Tests failed in 
hadoop-datajoin. |
| {color:green}+1{color} | tools/hadoop tests |   7m  3s | Tests passed in 
hadoop-distcp. |
| {color:red}-1{color} | tools/hadoop tests |   0m 20s | Tests failed in 
hadoop-extras. |
| {color:red}-1{color} | tools/hadoop tests |   1m 26s | Tests failed in 
hadoop-gridmix. |
| {color:green}+1{color} | tools/hadoop tests |   0m 20s | Tests passed in 
hadoop-rumen. |
| {color:red}-1{color} | tools/hadoop tests |   2m 55s | Tests failed in 
hadoop-streaming. |
| {color:green}+1{color} | hdfs tests |   1m 55s | Tests passed in 
hadoop-hdfs-nfs. |
| {color:green}+1{color} | hdfs tests |   5m 20s | Tests passed in bkjournal. |
| | |  80m  8s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.tools.TestHadoopArchives |
|   | hadoop.contrib.utils.join.TestDataJoin |
|   | hadoop.tools.TestDistCh |
|   | hadoop.mapred.gridmix.TestGridmixSubmission |
|   | hadoop.mapred.gridmix.TestLoadJob |
|   | hadoop.mapred.gridmix.TestSleepJob |
|   | hadoop.mapred.gridmix.TestDistCacheEmulation |
|   | hadoop.streaming.TestDumpTypedBytes |
|   | hadoop.streaming.TestMultipleCachefiles |
|   | hadoop.streaming.TestLoadTypedByte

[jira] [Commented] (HDFS-9137) DeadLock between DataNode#refreshVolumes and BPOfferService#registrationSucceeded

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14948104#comment-14948104
 ] 

Hudson commented on HDFS-9137:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #506 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/506/])
HDFS-9137. DeadLock between DataNode#refreshVolumes and (yliu: rev 
35affec38e17e3f9c21d36be71476072c03f)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java


> DeadLock between DataNode#refreshVolumes and 
> BPOfferService#registrationSucceeded 
> --
>
> Key: HDFS-9137
> URL: https://issues.apache.org/jira/browse/HDFS-9137
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0, 2.7.1
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Fix For: 2.8.0
>
> Attachments: HDFS-9137.00.patch, 
> HDFS-9137.01-WithPreservingRootExceptions.patch, HDFSS-9137.02.patch
>
>
> I can see that these code flows between DataNode#refreshVolumes and 
> BPOfferService#registrationSucceeded could cause a deadlock.
> In practice the situation may be rare, as it requires a user to call 
> refreshVolumes at the time of DN registration with the NN, but it seems the 
> issue can happen.
> Reason for deadlock:
> 1) refreshVolumes will be called with the DN lock and at the end it will 
> also trigger a block report. In the block report call, 
> BPServiceActor#triggerBlockReport calls toString on bpos. Here it takes a 
> readLock on bpos.
> DN lock, then bpos lock.
> 2) The BPOfferService#registrationSucceeded call takes a writeLock on bpos and 
> calls dn.bpRegistrationSucceeded, which is again a synchronized call on DN.
> bpos lock, then DN lock.
> So, this can clearly create a deadlock.
> I think a simple fix could be to move the triggerBlockReport call outside of 
> the DN lock, and I feel that call may not really be needed inside the DN lock.
> Thoughts?
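The lock-order inversion described in the issue, and the suggested fix of moving the block-report trigger outside the DN lock, can be sketched with plain java.util.concurrent locks. This is a hypothetical stand-alone model, not the actual DataNode code: {{dnLock}} stands in for the DataNode monitor and {{bposLock}} for BPOfferService's read/write lock. Because the refresh path below never holds the DN lock while acquiring the bpos lock, the two threads cannot wait on each other in a cycle.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockOrderSketch {
    // Stand-ins: the DataNode monitor and BPOfferService's read/write lock.
    static final ReentrantLock dnLock = new ReentrantLock();
    static final ReentrantReadWriteLock bposLock = new ReentrantReadWriteLock();

    // Fixed refreshVolumes: the block-report trigger (which needs the bpos
    // read lock) runs only after the DN lock has been released.
    static void refreshVolumes() {
        dnLock.lock();
        try {
            // ... update volume state under the DN lock ...
        } finally {
            dnLock.unlock();
        }
        bposLock.readLock().lock();   // acquired without holding dnLock
        try {
            // ... trigger block report ...
        } finally {
            bposLock.readLock().unlock();
        }
    }

    // registrationSucceeded keeps its original order: bpos lock, then DN lock.
    static void registrationSucceeded() {
        bposLock.writeLock().lock();
        try {
            dnLock.lock();
            try {
                // ... record successful registration on the DN ...
            } finally {
                dnLock.unlock();
            }
        } finally {
            bposLock.writeLock().unlock();
        }
    }

    /** Runs both paths concurrently; returns true if neither thread hangs. */
    static boolean runConcurrently() throws InterruptedException {
        Thread a = new Thread(() -> { for (int i = 0; i < 1000; i++) refreshVolumes(); });
        Thread b = new Thread(() -> { for (int i = 0; i < 1000; i++) registrationSucceeded(); });
        a.start(); b.start();
        a.join(TimeUnit.SECONDS.toMillis(10));
        b.join(TimeUnit.SECONDS.toMillis(10));
        return !a.isAlive() && !b.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runConcurrently() ? "no deadlock" : "deadlock");
    }
}
```

Before the fix, the refresh path would hold {{dnLock}} while acquiring the bpos read lock, giving the two threads opposite acquisition orders, which is exactly the deadlock condition described above.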



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9137) DeadLock between DataNode#refreshVolumes and BPOfferService#registrationSucceeded

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14948105#comment-14948105
 ] 

Hudson commented on HDFS-9137:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #1234 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1234/])
HDFS-9137. DeadLock between DataNode#refreshVolumes and (yliu: rev 
35affec38e17e3f9c21d36be71476072c03f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> DeadLock between DataNode#refreshVolumes and 
> BPOfferService#registrationSucceeded 
> --
>
> Key: HDFS-9137
> URL: https://issues.apache.org/jira/browse/HDFS-9137
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0, 2.7.1
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Fix For: 2.8.0
>
> Attachments: HDFS-9137.00.patch, 
> HDFS-9137.01-WithPreservingRootExceptions.patch, HDFSS-9137.02.patch
>
>
> I can see that the code flows between DataNode#refreshVolumes and 
> BPOfferService#registrationSucceeded could cause a deadlock.
> In practice the situation may be rare, since it requires a user to call 
> refreshVolumes at the time of DN registration with the NN, but the issue can 
> happen.
>  Reason for deadlock:
>   1) refreshVolumes is called with the DN lock held, and at the end it also 
> triggers a block report. In the block report call, 
> BPServiceActor#triggerBlockReport calls toString on bpos, which takes the 
> readLock on bpos:
>  DN lock, then bpos lock.
> 2) BPOfferService#registrationSucceeded takes the writeLock on bpos and 
> calls dn.bpRegistrationSucceeded, which is again a synchronized call on DN:
> bpos lock, then DN lock.
> So, this can clearly create a deadlock.
> I think a simple fix could be to move the triggerBlockReport call outside the 
> DN lock; I feel that call is not really needed inside the DN lock.
> Thoughts?
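The inversion described above is a classic lock-ordering deadlock. A standalone Java sketch of the proposed fix (hypothetical class and field names, not the actual Hadoop code), where the bpos lock is only ever taken after the DN lock has been released:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class DataNodeSketch {
    // Stands in for the bpos read/write lock.
    private final ReentrantReadWriteLock bposLock = new ReentrantReadWriteLock();
    final List<String> events = new ArrayList<>();

    void refreshVolumes() {
        boolean volumesChanged;
        synchronized (this) {              // "DN lock" only
            events.add("volumes-updated");
            volumesChanged = true;
        }                                  // DN lock released here
        if (volumesChanged) {
            // Taken outside the DN lock, so this thread's
            // DN-lock -> bpos-lock ordering can never interleave with
            // registrationSucceeded's bpos-lock -> DN-lock ordering.
            triggerBlockReport();
        }
    }

    void triggerBlockReport() {
        bposLock.readLock().lock();        // "bpos lock"
        try {
            events.add("block-report-triggered");
        } finally {
            bposLock.readLock().unlock();
        }
    }
}
```

With this ordering refreshVolumes never holds both locks at once, which breaks the cycle.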





[jira] [Commented] (HDFS-8632) Add InterfaceAudience annotation to the erasure coding classes

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14948101#comment-14948101
 ] 

Hudson commented on HDFS-8632:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2407 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2407/])
HDFS-8632. Add InterfaceAudience annotation to the erasure coding (wang: rev 
66e2cfa1a0285f2b4f62a4ffb4d5c1ee54f76156)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicies.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/RSUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/ErasureCodec.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/CodecUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/RSErasureDecoder.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/ErasureCodingWorker.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XORRawDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawErasureCoderFactory.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECBlockGroup.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/XORErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/RSErasureCodec.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/RSErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureDecodingStep.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCodingStep.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/erasurecode/ECCli.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ErasureCodingPolicy.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/XORErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECBlock.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/AbstractErasureCodec.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoderFactory.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XORRawErasureCoderFactory.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECSchema.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XORRawEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/grouper/BlockGrouper.java
* 
hadoop-hdfs-proje

[jira] [Commented] (HDFS-9209) Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14948102#comment-14948102
 ] 

Hudson commented on HDFS-9209:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2407 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2407/])
HDFS-9209. Erasure coding: Add apache license header in (jing9: rev 
fde729feeb67af18f7d9b1cd156750ec9e8d3304)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatusWithECPolicy.java


> Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java
> 
>
> Key: HDFS-9209
> URL: https://issues.apache.org/jira/browse/HDFS-9209
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 3.0.0
>
> Attachments: HDFS-9209.patch
>
>
> Release audit warnings
> https://builds.apache.org/job/PreCommit-HDFS-Build/12834/artifact/patchprocess/patchReleaseAuditProblems.txt
> {noformat}
> !? 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatusWithECPolicy.java
> Lines that start with ? in the release audit  report indicate files that 
> do not have an Apache license header.
> {noformat}





[jira] [Commented] (HDFS-8442) Remove ServerLifecycleListener from kms/server.xml.

2015-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14948100#comment-14948100
 ] 

Hadoop QA commented on HDFS-8442:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  15m 44s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   8m  3s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 24s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 19s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 30s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | common tests |   1m 38s | Tests passed in 
hadoop-kms. |
| | |  38m 15s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12734324/HDFS-8442-1.patch |
| Optional Tests | javadoc javac unit |
| git revision | trunk / 35affec |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12851/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| hadoop-kms test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12851/artifact/patchprocess/testrun_hadoop-kms.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12851/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12851/console |


This message was automatically generated.

> Remove ServerLifecycleListener from kms/server.xml.
> ---
>
> Key: HDFS-8442
> URL: https://issues.apache.org/jira/browse/HDFS-8442
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-8442-1.patch
>
>
> Remove ServerLifecycleListener from kms/server.xml.
> From Tomcat 7.0.9 the support for ServerLifecycleListener is removed.
> Ref: https://tomcat.apache.org/tomcat-7.0-doc/changelog.html
> Remove ServerLifecycleListener. This was already removed from server.xml and 
> with the Lifecycle re-factoring is no longer required. (markt)
> So if the build environment uses a Tomcat later than this, KMS startup fails:
> {code}
> SEVERE: Begin event threw exception
> java.lang.ClassNotFoundException: 
> org.apache.catalina.mbeans.ServerLifecycleListener
> {code}
> Can we remove this listener?





[jira] [Commented] (HDFS-9204) DatanodeDescriptor#PendingReplicationWithoutTargets is wrongly calculated

2015-10-07 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14948098#comment-14948098
 ] 

Mingliang Liu commented on HDFS-9204:
-

All failing tests pass locally and seem unrelated to this patch.

> DatanodeDescriptor#PendingReplicationWithoutTargets is wrongly calculated
> -
>
> Key: HDFS-9204
> URL: https://issues.apache.org/jira/browse/HDFS-9204
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Mingliang Liu
> Attachments: HDFS-9204.000.patch, HDFS-9204.001.patch
>
>
> This seems to be a regression caused by the merge of EC feature branch. 
> {{DatanodeDescriptor#incrementPendingReplicationWithoutTargets}}, which is 
> added by HDFS-7128 to fix a bug during DN decommission, is no longer called 
> when creating ReplicationWork.





[jira] [Commented] (HDFS-9114) NameNode and DataNode metric log file name should follow the other log file name format.

2015-10-07 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14948095#comment-14948095
 ] 

Surendra Singh Lilhore commented on HDFS-9114:
--

Thanks [~aw] for initiating the discussion in the dev group.

HDFS-8880 and HDFS-8953 added separate log4j configuration for metrics logger
{code}
# NameNode metrics logging.
# The default is to retain two namenode-metrics.log files up to 64MB each.
#
namenode.metrics.logger=INFO,NullAppender
log4j.logger.NameNodeMetricsLog=${namenode.metrics.logger}
log4j.additivity.NameNodeMetricsLog=false
log4j.appender.NNMETRICSRFA=org.apache.log4j.RollingFileAppender
log4j.appender.NNMETRICSRFA.File=${hadoop.log.dir}/namenode-metrics.log
log4j.appender.NNMETRICSRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.NNMETRICSRFA.layout.ConversionPattern=%d{ISO8601} %m%n
log4j.appender.NNMETRICSRFA.MaxBackupIndex=1
log4j.appender.NNMETRICSRFA.MaxFileSize=64MB
#
# DataNode metrics logging.
# The default is to retain two datanode-metrics.log files up to 64MB each.
#
datanode.metrics.logger=INFO,NullAppender
log4j.logger.DataNodeMetricsLog=${datanode.metrics.logger}
log4j.additivity.DataNodeMetricsLog=false
log4j.appender.DNMETRICSRFA=org.apache.log4j.RollingFileAppender
log4j.appender.DNMETRICSRFA.File=${hadoop.log.dir}/datanode-metrics.log
log4j.appender.DNMETRICSRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.DNMETRICSRFA.layout.ConversionPattern=%d{ISO8601} %m%n
log4j.appender.DNMETRICSRFA.MaxBackupIndex=1
log4j.appender.DNMETRICSRFA.MaxFileSize=64MB
{code}

In YARN there is also a jira for metrics logging: YARN-4192.

Through this jira I just want to make a common configuration for all these loggers.
For example:
{code}
# Metrics logging.
#
metrics.logger=INFO,NullAppender
hadoop.metrics.log.file=metrics.log
log4j.logger.MetricsLog=${metrics.logger}
log4j.additivity.MetricsLog=false
log4j.appender.METRICSRFA=org.apache.log4j.RollingFileAppender
log4j.appender.METRICSRFA.File=${hadoop.log.dir}/${hadoop.metrics.log.file}
log4j.appender.METRICSRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.METRICSRFA.layout.ConversionPattern=%d{ISO8601} %m%n
log4j.appender.METRICSRFA.MaxBackupIndex=1
log4j.appender.METRICSRFA.MaxFileSize=64MB
{code}

[~arpitagarwal], [~aw], [~vinayrpet]
If you feel it's not required then we can close this jira.

> NameNode and DataNode metric log file name should follow the other log file 
> name format.
> 
>
> Key: HDFS-9114
> URL: https://issues.apache.org/jira/browse/HDFS-9114
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-9114-branch-2.01.patch, 
> HDFS-9114-branch-2.02.patch, HDFS-9114-trunk.01.patch, 
> HDFS-9114-trunk.02.patch
>
>
> Currently the datanode and namenode metric log file names are 
> {{datanode-metrics.log}} and {{namenode-metrics.log}}.
> This file name should be like {{hadoop-hdfs-namenode-metric-host192.log}}, the 
> same as the namenode log file {{hadoop-hdfs-namenode-host192.log}}.
> This will help when we copy logs for issue analysis from different nodes.





[jira] [Commented] (HDFS-9210) Fix some misuse of %n in VolumeScanner#printStats

2015-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14948072#comment-14948072
 ] 

Hadoop QA commented on HDFS-9210:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m 23s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   8m 11s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 22s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 20s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 29s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 41s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 37s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 35s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 16s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 188m 27s | Tests failed in hadoop-hdfs. |
| | | 235m 25s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestReadStripedFileWithDecoding |
|   | hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765487/HDFS-9210.00.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / fde729f |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12846/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12846/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12846/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12846/console |



> Fix some misuse of %n in VolumeScanner#printStats
> -
>
> Key: HDFS-9210
> URL: https://issues.apache.org/jira/browse/HDFS-9210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Attachments: HDFS-9210.00.patch
>
>
> Found 2 extra "%n" in the VolumeScanner report, and some lines are not well 
> formatted, as shown below. This JIRA is opened to fix the formatting issue.
> {code}
> Block scanner information for volume DS-93fb2503-de00-4f98-a8bc-c2bc13b8f0f7 
> with base path /hadoop/hdfs/data%nBytes verified in last hour   : 
> 136882014
> Blocks scanned in current period  :   
>   5
> Blocks scanned since restart  :   
>   5
> Block pool scans since restart:   
>   0
> Block scan errors since restart   :   
>   0
> Hours until next block pool scan  :   
> 476.000
> Last block scanned: 
> BP-1792969149-192.168.70.101-1444150984999:blk_1073742088_1274
> More blocks to scan in period :   
>   false
> %n
> {code}
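The root cause is that in Java {{%n}} is expanded to the platform line separator only by a Formatter (String.format, printf, and friends); appended to an already-built string it stays as the literal two characters, which is exactly the debris visible in the report above. A minimal standalone illustration (hypothetical class name, not the actual VolumeScanner code):

```java
class PercentNDemo {
    // Bug pattern: literal "%n" concatenated onto a plain string;
    // nothing ever interprets it, so it shows up verbatim in output.
    static String wrong() {
        return "base path /hadoop/hdfs/data" + "%n";
    }

    // Fix: keep "%n" inside the format string so the Formatter
    // expands it to the platform line separator.
    static String right() {
        return String.format("base path %s%n", "/hadoop/hdfs/data");
    }
}
```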





[jira] [Commented] (HDFS-8442) Remove ServerLifecycleListener from kms/server.xml.

2015-10-07 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14948065#comment-14948065
 ] 

nijel commented on HDFS-8442:
-

Removing this line will impact the MBean registration in Tomcat 6.

So I suggest having two XMLs: one for Tomcat 6 based versions and one for Tomcat 7 
versions.
Based on the version passed at build time, we can choose which XML to use.

Any thoughts?

> Remove ServerLifecycleListener from kms/server.xml.
> ---
>
> Key: HDFS-8442
> URL: https://issues.apache.org/jira/browse/HDFS-8442
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-8442-1.patch
>
>
> Remove ServerLifecycleListener from kms/server.xml.
> From Tomcat 7.0.9 the support for ServerLifecycleListener is removed.
> Ref: https://tomcat.apache.org/tomcat-7.0-doc/changelog.html
> Remove ServerLifecycleListener. This was already removed from server.xml and 
> with the Lifecycle re-factoring is no longer required. (markt)
> So if the build environment uses a Tomcat later than this, KMS startup fails:
> {code}
> SEVERE: Begin event threw exception
> java.lang.ClassNotFoundException: 
> org.apache.catalina.mbeans.ServerLifecycleListener
> {code}
> Can we remove this listener?





[jira] [Commented] (HDFS-8632) Add InterfaceAudience annotation to the erasure coding classes

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14948064#comment-14948064
 ] 

Hudson commented on HDFS-8632:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2440 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2440/])
HDFS-8632. Add InterfaceAudience annotation to the erasure coding (wang: rev 
66e2cfa1a0285f2b4f62a4ffb4d5c1ee54f76156)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/GaloisField.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/AbstractErasureCodec.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/XORErasureCodec.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECBlock.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/XORErasureDecoder.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/erasurecode/ECCli.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/RSUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/RSErasureCodec.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XORRawDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoderFactory.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XORRawErasureCoderFactory.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/ErasureCodec.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureEncodingStep.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/DumpUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureDecodingStep.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/ErasureCodingWorker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicies.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/StripedDataStreamer.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/RSErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCodingStep.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECBlockGroup.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/CodecUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawErasureCoderFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/RSErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureEncoder.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ErasureCodingPolicy.java
* 
hadoop-

[jira] [Commented] (HDFS-9209) Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java

2015-10-07 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14948063#comment-14948063
 ] 

Surendra Singh Lilhore commented on HDFS-9209:
--

Thanks [~jingzhao] for review and commit...
Thanks [~zhz] for review.

> Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java
> 
>
> Key: HDFS-9209
> URL: https://issues.apache.org/jira/browse/HDFS-9209
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 3.0.0
>
> Attachments: HDFS-9209.patch
>
>
> Release audit warnings
> https://builds.apache.org/job/PreCommit-HDFS-Build/12834/artifact/patchprocess/patchReleaseAuditProblems.txt
> {noformat}
> !? 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatusWithECPolicy.java
> Lines that start with ? in the release audit  report indicate files that 
> do not have an Apache license header.
> {noformat}





[jira] [Commented] (HDFS-4015) Safemode should count and report orphaned blocks

2015-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14948058#comment-14948058
 ] 

Hadoop QA commented on HDFS-4015:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  21m 46s | Pre-patch trunk has 748 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   8m  9s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 30s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 19s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 46s | The applied patch generated  3 
new checkstyle issues (total was 138, now 138). |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 38s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 36s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 38s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 11s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 154m 54s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 34s | Tests passed in 
hadoop-hdfs-client. |
| | | 209m  6s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.TestAddStripedBlocks |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.balancer.TestBalancer |
| Timed out tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
|   | org.apache.hadoop.hdfs.server.namenode.TestFileContextXAttr |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765483/HDFS-4015.003.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / fde729f |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12847/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs-client.html
 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12847/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12847/artifact/patchprocess/diffcheckstylehadoop-hdfs-client.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12847/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12847/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12847/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12847/console |



> Safemode should count and report orphaned blocks
> 
>
> Key: HDFS-4015
> URL: https://issues.apache.org/jira/browse/HDFS-4015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Anu Engineer
> Attachments: HDFS-4015.001.patch, HDFS-4015.002.patch, 
> HDFS-4015.003.patch
>
>
> The safemode status currently reports the number of unique reported blocks 
> compared to the total number of blocks referenced by the namespace. However, 
> it does not report the inverse: blocks which are reported by datanodes but 
> not referenced by the namespace.
> In the case that an admin accidentally starts up from an old image, this can 
> be confusing: safemode and fsck will show "corrupt files", which are the 
> files which actually have been deleted but got resurrected by restarting from 
> the old image. This will convince them that they can safely force leave 
> safemode and remove these files -- after all, they know that those files 
> should really have been deleted. However, they're not aware that leaving 
> safemode will also unrecoverably delete a bunch of other block files which 
> have been orphaned due to the namespace rollback.
> I'd like to consider reporting something like: "90 of expected 100 
> blocks have been reported. Additionally, 1 blocks ha

[jira] [Updated] (HDFS-9167) Update pom.xml in other modules to depend on hdfs-client instead of hdfs

2015-10-07 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9167:

Status: Patch Available  (was: Open)

> Update pom.xml in other modules to depend on hdfs-client instead of hdfs
> 
>
> Key: HDFS-9167
> URL: https://issues.apache.org/jira/browse/HDFS-9167
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Haohui Mai
>Assignee: Mingliang Liu
> Attachments: HDFS-9167.000.patch
>
>
> Now that the implementation of the client has been moved to 
> hadoop-hdfs-client, we should update the poms of the other modules in Hadoop to 
> use hdfs-client instead of hdfs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9167) Update pom.xml in other modules to depend on hdfs-client instead of hdfs

2015-10-07 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9167:

Attachment: HDFS-9167.000.patch

> Update pom.xml in other modules to depend on hdfs-client instead of hdfs
> 
>
> Key: HDFS-9167
> URL: https://issues.apache.org/jira/browse/HDFS-9167
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Haohui Mai
>Assignee: Mingliang Liu
> Attachments: HDFS-9167.000.patch
>
>
> Since now the implementation of the client has been moved to the 
> hadoop-hdfs-client, we should update the poms of other modules in hadoop to 
> use hdfs-client instead of hdfs.





[jira] [Updated] (HDFS-9167) Update pom.xml in other modules to depend on hdfs-client instead of hdfs

2015-10-07 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9167:

Attachment: (was: HDFS-9167.000.patch)

> Update pom.xml in other modules to depend on hdfs-client instead of hdfs
> 
>
> Key: HDFS-9167
> URL: https://issues.apache.org/jira/browse/HDFS-9167
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Haohui Mai
>Assignee: Mingliang Liu
>
> Since now the implementation of the client has been moved to the 
> hadoop-hdfs-client, we should update the poms of other modules in hadoop to 
> use hdfs-client instead of hdfs.





[jira] [Updated] (HDFS-9167) Update pom.xml in other modules to depend on hdfs-client instead of hdfs

2015-10-07 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9167:

Attachment: HDFS-9167.000.patch

The v0 patch makes the Hadoop modules that depend on {{hadoop-hdfs}} depend on 
{{hadoop-hdfs-client}} instead. Specifically:
# In the {{DistCpSync}} class, uses {{DFSUtilClient}} instead of {{DFSUtil}}
# Makes modules like {{hadoop-streaming}} depend on both the {{jar}} and 
{{test-jar}} targets of {{hadoop-hdfs}} in test scope
# Keeps the {{hadoop-ant}} module depending on {{hadoop-hdfs}}, as it uses 
{{HdfsConfiguration}} in {{src}}
# No need to exclude the {{commons-daemon}} dependency in {{hadoop-client}}, as 
{{hadoop-hdfs-client}} does not depend on it
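The dependency swap described above is a small per-module pom edit; a hedged sketch (group and artifact IDs as used in the Hadoop source tree, versions inherited from the parent pom):

```xml
<!-- Before: module pulled in the full server-side artifact -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs</artifactId>
</dependency>

<!-- After: client-only code depends on the slimmer client artifact;
     tests that still spin up a mini-cluster keep hadoop-hdfs
     (jar and test-jar) in test scope, as point 2 above describes -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs-client</artifactId>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs</artifactId>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs</artifactId>
  <type>test-jar</type>
  <scope>test</scope>
</dependency>
```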

> Update pom.xml in other modules to depend on hdfs-client instead of hdfs
> 
>
> Key: HDFS-9167
> URL: https://issues.apache.org/jira/browse/HDFS-9167
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Haohui Mai
>Assignee: Mingliang Liu
> Attachments: HDFS-9167.000.patch
>
>
> Since now the implementation of the client has been moved to the 
> hadoop-hdfs-client, we should update the poms of other modules in hadoop to 
> use hdfs-client instead of hdfs.





[jira] [Commented] (HDFS-8164) cTime is 0 in VERSION file for newly formatted NameNode.

2015-10-07 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14948043#comment-14948043
 ] 

Yongjun Zhang commented on HDFS-8164:
-

+1 on rev7, will commit momentarily.


> cTime is 0 in VERSION file for newly formatted NameNode.
> 
>
> Key: HDFS-8164
> URL: https://issues.apache.org/jira/browse/HDFS-8164
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.3-alpha
>Reporter: Chris Nauroth
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: HDFS-8164.001.patch, HDFS-8164.002.patch, 
> HDFS-8164.003.patch, HDFS-8164.004.patch, HDFS-8164.005.patch, 
> HDFS-8164.006.patch, HDFS-8164.007.patch
>
>
> After formatting a NameNode and inspecting its VERSION file, the cTime 
> property shows 0.  The value does get updated to current time during an 
> upgrade, but I believe this is intended to be the creation time of the 
> cluster, and therefore the initial value of 0 before an upgrade can cause 
> confusion.





[jira] [Commented] (HDFS-8632) Add InterfaceAudience annotation to the erasure coding classes

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14948029#comment-14948029
 ] 

Hudson commented on HDFS-8632:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1233 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1233/])
HDFS-8632. Add InterfaceAudience annotation to the erasure coding (wang: rev 
66e2cfa1a0285f2b4f62a4ffb4d5c1ee54f76156)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/erasurecode/ECCli.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCodingStep.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/XORErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/AbstractErasureCodec.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/grouper/BlockGrouper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECSchema.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicies.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/RSErasureCodec.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCodingStep.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/GaloisField.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureDecodingStep.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ErasureCodingPolicy.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoderFactory.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawErasureCoderFactory.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/ErasureCodec.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/RSUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XORRawEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/XORErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCoder.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/ErasureCodingWorker.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/CodecUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XORRawErasureCoderFactory.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECBlock.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/DumpUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECBlockGroup.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/StripedDataStreamer.java
* 
hadoop-common-pro

[jira] [Resolved] (HDFS-9212) Are there any official performance tests or reports using WebHDFS

2015-10-07 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu resolved HDFS-9212.
--
Resolution: Invalid

Please send email to u...@hadoop.apache.org.

> Are there any official performance tests or reports using WebHDFS
> -
>
> Key: HDFS-9212
> URL: https://issues.apache.org/jira/browse/HDFS-9212
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: webhdfs
>Reporter: Jingfei Hu
>Priority: Minor
>
> I'd like to know if there are any performance tests or reports on reading 
> and writing files using the WebHDFS REST API. Or any design-time numbers?





[jira] [Created] (HDFS-9212) Are there any official performance tests or reports using WebHDFS

2015-10-07 Thread Jingfei Hu (JIRA)
Jingfei Hu created HDFS-9212:


 Summary: Are there any official performance tests or reports using 
WebHDFS
 Key: HDFS-9212
 URL: https://issues.apache.org/jira/browse/HDFS-9212
 Project: Hadoop HDFS
  Issue Type: Test
  Components: webhdfs
Reporter: Jingfei Hu
Priority: Minor


I'd like to know if there are any performance tests or reports on reading and 
writing files using the WebHDFS REST API. Or any design-time numbers?





[jira] [Commented] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14948021#comment-14948021
 ] 

Hadoop QA commented on HDFS-9205:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  26m 26s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |  13m 11s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  14m 50s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 28s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 19s | The applied patch generated  8 
new checkstyle issues (total was 203, now 207). |
| {color:red}-1{color} | whitespace |   0m  1s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   2m 13s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 54s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 38s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   4m 57s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 127m 24s | Tests failed in hadoop-hdfs. |
| | | 196m 27s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | 
hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks |
| Timed out tests | org.apache.hadoop.hdfs.server.namenode.TestINodeFile |
|   | org.apache.hadoop.hdfs.server.datanode.TestTriggerBlockReport |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765475/h9205_20151008.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / fde729f |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12845/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12845/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12845/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12845/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12845/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12845/console |


This message was automatically generated.

> Do not schedule corrupt blocks for replication
> --
>
> Key: HDFS-9205
> URL: https://issues.apache.org/jira/browse/HDFS-9205
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: h9205_20151007.patch, h9205_20151007b.patch, 
> h9205_20151008.patch
>
>
> Corrupt blocks are, by definition, blocks that cannot be read. As a consequence, 
> they cannot be replicated.  In UnderReplicatedBlocks, there is a queue for 
> QUEUE_WITH_CORRUPT_BLOCKS, and chooseUnderReplicatedBlocks may choose blocks 
> from it.  Scheduling corrupt blocks for replication wastes resources and 
> potentially slows down replication of the higher-priority blocks.
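The behavior the issue proposes can be sketched as follows. This is a deliberately simplified, hypothetical model of the priority queues (the real UnderReplicatedBlocks keeps more state and iterators); it only illustrates never choosing blocks from the corrupt queue:

```java
import java.util.ArrayList;
import java.util.List;

public class ReplicationQueueSketch {
    // Hypothetical priority levels; the corrupt queue is the lowest one.
    static final int LEVELS = 5;
    static final int QUEUE_WITH_CORRUPT_BLOCKS = LEVELS - 1;

    private final List<List<String>> queues = new ArrayList<>();

    public ReplicationQueueSketch() {
        for (int i = 0; i < LEVELS; i++) {
            queues.add(new ArrayList<>());
        }
    }

    public void add(String blockId, int priority) {
        queues.get(priority).add(blockId);
    }

    // Choose up to n blocks, highest priority first, but skip the corrupt
    // queue entirely: a corrupt block cannot be read, so scheduling it for
    // replication is wasted work that delays higher-priority blocks.
    public List<String> chooseUnderReplicatedBlocks(int n) {
        List<String> chosen = new ArrayList<>();
        for (int prio = 0; prio < QUEUE_WITH_CORRUPT_BLOCKS; prio++) {
            for (String b : queues.get(prio)) {
                if (chosen.size() == n) {
                    return chosen;
                }
                chosen.add(b);
            }
        }
        return chosen;
    }
}
```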





[jira] [Commented] (HDFS-8632) Add InterfaceAudience annotation to the erasure coding classes

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14948014#comment-14948014
 ] 

Hudson commented on HDFS-8632:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #505 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/505/])
HDFS-8632. Add InterfaceAudience annotation to the erasure coding (wang: rev 
66e2cfa1a0285f2b4f62a4ffb4d5c1ee54f76156)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCoder.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/ErasureCodingWorker.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/GaloisField.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/RSUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECBlock.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/CodecUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/RSErasureCodec.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/XORErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XORRawEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureEncodingStep.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/RSErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoderFactory.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawDecoder.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicies.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECSchema.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ErasureCodingPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/ErasureCodec.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/grouper/BlockGrouper.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XORRawDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCodingStep.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/XORErasureEncoder.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/erasurecode/ECCli.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/StripedDataStreamer.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECBlockGroup.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/DumpUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/RSErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawErasureCoderFactory.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
* 
hadoop-common-project/hadoop-commo

[jira] [Updated] (HDFS-9071) chooseTargets in ReplicationWork may pass incomplete srcPath

2015-10-07 Thread He Tianyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Tianyi updated HDFS-9071:

Attachment: HDFS-9071.0003.patch

Rebase.

> chooseTargets in ReplicationWork may pass incomplete srcPath
> 
>
> Key: HDFS-9071
> URL: https://issues.apache.org/jira/browse/HDFS-9071
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: He Tianyi
>Assignee: He Tianyi
> Attachments: HDFS-9071.0001.patch, HDFS-9071.0001.patch, 
> HDFS-9071.0003.patch
>
>
> I've observed that chooseTargets in ReplicationWork may pass an incomplete 
> srcPath (not starting with '/') to the block placement policy.
> It is possible that srcPath is used extensively in a custom placement policy. 
> In that case, the incomplete srcPath may further cause an AssertionError if the 
> policy tries to resolve the INode from it.





[jira] [Commented] (HDFS-8632) Add InterfaceAudience annotation to the erasure coding classes

2015-10-07 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14948009#comment-14948009
 ] 

Rakesh R commented on HDFS-8632:


Thank you [~andrew.wang], [~zhz], [~walter.k.su] for the help in resolving this!

> Add InterfaceAudience annotation to the erasure coding classes
> --
>
> Key: HDFS-8632
> URL: https://issues.apache.org/jira/browse/HDFS-8632
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
> Fix For: 3.0.0
>
> Attachments: HDFS-8632-05-rebase.patch, HDFS-8632-05.patch, 
> HDFS-8632-HDFS-7285-00.patch, HDFS-8632-HDFS-7285-01.patch, 
> HDFS-8632-HDFS-7285-02.patch, HDFS-8632-HDFS-7285-03.patch, 
> HDFS-8632-HDFS-7285-04.patch
>
>
> I've noticed some of the erasure coding classes missing 
> {{@InterfaceAudience}} annotation. It would be good to identify the classes 
> and add proper annotation.





[jira] [Commented] (HDFS-8941) DistributedFileSystem listCorruptFileBlocks API should resolve relative path

2015-10-07 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14948008#comment-14948008
 ] 

Rakesh R commented on HDFS-8941:


Thank you [~andrew.wang], I've rebased the patch on the latest trunk code and 
attached it. Please take a look again!

> DistributedFileSystem listCorruptFileBlocks API should resolve relative path
> 
>
> Key: HDFS-8941
> URL: https://issues.apache.org/jira/browse/HDFS-8941
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-8941-00.patch, HDFS-8941-01.patch, 
> HDFS-8941-02.patch, HDFS-8941-03.patch, HDFS-8941-04.patch
>
>
> Presently the {{DFS#listCorruptFileBlocks(path)}} API does not resolve the given 
> path relative to the workingDir. This jira is to discuss and provide an 
> implementation of that resolution.
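The resolution being asked for can be sketched as below, mirroring what other DistributedFileSystem APIs do via fixRelativePart. The class name is a hypothetical stand-in, and java.nio paths replace org.apache.hadoop.fs.Path so the snippet is self-contained:

```java
import java.nio.file.Path;

public class ResolveRelativeSketch {
    // Qualify a relative path against the working directory before handing
    // it to the namenode; absolute paths are returned unchanged.
    public static Path fixRelativePart(Path workingDir, Path p) {
        return p.isAbsolute() ? p : workingDir.resolve(p).normalize();
    }
}
```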





[jira] [Updated] (HDFS-8941) DistributedFileSystem listCorruptFileBlocks API should resolve relative path

2015-10-07 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-8941:
---
Attachment: HDFS-8941-04.patch

> DistributedFileSystem listCorruptFileBlocks API should resolve relative path
> 
>
> Key: HDFS-8941
> URL: https://issues.apache.org/jira/browse/HDFS-8941
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-8941-00.patch, HDFS-8941-01.patch, 
> HDFS-8941-02.patch, HDFS-8941-03.patch, HDFS-8941-04.patch
>
>
> Presently the {{DFS#listCorruptFileBlocks(path)}} API does not resolve the given 
> path relative to the workingDir. This jira is to discuss and provide an 
> implementation of that resolution.





[jira] [Commented] (HDFS-8632) Add InterfaceAudience annotation to the erasure coding classes

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14948002#comment-14948002
 ] 

Hudson commented on HDFS-8632:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #497 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/497/])
HDFS-8632. Add InterfaceAudience annotation to the erasure coding (wang: rev 
66e2cfa1a0285f2b4f62a4ffb4d5c1ee54f76156)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/DumpUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XORRawErasureCoderFactory.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/ErasureCodec.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECSchema.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCodingStep.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/AbstractErasureCodec.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/ErasureCodingWorker.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECBlockGroup.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureDecodingStep.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/XORErasureCodec.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicies.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/XORErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/RSErasureCodec.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/RSErasureEncoder.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XORRawEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/CodecUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/StripedDataStreamer.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/grouper/BlockGrouper.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ErasureCodingPolicy.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECBlock.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/erasurecode/ECCli.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawErasureCoderFactory.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/GaloisField.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XORRawDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureEncodingStep.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-common-

[jira] [Commented] (HDFS-9137) DeadLock between DataNode#refreshVolumes and BPOfferService#registrationSucceeded

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14948003#comment-14948003
 ] 

Hudson commented on HDFS-9137:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #497 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/497/])
HDFS-9137. DeadLock between DataNode#refreshVolumes and (yliu: rev 
35affec38e17e3f9c21d36be71476072c03f)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java


> DeadLock between DataNode#refreshVolumes and 
> BPOfferService#registrationSucceeded 
> --
>
> Key: HDFS-9137
> URL: https://issues.apache.org/jira/browse/HDFS-9137
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0, 2.7.1
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Fix For: 2.8.0
>
> Attachments: HDFS-9137.00.patch, 
> HDFS-9137.01-WithPreservingRootExceptions.patch, HDFSS-9137.02.patch
>
>
> I can see that the code flows between DataNode#refreshVolumes and 
> BPOfferService#registrationSucceeded could cause a deadlock.
> In practice the situation may be rare, since it requires a user to call 
> refreshVolumes at the same time the DN is registering with the NN, but the 
> issue can happen.
>  Reason for the deadlock:
>   1) refreshVolumes is called while holding the DN lock, and at the end it also 
> triggers a block report. In that call, BPServiceActor#triggerBlockReport calls 
> toString on bpos, which takes the read lock on bpos.
>  DN lock, then bpos lock.
> 2) BPOfferService#registrationSucceeded takes the write lock on bpos and 
>  calls dn.bpRegistrationSucceeded, which is again a synchronized call on the DN.
> bpos lock, then DN lock.
> So this can clearly create a deadlock.
> I think a simple fix could be to move the triggerBlockReport call outside the 
> DN lock; I feel that call may not really be needed inside the DN lock.
> Thoughts?
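The proposed fix, releasing the DN lock before touching the bpos lock so the two locks are never held together on this path, can be sketched like this. Class and method names are hypothetical stand-ins for the DataNode monitor and the bpos read/write lock:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockOrderSketch {
    private final Object dnLock = new Object();  // stands in for the DataNode monitor
    private final ReentrantReadWriteLock bpos = new ReentrantReadWriteLock();  // bpos lock

    // Deadlock-prone shape: refreshVolumes took dnLock -> bpos.readLock while
    // registrationSucceeded takes bpos.writeLock -> dnLock (opposite order).
    // The fix: finish the dnLock section first, then take the bpos lock, so
    // only one of the two locks is ever held at a time on this path.
    public String refreshVolumes() {
        synchronized (dnLock) {
            // ... swap in the new volume list under the DN lock ...
        }
        // triggerBlockReport equivalent, now outside the DN lock:
        bpos.readLock().lock();
        try {
            return "block report triggered";
        } finally {
            bpos.readLock().unlock();
        }
    }
}
```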





[jira] [Commented] (HDFS-9145) Tracking methods that hold FSNamesytemLock for too long

2015-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14947999#comment-14947999
 ] 

Hadoop QA commented on HDFS-9145:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  20m 14s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   8m  2s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 28s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 20s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 17s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 21s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests |  13m  8s | Tests failed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests | 190m 28s | Tests failed in hadoop-hdfs. |
| | | 251m 28s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.TestFSNamesystem |
|   | hadoop.hdfs.shortcircuit.TestShortCircuitCache |
|   | hadoop.hdfs.server.namenode.TestINodeAttributeProvider |
| Timed out tests | org.apache.hadoop.http.TestHttpServerLifecycle |
|   | 
org.apache.hadoop.hdfs.server.namenode.ha.TestFailoverWithBlockTokensEnabled |
|   | org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765467/HDFS-9145.001.patch |
| Optional Tests | javac unit findbugs checkstyle javadoc |
| git revision | trunk / fde729f |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12844/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12844/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12844/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12844/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12844/console |


This message was automatically generated.

> Tracking methods that hold FSNamesytemLock for too long
> ---
>
> Key: HDFS-9145
> URL: https://issues.apache.org/jira/browse/HDFS-9145
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Mingliang Liu
> Attachments: HDFS-9145.000.patch, HDFS-9145.001.patch
>
>
> It would be helpful to have a way to track (or at least log a message) when 
> some operation holds the FSNamesystem lock for a long time.
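One way to realize this idea is to record the acquisition time and report on release when a threshold is exceeded. The sketch below is hypothetical — the names `TimedLock` and `HOLD_WARN_MS` are invented for illustration, and the actual patch instruments FSNamesystemLock itself rather than wrapping a generic lock.

```java
import java.util.concurrent.locks.ReentrantLock;

public class TimedLock {
    static final long HOLD_WARN_MS = 1000; // warn if held longer than this
    private final ReentrantLock lock = new ReentrantLock();
    private long acquiredAt;

    public void lock() {
        lock.lock();
        acquiredAt = System.currentTimeMillis(); // stamp taken while holding
    }

    /** Returns a warning string if the hold exceeded the threshold, else null. */
    public String unlock() {
        long heldMs = System.currentTimeMillis() - acquiredAt; // measure before release
        lock.unlock();
        if (heldMs > HOLD_WARN_MS) {
            return "lock held for " + heldMs + " ms by "
                + Thread.currentThread().getName();
        }
        return null;
    }
}
```

In the real NameNode the warning would go to the log (possibly with a stack trace of the holder) instead of being returned to the caller.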



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9137) DeadLock between DataNode#refreshVolumes and BPOfferService#registrationSucceeded

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14947988#comment-14947988
 ] 

Hudson commented on HDFS-9137:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8592 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8592/])
HDFS-9137. DeadLock between DataNode#refreshVolumes and (yliu: rev 
35affec38e17e3f9c21d36be71476072c03f)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java


> DeadLock between DataNode#refreshVolumes and 
> BPOfferService#registrationSucceeded 
> --
>
> Key: HDFS-9137
> URL: https://issues.apache.org/jira/browse/HDFS-9137
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0, 2.7.1
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Fix For: 2.8.0
>
> Attachments: HDFS-9137.00.patch, 
> HDFS-9137.01-WithPreservingRootExceptions.patch, HDFSS-9137.02.patch
>
>
> The code flow between DataNode#refreshVolumes and 
> BPOfferService#registrationSucceeded could cause a deadlock.
> In practice the situation may be rare, since it needs a user to call 
> refreshVolumes at the same time the DN registers with the NN. But the issue 
> can happen.
> Reason for the deadlock:
> 1) refreshVolumes is called under the DN lock, and at the end it also 
> triggers a block report. In the block report call, 
> BPServiceActor#triggerBlockReport calls toString on bpos, which takes the 
> read lock on bpos.
> DN lock, then bpos lock.
> 2) BPOfferService#registrationSucceeded takes the write lock on bpos and 
> calls dn.bpRegistrationSucceeded, which is again a synchronized call on the 
> DN.
> bpos lock, then DN lock.
> So this can clearly create a deadlock.
> I think a simple fix could be to move the triggerBlockReport call outside 
> the DN lock; that call may not really be needed inside the DN lock.
> Thoughts?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9137) DeadLock between DataNode#refreshVolumes and BPOfferService#registrationSucceeded

2015-10-07 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-9137:
-
  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: 2.8.0
Target Version/s: 2.8.0
  Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2.

> DeadLock between DataNode#refreshVolumes and 
> BPOfferService#registrationSucceeded 
> --
>
> Key: HDFS-9137
> URL: https://issues.apache.org/jira/browse/HDFS-9137
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0, 2.7.1
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Fix For: 2.8.0
>
> Attachments: HDFS-9137.00.patch, 
> HDFS-9137.01-WithPreservingRootExceptions.patch, HDFSS-9137.02.patch
>
>
> The code flow between DataNode#refreshVolumes and 
> BPOfferService#registrationSucceeded could cause a deadlock.
> In practice the situation may be rare, since it needs a user to call 
> refreshVolumes at the same time the DN registers with the NN. But the issue 
> can happen.
> Reason for the deadlock:
> 1) refreshVolumes is called under the DN lock, and at the end it also 
> triggers a block report. In the block report call, 
> BPServiceActor#triggerBlockReport calls toString on bpos, which takes the 
> read lock on bpos.
> DN lock, then bpos lock.
> 2) BPOfferService#registrationSucceeded takes the write lock on bpos and 
> calls dn.bpRegistrationSucceeded, which is again a synchronized call on the 
> DN.
> bpos lock, then DN lock.
> So this can clearly create a deadlock.
> I think a simple fix could be to move the triggerBlockReport call outside 
> the DN lock; that call may not really be needed inside the DN lock.
> Thoughts?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8941) DistributedFileSystem listCorruptFileBlocks API should resolve relative path

2015-10-07 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14947970#comment-14947970
 ] 

Andrew Wang commented on HDFS-8941:
---

Thanks for working on this Rakesh! Patch LGTM, but needs a small rebase. Will 
commit after it comes back from Jenkins.

> DistributedFileSystem listCorruptFileBlocks API should resolve relative path
> 
>
> Key: HDFS-8941
> URL: https://issues.apache.org/jira/browse/HDFS-8941
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-8941-00.patch, HDFS-8941-01.patch, 
> HDFS-8941-02.patch, HDFS-8941-03.patch
>
>
> Presently the {{DFS#listCorruptFileBlocks(path)}} API does not resolve the 
> given path relative to the workingDir. This JIRA is to discuss and provide 
> an implementation of the same.
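The resolution being asked for can be sketched with plain `java.nio` types (the Hadoop classes are not shown here; in `FileSystem` this is what the `fixRelativePart` helper does for other APIs): an absolute path passes through unchanged, while a relative one is qualified against the working directory first.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class RelativeResolve {
    // Illustrative stand-in for qualifying a user-supplied path against the
    // filesystem's working directory before passing it to the NameNode.
    public static Path resolve(Path workingDir, Path p) {
        return p.isAbsolute() ? p : workingDir.resolve(p).normalize();
    }
}
```

So a call like `listCorruptFileBlocks(new Path("data"))` with working directory `/user/foo` would operate on `/user/foo/data` instead of failing or resolving against the root.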



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9137) DeadLock between DataNode#refreshVolumes and BPOfferService#registrationSucceeded

2015-10-07 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14947963#comment-14947963
 ] 

Yi Liu commented on HDFS-9137:
--

+1, thanks Uma, Colin and Vinay.

> DeadLock between DataNode#refreshVolumes and 
> BPOfferService#registrationSucceeded 
> --
>
> Key: HDFS-9137
> URL: https://issues.apache.org/jira/browse/HDFS-9137
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0, 2.7.1
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-9137.00.patch, 
> HDFS-9137.01-WithPreservingRootExceptions.patch, HDFSS-9137.02.patch
>
>
> The code flow between DataNode#refreshVolumes and 
> BPOfferService#registrationSucceeded could cause a deadlock.
> In practice the situation may be rare, since it needs a user to call 
> refreshVolumes at the same time the DN registers with the NN. But the issue 
> can happen.
> Reason for the deadlock:
> 1) refreshVolumes is called under the DN lock, and at the end it also 
> triggers a block report. In the block report call, 
> BPServiceActor#triggerBlockReport calls toString on bpos, which takes the 
> read lock on bpos.
> DN lock, then bpos lock.
> 2) BPOfferService#registrationSucceeded takes the write lock on bpos and 
> calls dn.bpRegistrationSucceeded, which is again a synchronized call on the 
> DN.
> bpos lock, then DN lock.
> So this can clearly create a deadlock.
> I think a simple fix could be to move the triggerBlockReport call outside 
> the DN lock; that call may not really be needed inside the DN lock.
> Thoughts?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8894) Set SO_KEEPALIVE on DN server sockets

2015-10-07 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14947960#comment-14947960
 ] 

Andrew Wang commented on HDFS-8894:
---

Hey Kanaka, this patch needs a little rebase since DataStreamer moved. I'm +1 
pending that, though; sorry about the delay in getting to this.

> Set SO_KEEPALIVE on DN server sockets
> -
>
> Key: HDFS-8894
> URL: https://issues.apache.org/jira/browse/HDFS-8894
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Nathan Roberts
>Assignee: Kanaka Kumar Avvaru
> Attachments: HDFS-8894-01.patch, HDFS-8894-01.patch, 
> HDFS-8894-02.patch, HDFS-8894-03.patch
>
>
> SO_KEEPALIVE is not set on things like DataStreamer sockets, which can 
> cause lingering ESTABLISHED sockets when there is a network glitch.
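The change itself is small: enabling SO_KEEPALIVE lets the OS send periodic TCP keepalive probes on idle connections, so a peer that silently vanished eventually causes the socket to error out instead of staying ESTABLISHED forever. `java.net.Socket#setKeepAlive` is the standard API; the helper class below is purely illustrative, and the actual patch wires this into the DN and DataStreamer socket creation paths.

```java
import java.io.IOException;
import java.net.Socket;

public class KeepAliveConfig {
    // Enable OS-level TCP keepalive probes on the given socket.
    public static Socket configure(Socket s) throws IOException {
        s.setKeepAlive(true);
        return s;
    }
}
```

Note the probe interval and retry count are kernel settings (e.g. `tcp_keepalive_time` on Linux), not per-socket Java options.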



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8632) Add InterfaceAudience annotation to the erasure coding classes

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14947957#comment-14947957
 ] 

Hudson commented on HDFS-8632:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8591 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8591/])
HDFS-8632. Add InterfaceAudience annotation to the erasure coding (wang: rev 
66e2cfa1a0285f2b4f62a4ffb4d5c1ee54f76156)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/XORErasureCodec.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECSchema.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/grouper/BlockGrouper.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCodingStep.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/GaloisField.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XORRawErasureCoderFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/RSUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/ErasureCodec.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawErasureCoderFactory.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/RSErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/RSErasureCodec.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/CodecUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/erasurecode/ECCli.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureEncodingStep.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoderFactory.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECBlock.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/XORErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureEncoder.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicies.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECBlockGroup.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/RSErasureEncoder.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/ErasureCodingWorker.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/AbstractErasureCodec.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XORRawDecoder.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/StripedDataStreamer.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XORRawEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureDecoder.java
* 
hadoop-common-project/ha

[jira] [Commented] (HDFS-9204) DatanodeDescriptor#PendingReplicationWithoutTargets is wrongly calculated

2015-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14947955#comment-14947955
 ] 

Hadoop QA commented on HDFS-9204:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 52s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   8m 12s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 31s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 16s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 27s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 36s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 37s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 23s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 222m  1s | Tests failed in hadoop-hdfs. |
| | | 268m 33s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate |
| Timed out tests | org.apache.hadoop.hdfs.TestDecommission |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765462/HDFS-9204.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 7fbf69b |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12842/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12842/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12842/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12842/console |


This message was automatically generated.

> DatanodeDescriptor#PendingReplicationWithoutTargets is wrongly calculated
> -
>
> Key: HDFS-9204
> URL: https://issues.apache.org/jira/browse/HDFS-9204
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Mingliang Liu
> Attachments: HDFS-9204.000.patch, HDFS-9204.001.patch
>
>
> This seems to be a regression caused by the merge of EC feature branch. 
> {{DatanodeDescriptor#incrementPendingReplicationWithoutTargets}}, which is 
> added by HDFS-7128 to fix a bug during DN decommission, is no longer called 
> when creating ReplicationWork.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9209) Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14947954#comment-14947954
 ] 

Hudson commented on HDFS-9209:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #496 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/496/])
HDFS-9209. Erasure coding: Add apache license header in (jing9: rev 
fde729feeb67af18f7d9b1cd156750ec9e8d3304)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatusWithECPolicy.java


> Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java
> 
>
> Key: HDFS-9209
> URL: https://issues.apache.org/jira/browse/HDFS-9209
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 3.0.0
>
> Attachments: HDFS-9209.patch
>
>
> Release audit warnings
> https://builds.apache.org/job/PreCommit-HDFS-Build/12834/artifact/patchprocess/patchReleaseAuditProblems.txt
> {noformat}
> !? 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatusWithECPolicy.java
> Lines that start with ? in the release audit  report indicate files that 
> do not have an Apache license header.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9176) TestDirectoryScanner#testThrottling often fails.

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14947953#comment-14947953
 ] 

Hudson commented on HDFS-9176:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #496 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/496/])
HDFS-9176. Fix TestDirectoryScanner#testThrottling often fails. (Daniel (lei: 
rev 6dd47d754cb11297c8710a5c318c034abea7a836)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> TestDirectoryScanner#testThrottling often fails.
> 
>
> Key: HDFS-9176
> URL: https://issues.apache.org/jira/browse/HDFS-9176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Yi Liu
>Assignee: Daniel Templeton
>Priority: Minor
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9176.001.patch, HDFS-9176.002.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/12736/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/7732/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9176) TestDirectoryScanner#testThrottling often fails.

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14947929#comment-14947929
 ] 

Hudson commented on HDFS-9176:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #468 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/468/])
HDFS-9176. Fix TestDirectoryScanner#testThrottling often fails. (Daniel (lei: 
rev 6dd47d754cb11297c8710a5c318c034abea7a836)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java


> TestDirectoryScanner#testThrottling often fails.
> 
>
> Key: HDFS-9176
> URL: https://issues.apache.org/jira/browse/HDFS-9176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Yi Liu
>Assignee: Daniel Templeton
>Priority: Minor
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9176.001.patch, HDFS-9176.002.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/12736/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/7732/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9209) Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14947925#comment-14947925
 ] 

Hudson commented on HDFS-9209:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2439 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2439/])
HDFS-9209. Erasure coding: Add apache license header in (jing9: rev 
fde729feeb67af18f7d9b1cd156750ec9e8d3304)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatusWithECPolicy.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java
> 
>
> Key: HDFS-9209
> URL: https://issues.apache.org/jira/browse/HDFS-9209
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 3.0.0
>
> Attachments: HDFS-9209.patch
>
>
> Release audit warnings
> https://builds.apache.org/job/PreCommit-HDFS-Build/12834/artifact/patchprocess/patchReleaseAuditProblems.txt
> {noformat}
> !? 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatusWithECPolicy.java
> Lines that start with ? in the release audit  report indicate files that 
> do not have an Apache license header.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9176) TestDirectoryScanner#testThrottling often fails.

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14947913#comment-14947913
 ] 

Hudson commented on HDFS-9176:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2406 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2406/])
HDFS-9176. Fix TestDirectoryScanner#testThrottling often fails. (Daniel (lei: 
rev 6dd47d754cb11297c8710a5c318c034abea7a836)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java


> TestDirectoryScanner#testThrottling often fails.
> 
>
> Key: HDFS-9176
> URL: https://issues.apache.org/jira/browse/HDFS-9176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Yi Liu
>Assignee: Daniel Templeton
>Priority: Minor
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9176.001.patch, HDFS-9176.002.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/12736/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/7732/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8632) Add InterfaceAudience annotation to the erasure coding classes

2015-10-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-8632:
--
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks again Rakesh for the patch and Zhe for reviewing.

> Add InterfaceAudience annotation to the erasure coding classes
> --
>
> Key: HDFS-8632
> URL: https://issues.apache.org/jira/browse/HDFS-8632
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
> Fix For: 3.0.0
>
> Attachments: HDFS-8632-05-rebase.patch, HDFS-8632-05.patch, 
> HDFS-8632-HDFS-7285-00.patch, HDFS-8632-HDFS-7285-01.patch, 
> HDFS-8632-HDFS-7285-02.patch, HDFS-8632-HDFS-7285-03.patch, 
> HDFS-8632-HDFS-7285-04.patch
>
>
> I've noticed some of the erasure coding classes missing 
> {{@InterfaceAudience}} annotation. It would be good to identify the classes 
> and add proper annotation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8632) Add InterfaceAudience annotation to the erasure coding classes

2015-10-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-8632:
--
Summary: Add InterfaceAudience annotation to the erasure coding classes  
(was: Erasure Coding: Add InterfaceAudience annotation to the erasure coding 
classes)

> Add InterfaceAudience annotation to the erasure coding classes
> --
>
> Key: HDFS-8632
> URL: https://issues.apache.org/jira/browse/HDFS-8632
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-8632-05-rebase.patch, HDFS-8632-05.patch, 
> HDFS-8632-HDFS-7285-00.patch, HDFS-8632-HDFS-7285-01.patch, 
> HDFS-8632-HDFS-7285-02.patch, HDFS-8632-HDFS-7285-03.patch, 
> HDFS-8632-HDFS-7285-04.patch
>
>
> I've noticed some of the erasure coding classes missing 
> {{@InterfaceAudience}} annotation. It would be good to identify the classes 
> and add proper annotation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8632) Erasure Coding: Add InterfaceAudience annotation to the erasure coding classes

2015-10-07 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14947907#comment-14947907
 ] 

Andrew Wang commented on HDFS-8632:
---

+1 LGTM will commit shortly, thanks Rakesh!

> Erasure Coding: Add InterfaceAudience annotation to the erasure coding classes
> --
>
> Key: HDFS-8632
> URL: https://issues.apache.org/jira/browse/HDFS-8632
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-8632-05-rebase.patch, HDFS-8632-05.patch, 
> HDFS-8632-HDFS-7285-00.patch, HDFS-8632-HDFS-7285-01.patch, 
> HDFS-8632-HDFS-7285-02.patch, HDFS-8632-HDFS-7285-03.patch, 
> HDFS-8632-HDFS-7285-04.patch
>
>
> I've noticed some of the erasure coding classes missing 
> {{@InterfaceAudience}} annotation. It would be good to identify the classes 
> and add proper annotation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9210) Fix some misuse of %n in VolumeScanner#printStats

2015-10-07 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14947904#comment-14947904
 ] 

Andrew Wang commented on HDFS-9210:
---

+1 pending, thanks Xiaoyu

> Fix some misuse of %n in VolumeScanner#printStats
> -
>
> Key: HDFS-9210
> URL: https://issues.apache.org/jira/browse/HDFS-9210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Attachments: HDFS-9210.00.patch
>
>
> Found 2 extra "%n" strings in the VolumeScanner report, and the lines below 
> are not well formatted. This JIRA is opened to fix the format issue.
> {code}
> Block scanner information for volume DS-93fb2503-de00-4f98-a8bc-c2bc13b8f0f7 
> with base path /hadoop/hdfs/data%nBytes verified in last hour   : 
> 136882014
> Blocks scanned in current period  :   
>   5
> Blocks scanned since restart  :   
>   5
> Block pool scans since restart:   
>   0
> Block scan errors since restart   :   
>   0
> Hours until next block pool scan  :   
> 476.000
> Last block scanned: 
> BP-1792969149-192.168.70.101-1444150984999:blk_1073742088_1274
> More blocks to scan in period :   
>   false
> %n
> {code}
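The literal `%n` in the report above is the usual symptom of this bug class: `%n` is only translated into a platform newline when the string goes through a `Formatter` (`String.format`, `printf`), while plain string concatenation emits it verbatim. A minimal sketch (method names invented for illustration, not the VolumeScanner code):

```java
public class PercentNDemo {
    // Buggy pattern: "%n" concatenated into a plain string is printed literally.
    public static String wrong(String basePath) {
        return "with base path " + basePath + "%n";
    }

    // Fixed pattern: "%n" inside a format string becomes the platform newline.
    public static String right(String basePath) {
        return String.format("with base path %s%n", basePath);
    }
}
```

The fix is either to route the whole line through `String.format` or to drop the `%n` from strings that are never formatted.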



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9006) Provide BlockPlacementPolicy that supports upgrade domain

2015-10-07 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14947903#comment-14947903
 ] 

Lei (Eddy) Xu commented on HDFS-9006:
-

Thanks for working on this, [~mingma]. In general, it looks great to me. 

A few small nitpicks:

* We can make the class members in {{BlockPlacementPolicyWithUpgradeDomain}} and 
{{BlockPlacementStatusWithUpgradeDomain}} {{private final}}. 
* Since we are using JDK7 now, we can use type inference to make the code 
consistent with the new code base, e.g., change 
{{Set<String> upgradeDomains = new HashSet<String>();}} to {{... = new 
HashSet<>();}}
* Can you make some adjustments to the comments for 
{{BlockPlacementPolicyWithUpgradeDomain#pickupReplicaSet}}? They are a little bit 
hard to understand. I'd suggest modifying the comments in the following aspects:
1. Describe that it is for "picking the node for deleting the 
over-replicated replica" while meeting the rack and upgrade domain policy. 
2. Make the {{S1}} and {{S2}} terminology consistent with 
{{moreThanOneDomainSet}} and {{shareRackAndUPgradeDomainSet}}.
3. Is the order of choosing replicas: (rack + upgrade domain (UD)) > (no 
rack + UD) > (rack + no UD) > (no rack + no UD)? Can we explicitly describe 
it in the comments? 
4. Should {{upgradeDomainFactor}} be equal to {{replicaFactor}} in general? 
E.g., does it make sense to set the default value to {{dfs.replication}}?

Regarding #3, what is the typical relationship between {{rack}} and 
{{upgrade domains}}? Since we prioritize among block placement choices to 
satisfy these two requirements, do you know what implications this will 
have for the current data availability model? I'd love to hear more about your 
thoughts on this.

Again, thanks a lot for this great work! 
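The diamond-operator suggestion above can be shown with a minimal standalone sketch (illustrative only, not code from the patch):

```java
import java.util.HashSet;
import java.util.Set;

public class DiamondDemo {
  public static void main(String[] args) {
    // Pre-JDK7 style repeats the type argument on both sides.
    Set<String> verbose = new HashSet<String>();
    // The JDK7 diamond lets the compiler infer it from the declared type.
    Set<String> inferred = new HashSet<>();
    verbose.add("ud1");
    inferred.add("ud1");
    System.out.println(verbose.equals(inferred)); // true: identical contents
  }
}
```

Both forms compile to the same bytecode; the diamond simply removes the redundant type argument on the right-hand side.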


> Provide BlockPlacementPolicy that supports upgrade domain
> -
>
> Key: HDFS-9006
> URL: https://issues.apache.org/jira/browse/HDFS-9006
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ming Ma
>Assignee: Ming Ma
> Attachments: HDFS-9006.patch
>
>
> As part of the upgrade domain feature, we need to provide the actual upgrade 
> domain block placement.
> Namenode provides a mechanism to specify custom block placement policy. We 
> can use that to implement BlockPlacementPolicy with upgrade domain support.
> {noformat}
> 
> dfs.block.replicator.classname
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyWithUpgradeDomain
> 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9211) Fix incorrect version in hadoop-hdfs-native-client/pom.xml from HDFS-9170 branch-2 backport

2015-10-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-9211:
--
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed to just branch-2, thanks Eric.

Any interest in fixing up the RAT issue too? It looks related to the 
hdfs-native-client refactor.

> Fix incorrect version in hadoop-hdfs-native-client/pom.xml from HDFS-9170 
> branch-2 backport
> ---
>
> Key: HDFS-9211
> URL: https://issues.apache.org/jira/browse/HDFS-9211
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Eric Payne
>Assignee: Eric Payne
> Fix For: 2.8.0
>
> Attachments: HDFS-9211-branch-2.001.patch
>
>
> When HDFS-9170 was backported to branch-2, the version in 
> hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9211) Fix incorrect version in hadoop-hdfs-native-client/pom.xml from HDFS-9170 branch-2 backport

2015-10-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-9211:
--
Summary: Fix incorrect version in hadoop-hdfs-native-client/pom.xml from 
HDFS-9170 branch-2 backport  (was: Fix incorrect version in 
hadoop-hdfs-native-client/pom.xml from HDFS-9170)

> Fix incorrect version in hadoop-hdfs-native-client/pom.xml from HDFS-9170 
> branch-2 backport
> ---
>
> Key: HDFS-9211
> URL: https://issues.apache.org/jira/browse/HDFS-9211
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: HDFS-9211-branch-2.001.patch
>
>
> When HDFS-9170 was backported to branch-2, the version in 
> hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9211) Fix incorrect version in hadoop-hdfs-native-client/pom.xml from HDFS-9170

2015-10-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-9211:
--
Summary: Fix incorrect version in hadoop-hdfs-native-client/pom.xml from 
HDFS-9170  (was: branch-2 build broken by incorrect version in 
hadoop-hdfs-native-client/pom.xml )

> Fix incorrect version in hadoop-hdfs-native-client/pom.xml from HDFS-9170
> -
>
> Key: HDFS-9211
> URL: https://issues.apache.org/jira/browse/HDFS-9211
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: HDFS-9211-branch-2.001.patch
>
>
> When HDFS-9170 was backported to branch-2, the version in 
> hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9211) branch-2 build broken by incorrect version in hadoop-hdfs-native-client/pom.xml

2015-10-07 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14947900#comment-14947900
 ] 

Andrew Wang commented on HDFS-9211:
---

+1 LGTM, will commit shortly, thanks Eric

> branch-2 build broken by incorrect version in 
> hadoop-hdfs-native-client/pom.xml 
> 
>
> Key: HDFS-9211
> URL: https://issues.apache.org/jira/browse/HDFS-9211
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: HDFS-9211-branch-2.001.patch
>
>
> When HDFS-9170 was backported to branch-2, the version in 
> hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9211) branch-2 build broken by incorrect version in hadoop-hdfs-native-client/pom.xml

2015-10-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-9211:
--
Affects Version/s: 2.8.0

> branch-2 build broken by incorrect version in 
> hadoop-hdfs-native-client/pom.xml 
> 
>
> Key: HDFS-9211
> URL: https://issues.apache.org/jira/browse/HDFS-9211
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: HDFS-9211-branch-2.001.patch
>
>
> When HDFS-9170 was backported to branch-2, the version in 
> hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9110) Use Files.walkFileTree in NNUpgradeUtil#doPreUpgrade for better efficiency

2015-10-07 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14947896#comment-14947896
 ] 

Andrew Wang commented on HDFS-9110:
---

Also I just renamed the JIRA summary to be more descriptive :)

> Use Files.walkFileTree in NNUpgradeUtil#doPreUpgrade for better efficiency
> --
>
> Key: HDFS-9110
> URL: https://issues.apache.org/jira/browse/HDFS-9110
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Charlie Helin
>Assignee: Charlie Helin
>Priority: Minor
> Attachments: HDFS-9110.00.patch, HDFS-9110.01.patch, 
> HDFS-9110.02.patch, HDFS-9110.03.patch, HDFS-9110.04.patch, HDFS-9110.05.patch
>
>
> This is a request to do some cosmetic improvements on top of HDFS-8480. There 
> are a couple of File -> java.nio.file.Path conversions which are a little bit 
> distracting. 
> The second aspect is more around efficiency; to be perfectly honest, I'm not 
> sure how many files may be processed. However, as HDFS-8480 
> alludes to, it appears that this number could be significantly large. 
> The current implementation is basically collect-and-process, where all files 
> are first examined, put into a collection, and then processed. 
> HDFS-8480 could be further enhanced by employing a single iteration, 
> without creating an intermediary collection of filenames, by using a FileWalker.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9110) Use Files.walkFileTree in NNUpgradeUtil#doPreUpgrade for better efficiency

2015-10-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-9110:
--
Summary: Use Files.walkFileTree in NNUpgradeUtil#doPreUpgrade for better 
efficiency  (was: Use Files.walkFileTree in doPreUpgrade for better efficiency)

> Use Files.walkFileTree in NNUpgradeUtil#doPreUpgrade for better efficiency
> --
>
> Key: HDFS-9110
> URL: https://issues.apache.org/jira/browse/HDFS-9110
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Charlie Helin
>Assignee: Charlie Helin
>Priority: Minor
> Attachments: HDFS-9110.00.patch, HDFS-9110.01.patch, 
> HDFS-9110.02.patch, HDFS-9110.03.patch, HDFS-9110.04.patch, HDFS-9110.05.patch
>
>
> This is a request to do some cosmetic improvements on top of HDFS-8480. There 
> are a couple of File -> java.nio.file.Path conversions which are a little bit 
> distracting. 
> The second aspect is more around efficiency; to be perfectly honest, I'm not 
> sure how many files may be processed. However, as HDFS-8480 
> alludes to, it appears that this number could be significantly large. 
> The current implementation is basically collect-and-process, where all files 
> are first examined, put into a collection, and then processed. 
> HDFS-8480 could be further enhanced by employing a single iteration, 
> without creating an intermediary collection of filenames, by using a FileWalker.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9110) Use Files.walkFileTree in doPreUpgrade for better efficiency

2015-10-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-9110:
--
Summary: Use Files.walkFileTree in doPreUpgrade for better efficiency  
(was: Improve upon HDFS-8480)

> Use Files.walkFileTree in doPreUpgrade for better efficiency
> 
>
> Key: HDFS-9110
> URL: https://issues.apache.org/jira/browse/HDFS-9110
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Charlie Helin
>Assignee: Charlie Helin
>Priority: Minor
> Attachments: HDFS-9110.00.patch, HDFS-9110.01.patch, 
> HDFS-9110.02.patch, HDFS-9110.03.patch, HDFS-9110.04.patch, HDFS-9110.05.patch
>
>
> This is a request to do some cosmetic improvements on top of HDFS-8480. There 
> are a couple of File -> java.nio.file.Path conversions which are a little bit 
> distracting. 
> The second aspect is more around efficiency; to be perfectly honest, I'm not 
> sure how many files may be processed. However, as HDFS-8480 
> alludes to, it appears that this number could be significantly large. 
> The current implementation is basically collect-and-process, where all files 
> are first examined, put into a collection, and then processed. 
> HDFS-8480 could be further enhanced by employing a single iteration, 
> without creating an intermediary collection of filenames, by using a FileWalker.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9110) Improve upon HDFS-8480

2015-10-07 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14947892#comment-14947892
 ] 

Andrew Wang commented on HDFS-9110:
---

Hey Charlie, thanks for working on this; it seems like a nice cleanup. I only have 
some minor comments, overall it looks good:

* Can we avoid reorganizing the imports? This makes backporting harder.
* I think it'd be good to use the maxDepth variant of walkFileTree, since we're 
only doing a first-level listing. I think that lets us skip the preVisit step.
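A minimal sketch of the single-pass, {{maxDepth}} approach suggested above, using the four-argument variant of {{Files.walkFileTree}} for a first-level listing (illustrative only; the class and method names are assumptions, not code from the patch):

```java
import java.io.IOException;
import java.nio.file.FileVisitOption;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.ArrayList;
import java.util.EnumSet;
import java.util.List;

public class ShallowWalk {
  // List first-level regular files in a single pass, without the
  // collect-then-process intermediate collection: maxDepth = 1 confines
  // the walk to the directory's direct entries, so subtrees are never
  // descended into.
  static List<Path> firstLevelFiles(Path dir) throws IOException {
    final List<Path> files = new ArrayList<>();
    Files.walkFileTree(dir, EnumSet.noneOf(FileVisitOption.class), 1,
        new SimpleFileVisitor<Path>() {
          @Override
          public FileVisitResult visitFile(Path f, BasicFileAttributes attrs) {
            // Directories at the maximum depth are also handed to
            // visitFile, so keep only regular files.
            if (attrs.isRegularFile()) {
              files.add(f);
            }
            return FileVisitResult.CONTINUE;
          }
        });
    return files;
  }

  public static void main(String[] args) throws IOException {
    Path dir = Files.createTempDirectory("walkdemo");
    Files.createFile(dir.resolve("a.txt"));
    Files.createDirectory(dir.resolve("sub"));
    Files.createFile(dir.resolve("sub").resolve("b.txt"));
    System.out.println(firstLevelFiles(dir).size()); // 1: only a.txt is at depth 1
  }
}
```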

> Improve upon HDFS-8480
> --
>
> Key: HDFS-9110
> URL: https://issues.apache.org/jira/browse/HDFS-9110
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Charlie Helin
>Assignee: Charlie Helin
>Priority: Minor
> Attachments: HDFS-9110.00.patch, HDFS-9110.01.patch, 
> HDFS-9110.02.patch, HDFS-9110.03.patch, HDFS-9110.04.patch, HDFS-9110.05.patch
>
>
> This is a request to do some cosmetic improvements on top of HDFS-8480. There 
> are a couple of File -> java.nio.file.Path conversions which are a little bit 
> distracting. 
> The second aspect is more around efficiency; to be perfectly honest, I'm not 
> sure how many files may be processed. However, as HDFS-8480 
> alludes to, it appears that this number could be significantly large. 
> The current implementation is basically collect-and-process, where all files 
> are first examined, put into a collection, and then processed. 
> HDFS-8480 could be further enhanced by employing a single iteration, 
> without creating an intermediary collection of filenames, by using a FileWalker.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9204) DatanodeDescriptor#PendingReplicationWithoutTargets is wrongly calculated

2015-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14947868#comment-14947868
 ] 

Hadoop QA commented on HDFS-9204:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m 28s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   8m  0s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 42s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 20s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 28s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 30s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 29s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 11s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests | 192m 48s | Tests passed in hadoop-hdfs. 
|
| | | 239m 33s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765448/HDFS-9204.000.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 99e5204 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12841/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12841/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12841/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12841/console |


This message was automatically generated.

> DatanodeDescriptor#PendingReplicationWithoutTargets is wrongly calculated
> -
>
> Key: HDFS-9204
> URL: https://issues.apache.org/jira/browse/HDFS-9204
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Mingliang Liu
> Attachments: HDFS-9204.000.patch, HDFS-9204.001.patch
>
>
> This seems to be a regression caused by the merge of EC feature branch. 
> {{DatanodeDescriptor#incrementPendingReplicationWithoutTargets}}, which is 
> added by HDFS-7128 to fix a bug during DN decommission, is no longer called 
> when creating ReplicationWork.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8802) dfs.checksum.type is not described in hdfs-default.xml

2015-10-07 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14947852#comment-14947852
 ] 

Tsuyoshi Ozawa commented on HDFS-8802:
--

[~gururaj] thank you for the update. How about adding a description of the 
checksum types we can choose, in addition to the default value?
I think we can choose NULL, CRC32, or CRC32C as the checksum.

> dfs.checksum.type is not described in hdfs-default.xml
> --
>
> Key: HDFS-8802
> URL: https://issues.apache.org/jira/browse/HDFS-8802
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Tsuyoshi Ozawa
>Assignee: Gururaj Shetty
> Attachments: HDFS-8802.patch, HDFS-8802_01.patch, HDFS-8802_02.patch
>
>
> It's a good timing to check other configurations about hdfs-default.xml here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8802) dfs.checksum.type is not described in hdfs-default.xml

2015-10-07 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HDFS-8802:
-
Status: Open  (was: Patch Available)

> dfs.checksum.type is not described in hdfs-default.xml
> --
>
> Key: HDFS-8802
> URL: https://issues.apache.org/jira/browse/HDFS-8802
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Tsuyoshi Ozawa
>Assignee: Gururaj Shetty
> Attachments: HDFS-8802.patch, HDFS-8802_01.patch, HDFS-8802_02.patch
>
>
> It's a good timing to check other configurations about hdfs-default.xml here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9210) Fix some misuse of %n in VolumeScanner#printStats

2015-10-07 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-9210:
-
Status: Patch Available  (was: Open)

> Fix some misuse of %n in VolumeScanner#printStats
> -
>
> Key: HDFS-9210
> URL: https://issues.apache.org/jira/browse/HDFS-9210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Attachments: HDFS-9210.00.patch
>
>
> Found 2 extra "%n" strings in the VolumeScanner report, and the lines below are 
> not well formatted. This JIRA is opened to fix the format issue.
> {code}
> Block scanner information for volume DS-93fb2503-de00-4f98-a8bc-c2bc13b8f0f7 
> with base path /hadoop/hdfs/data%nBytes verified in last hour   : 
> 136882014
> Blocks scanned in current period  :   
>   5
> Blocks scanned since restart  :   
>   5
> Block pool scans since restart:   
>   0
> Block scan errors since restart   :   
>   0
> Hours until next block pool scan  :   
> 476.000
> Last block scanned: 
> BP-1792969149-192.168.70.101-1444150984999:blk_1073742088_1274
> More blocks to scan in period :   
>   false
> %n
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9210) Fix some misuse of %n in VolumeScanner#printStats

2015-10-07 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-9210:
-
Attachment: HDFS-9210.00.patch

> Fix some misuse of %n in VolumeScanner#printStats
> -
>
> Key: HDFS-9210
> URL: https://issues.apache.org/jira/browse/HDFS-9210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Attachments: HDFS-9210.00.patch
>
>
> Found 2 extra "%n" strings in the VolumeScanner report, and the lines below are 
> not well formatted. This JIRA is opened to fix the format issue.
> {code}
> Block scanner information for volume DS-93fb2503-de00-4f98-a8bc-c2bc13b8f0f7 
> with base path /hadoop/hdfs/data%nBytes verified in last hour   : 
> 136882014
> Blocks scanned in current period  :   
>   5
> Blocks scanned since restart  :   
>   5
> Block pool scans since restart:   
>   0
> Block scan errors since restart   :   
>   0
> Hours until next block pool scan  :   
> 476.000
> Last block scanned: 
> BP-1792969149-192.168.70.101-1444150984999:blk_1073742088_1274
> More blocks to scan in period :   
>   false
> %n
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9209) Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14947827#comment-14947827
 ] 

Hudson commented on HDFS-9209:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1232 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1232/])
HDFS-9209. Erasure coding: Add apache license header in (jing9: rev 
fde729feeb67af18f7d9b1cd156750ec9e8d3304)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatusWithECPolicy.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java
> 
>
> Key: HDFS-9209
> URL: https://issues.apache.org/jira/browse/HDFS-9209
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 3.0.0
>
> Attachments: HDFS-9209.patch
>
>
> Release audit warnings
> https://builds.apache.org/job/PreCommit-HDFS-Build/12834/artifact/patchprocess/patchReleaseAuditProblems.txt
> {noformat}
> !? 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatusWithECPolicy.java
> Lines that start with ? in the release audit  report indicate files that 
> do not have an Apache license header.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9176) TestDirectoryScanner#testThrottling often fails.

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14947826#comment-14947826
 ] 

Hudson commented on HDFS-9176:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1232 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1232/])
HDFS-9176. Fix TestDirectoryScanner#testThrottling often fails. (Daniel (lei: 
rev 6dd47d754cb11297c8710a5c318c034abea7a836)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> TestDirectoryScanner#testThrottling often fails.
> 
>
> Key: HDFS-9176
> URL: https://issues.apache.org/jira/browse/HDFS-9176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Yi Liu
>Assignee: Daniel Templeton
>Priority: Minor
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9176.001.patch, HDFS-9176.002.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/12736/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/7732/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-4015) Safemode should count and report orphaned blocks

2015-10-07 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-4015:
---
Attachment: HDFS-4015.003.patch

[~arpitagarwal] Thanks for the review. I have fixed all the issues you flagged.

bq. We will likely see blocks with future generation stamps during intentional 
HDFS rollback. We should disable this check if NN has been restarted with a 
rollback option (either regular or rolling upgrade rollback).

Fixed this by setting shouldPostponeBlocksFromFuture in the rollback path.

bq. I apologize for not noticing this earlier. FsStatus is tagged as public and 
stable, so changing the constructor signature is incompatible. Instead we could 
add a new constructor that initializes bytesInFuture. This will also avoid 
changes to FileSystem, ViewFS, RawLocalFileSystem.
Thanks for catching this, I really appreciate it. I added a function in 
DistributedFileSystem that returns this value instead of modifying FsStatus.

bq. fsck should also print this new counter. We can do it in a separate Jira.
Sure, as soon as this JIRA is committed I will follow up with a JIRA and a patch 
for that.

bq. Don't consider this a binding but I would really like it if bytesInFuture 
can be renamed especially where it is exposed via public interfaces/metrics. It 
sounds confusing/ominous. bytesWithFutureGenerationStamps would be more precise.

Fixed - the counter now looks like this via JMX: "BytesWithFutureGenerationStamps" 
: 1174853312
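The compatibility point discussed above ({{FsStatus}} is tagged public and stable, so its constructor signature cannot change) can be sketched with a hypothetical class: instead of altering the existing constructor, a new overload carries the extra counter, so existing callers keep compiling and linking. All names below are illustrative assumptions, not the actual Hadoop source:

```java
public class StatusCompat {
  static class Status {
    final long capacity;
    final long bytesInFuture; // new counter; hypothetical field name

    // The original public constructor stays untouched, delegating to
    // the new overload with a neutral default.
    Status(long capacity) {
      this(capacity, 0L);
    }

    // New overload added for the new counter; old call sites unaffected.
    Status(long capacity, long bytesInFuture) {
      this.capacity = capacity;
      this.bytesInFuture = bytesInFuture;
    }
  }

  public static void main(String[] args) {
    Status old = new Status(100L);      // existing call sites compile as before
    Status neu = new Status(100L, 42L); // new code can pass the counter
    System.out.println(old.bytesInFuture + " " + neu.bytesInFuture); // 0 42
  }
}
```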

> Safemode should count and report orphaned blocks
> 
>
> Key: HDFS-4015
> URL: https://issues.apache.org/jira/browse/HDFS-4015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Anu Engineer
> Attachments: HDFS-4015.001.patch, HDFS-4015.002.patch, 
> HDFS-4015.003.patch
>
>
> The safemode status currently reports the number of unique reported blocks 
> compared to the total number of blocks referenced by the namespace. However, 
> it does not report the inverse: blocks which are reported by datanodes but 
> not referenced by the namespace.
> In the case that an admin accidentally starts up from an old image, this can 
> be confusing: safemode and fsck will show "corrupt files", which are the 
> files which actually have been deleted but got resurrected by restarting from 
> the old image. This will convince them that they can safely force leave 
> safemode and remove these files -- after all, they know that those files 
> should really have been deleted. However, they're not aware that leaving 
> safemode will also unrecoverably delete a bunch of other block files which 
> have been orphaned due to the namespace rollback.
> I'd like to consider reporting something like: "90 of expected 100 
> blocks have been reported. Additionally, 1 blocks have been reported 
> which do not correspond to any file in the namespace. Forcing exit of 
> safemode will unrecoverably remove those data blocks"
> Whether this statistic is also used for some kind of "inverse safe mode" is 
> the logical next step, but just reporting it as a warning seems easy enough 
> to accomplish and worth doing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9209) Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14947809#comment-14947809
 ] 

Hudson commented on HDFS-9209:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #504 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/504/])
HDFS-9209. Erasure coding: Add apache license header in (jing9: rev 
fde729feeb67af18f7d9b1cd156750ec9e8d3304)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatusWithECPolicy.java


> Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java
> 
>
> Key: HDFS-9209
> URL: https://issues.apache.org/jira/browse/HDFS-9209
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 3.0.0
>
> Attachments: HDFS-9209.patch
>
>
> Release audit warnings
> https://builds.apache.org/job/PreCommit-HDFS-Build/12834/artifact/patchprocess/patchReleaseAuditProblems.txt
> {noformat}
> !? 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatusWithECPolicy.java
> Lines that start with ? in the release audit  report indicate files that 
> do not have an Apache license header.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9211) branch-2 build broken by incorrect version in hadoop-hdfs-native-client/pom.xml

2015-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1494#comment-1494
 ] 

Hadoop QA commented on HDFS-9211:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  16m  1s | Pre-patch branch-2 compilation 
is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:red}-1{color} | javac |   6m 15s | The applied patch generated  3  
additional warning messages. |
| {color:green}+1{color} | javadoc |  10m 21s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 20s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 18s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | native |   1m 37s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests |   0m 47s | Tests passed in 
hadoop-hdfs-native-client. |
| | |  37m 19s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765469/HDFS-9211-branch-2.001.patch
 |
| Optional Tests | javadoc javac unit |
| git revision | branch-2 / ad1f0f3 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12843/artifact/patchprocess/diffJavacWarnings.txt
 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12843/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| hadoop-hdfs-native-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12843/artifact/patchprocess/testrun_hadoop-hdfs-native-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12843/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12843/console |


This message was automatically generated.

> branch-2 build broken by incorrect version in 
> hadoop-hdfs-native-client/pom.xml 
> 
>
> Key: HDFS-9211
> URL: https://issues.apache.org/jira/browse/HDFS-9211
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: HDFS-9211-branch-2.001.patch
>
>
> When HDFS-9170 was backported to branch-2, the version in 
> hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9199) rename dfs.namenode.replication.min to dfs.replication.min

2015-10-07 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14947771#comment-14947771
 ] 

Mingliang Liu commented on HDFS-9199:
-

I'll close this if there is no more input in 3 days. Thanks.

> rename dfs.namenode.replication.min to dfs.replication.min
> --
>
> Key: HDFS-9199
> URL: https://issues.apache.org/jira/browse/HDFS-9199
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>Assignee: Mingliang Liu
>
> dfs.namenode.replication.min should be dfs.replication.min to match the other 
> dfs.replication config knobs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9210) Fix some misuse of %n in VolumeScanner#printStats

2015-10-07 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-9210:
-
Affects Version/s: 2.7.1

> Fix some misuse of %n in VolumeScanner#printStats
> -
>
> Key: HDFS-9210
> URL: https://issues.apache.org/jira/browse/HDFS-9210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
>
> Found 2 extra "%n" strings in the VolumeScanner report, and some lines are 
> not well formatted, as shown below. This JIRA is opened to fix the format issue.
> {code}
> Block scanner information for volume DS-93fb2503-de00-4f98-a8bc-c2bc13b8f0f7 
> with base path /hadoop/hdfs/data%nBytes verified in last hour   : 
> 136882014
> Blocks scanned in current period  :   
>   5
> Blocks scanned since restart  :   
>   5
> Block pool scans since restart:   
>   0
> Block scan errors since restart   :   
>   0
> Hours until next block pool scan  :   
> 476.000
> Last block scanned: 
> BP-1792969149-192.168.70.101-1444150984999:blk_1073742088_1274
> More blocks to scan in period :   
>   false
> %n
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9210) Fix some misuse of %n in VolumeScanner#printStats

2015-10-07 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-9210:
-
Component/s: datanode

> Fix some misuse of %n in VolumeScanner#printStats
> -
>
> Key: HDFS-9210
> URL: https://issues.apache.org/jira/browse/HDFS-9210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
>
> Found 2 extra "%n" strings in the VolumeScanner report, and some lines are 
> not well formatted, as shown below. This JIRA is opened to fix the format issue.
> {code}
> Block scanner information for volume DS-93fb2503-de00-4f98-a8bc-c2bc13b8f0f7 
> with base path /hadoop/hdfs/data%nBytes verified in last hour   : 
> 136882014
> Blocks scanned in current period  :   
>   5
> Blocks scanned since restart  :   
>   5
> Block pool scans since restart:   
>   0
> Block scan errors since restart   :   
>   0
> Hours until next block pool scan  :   
> 476.000
> Last block scanned: 
> BP-1792969149-192.168.70.101-1444150984999:blk_1073742088_1274
> More blocks to scan in period :   
>   false
> %n
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-07 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14947759#comment-14947759
 ] 

Zhe Zhang commented on HDFS-9205:
-

Thanks Nicholas for the work. A few comments:
# bq. As a consequence, they cannot be replicated
Just to clarify: do you mean that even without the patch, those blocks won't be 
re-replicated, even though {{chooseUnderReplicatedBlocks}} returns them? Or are 
they re-replicated in the current logic when they should not be (IIUC that's 
the case)?
# I agree that corrupt blocks are unreadable by HDFS client. But is there a use 
case for an admin to list corrupt blocks and reason about them by accessing the 
local {{blk_}} (and metadata) files? For example, there's a chance (although 
very rare) that the replica is intact and only the metadata file is corrupt.
# If we do want to save the replication work for corrupt blocks, should we get 
rid of {{QUEUE_WITH_CORRUPT_BLOCKS}} altogether?

Nit:
# This line of comment should be updated:
{code}
// and 5 blocks from QUEUE_WITH_CORRUPT_BLOCKS.
{code}

> Do not schedule corrupt blocks for replication
> --
>
> Key: HDFS-9205
> URL: https://issues.apache.org/jira/browse/HDFS-9205
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: h9205_20151007.patch, h9205_20151007b.patch, 
> h9205_20151008.patch
>
>
> Corrupted blocks, by definition, are blocks that cannot be read. As a 
> consequence, they cannot be replicated.  In UnderReplicatedBlocks, there is a 
> queue for QUEUE_WITH_CORRUPT_BLOCKS, and chooseUnderReplicatedBlocks may 
> choose blocks from it.  It seems that scheduling corrupted blocks for 
> replication wastes resources and potentially slows down replication for the 
> higher-priority blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9188) Make block corruption related tests FsDataset-agnostic.

2015-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14947751#comment-14947751
 ] 

Hadoop QA commented on HDFS-9188:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   8m  7s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 12 new or modified test files. |
| {color:green}+1{color} | javac |   7m 56s | There were no new javac warning 
messages. |
| {color:red}-1{color} | release audit |   0m 15s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m  1s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 29s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 33s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   1m  4s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 191m  4s | Tests failed in hadoop-hdfs. |
| | | 214m  7s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestScrLazyPersistFiles |
|   | hadoop.hdfs.server.namenode.TestProcessCorruptBlocks |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765096/HDFS-9188.003.patch |
| Optional Tests | javac unit findbugs checkstyle |
| git revision | trunk / 99e5204 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12840/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12840/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12840/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12840/console |


This message was automatically generated.

> Make block corruption related tests FsDataset-agnostic. 
> 
>
> Key: HDFS-9188
> URL: https://issues.apache.org/jira/browse/HDFS-9188
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS, test
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-9188.000.patch, HDFS-9188.001.patch, 
> HDFS-9188.002.patch, HDFS-9188.003.patch
>
>
> Currently, HDFS does block corruption tests by directly accessing the files 
> stored on the storage directories, which assumes {{FsDatasetImpl}} is the 
> dataset implementation. However, with works like OZone (HDFS-7240) and 
> HDFS-8679, there will be different FsDataset implementations. 
> So we need a general way to run whitebox tests like corrupting blocks and crc 
> files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9137) DeadLock between DataNode#refreshVolumes and BPOfferService#registrationSucceeded

2015-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14947735#comment-14947735
 ] 

Hadoop QA commented on HDFS-9137:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m 23s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  1s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 53s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 35s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 17s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 23s | The applied patch generated  1 
new checkstyle issues (total was 142, now 142). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 29s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 36s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 31s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 11s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 189m 28s | Tests failed in hadoop-hdfs. |
| | | 235m 50s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestRecoverStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765432/HDFSS-9137.02.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 99e5204 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12838/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12838/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12838/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12838/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12838/console |


This message was automatically generated.

> DeadLock between DataNode#refreshVolumes and 
> BPOfferService#registrationSucceeded 
> --
>
> Key: HDFS-9137
> URL: https://issues.apache.org/jira/browse/HDFS-9137
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0, 2.7.1
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-9137.00.patch, 
> HDFS-9137.01-WithPreservingRootExceptions.patch, HDFSS-9137.02.patch
>
>
> I can see that the code flows between DataNode#refreshVolumes and 
> BPOfferService#registrationSucceeded could cause a deadlock.
> In practice the situation may be rare, since it requires a user to call 
> refreshVolumes while the DN is registering with the NN, but it seems the 
> issue can happen.
>  Reason for the deadlock:
>   1) refreshVolumes is called with the DN lock held, and at the end it also 
> triggers a block report. In the block report call, 
> BPServiceActor#triggerBlockReport calls toString on bpos, which takes the 
> read lock on bpos.
>  DN lock, then bpos lock.
> 2) The BPOfferService#registrationSucceeded call takes the write lock on 
> bpos and calls dn.bpRegistrationSucceeded, which is again a synchronized 
> call on the DN.
> bpos lock, then DN lock.
> So this can clearly create a deadlock.
> I think a simple fix could be to move the triggerBlockReport call outside 
> the DN lock; I feel that call may not really be needed inside the DN lock.
> Thoughts?
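The lock-order inversion described above can be sketched in a minimal single-threaded form (class and method names are illustrative, not the actual HDFS code): path 1 takes the DN monitor then the bpos lock, path 2 takes the bpos lock then the DN monitor; if two threads run the two paths concurrently, each can hold its first lock while waiting for the other's second, which is the classic ABBA deadlock.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockOrderDemo {
    private final ReentrantReadWriteLock bposLock = new ReentrantReadWriteLock();

    // Path 1 (like refreshVolumes): DN monitor first, then bpos read lock.
    synchronized void refreshVolumes() {
        bposLock.readLock().lock();
        try {
            // triggerBlockReport -> bpos.toString() runs under the read lock
        } finally {
            bposLock.readLock().unlock();
        }
    }

    // Path 2 (like registrationSucceeded): bpos write lock first, then DN monitor.
    void registrationSucceeded() {
        bposLock.writeLock().lock();
        try {
            synchronized (this) {
                // dn.bpRegistrationSucceeded runs under both locks
            }
        } finally {
            bposLock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        // Run the two paths sequentially on one thread: no deadlock,
        // which shows the hazard only appears under concurrent interleaving.
        LockOrderDemo demo = new LockOrderDemo();
        demo.refreshVolumes();
        demo.registrationSucceeded();
        System.out.println("done");
    }
}
```

Moving the block-report trigger outside the DN lock, as suggested, removes path 1's nested acquisition and so breaks the cycle.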



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-07 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-9205:
--
Attachment: h9205_20151008.patch

h9205_20151008.patch: treats blocks with read-only replicas but no normal 
replicas as the highest priority for replication.

> Do not schedule corrupt blocks for replication
> --
>
> Key: HDFS-9205
> URL: https://issues.apache.org/jira/browse/HDFS-9205
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: h9205_20151007.patch, h9205_20151007b.patch, 
> h9205_20151008.patch
>
>
> Corrupted blocks, by definition, are blocks that cannot be read. As a 
> consequence, they cannot be replicated.  In UnderReplicatedBlocks, there is a 
> queue for QUEUE_WITH_CORRUPT_BLOCKS, and chooseUnderReplicatedBlocks may 
> choose blocks from it.  It seems that scheduling corrupted blocks for 
> replication wastes resources and potentially slows down replication for the 
> higher-priority blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9209) Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14947719#comment-14947719
 ] 

Hudson commented on HDFS-9209:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8590 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8590/])
HDFS-9209. Erasure coding: Add apache license header in (jing9: rev 
fde729feeb67af18f7d9b1cd156750ec9e8d3304)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatusWithECPolicy.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java
> 
>
> Key: HDFS-9209
> URL: https://issues.apache.org/jira/browse/HDFS-9209
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 3.0.0
>
> Attachments: HDFS-9209.patch
>
>
> Release audit warnings
> https://builds.apache.org/job/PreCommit-HDFS-Build/12834/artifact/patchprocess/patchReleaseAuditProblems.txt
> {noformat}
> !? 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatusWithECPolicy.java
> Lines that start with ? in the release audit  report indicate files that 
> do not have an Apache license header.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-07 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14947718#comment-14947718
 ] 

Tsz Wo Nicholas Sze commented on HDFS-9205:
---

The failure of TestReadOnlySharedStorage is actually related -- the current 
implementation of read-only storage breaks the corrupt block definition. It 
treats blocks with read-only replicas but no normal replicas as corrupt.

> Do not schedule corrupt blocks for replication
> --
>
> Key: HDFS-9205
> URL: https://issues.apache.org/jira/browse/HDFS-9205
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: h9205_20151007.patch, h9205_20151007b.patch
>
>
> Corrupted blocks, by definition, are blocks that cannot be read. As a 
> consequence, they cannot be replicated.  In UnderReplicatedBlocks, there is a 
> queue for QUEUE_WITH_CORRUPT_BLOCKS, and chooseUnderReplicatedBlocks may 
> choose blocks from it.  It seems that scheduling corrupted blocks for 
> replication wastes resources and potentially slows down replication for the 
> higher-priority blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-9205:
--
Summary: Do not schedule corrupt blocks for replication  (was: Do not 
sehedule corrupted blocks for replication)

> Do not schedule corrupt blocks for replication
> --
>
> Key: HDFS-9205
> URL: https://issues.apache.org/jira/browse/HDFS-9205
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: h9205_20151007.patch, h9205_20151007b.patch
>
>
> Corrupted blocks, by definition, are blocks that cannot be read. As a 
> consequence, they cannot be replicated.  In UnderReplicatedBlocks, there is a 
> queue for QUEUE_WITH_CORRUPT_BLOCKS, and chooseUnderReplicatedBlocks may 
> choose blocks from it.  It seems that scheduling corrupted blocks for 
> replication wastes resources and potentially slows down replication for the 
> higher-priority blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9176) TestDirectoryScanner#testThrottling often fails.

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14947714#comment-14947714
 ] 

Hudson commented on HDFS-9176:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2438 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2438/])
HDFS-9176. Fix TestDirectoryScanner#testThrottling often fails. (Daniel (lei: 
rev 6dd47d754cb11297c8710a5c318c034abea7a836)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java


> TestDirectoryScanner#testThrottling often fails.
> 
>
> Key: HDFS-9176
> URL: https://issues.apache.org/jira/browse/HDFS-9176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Yi Liu
>Assignee: Daniel Templeton
>Priority: Minor
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9176.001.patch, HDFS-9176.002.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/12736/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/7732/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9205) Do not sehedule corrupted blocks for replication

2015-10-07 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14947710#comment-14947710
 ] 

Tsz Wo Nicholas Sze commented on HDFS-9205:
---

You are right.  Both hasNext() and next() need to advance the iterators.
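The point being agreed on here can be sketched generically (ChainedIterator is a hypothetical name, not the HDFS class): when an iterator walks a list of sub-queues, both hasNext() and next() must skip past exhausted sub-iterators, otherwise hasNext() can answer based on an empty queue while next() would need to look further ahead.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

public class ChainedIterator<T> implements Iterator<T> {
    private final List<Iterator<T>> iterators;
    private int index = 0;

    public ChainedIterator(List<Iterator<T>> iterators) {
        this.iterators = iterators;
    }

    // Advance past exhausted sub-iterators; shared by hasNext() and next().
    private void advance() {
        while (index < iterators.size() && !iterators.get(index).hasNext()) {
            index++;
        }
    }

    @Override
    public boolean hasNext() {
        advance();
        return index < iterators.size();
    }

    @Override
    public T next() {
        advance();
        if (index >= iterators.size()) {
            throw new NoSuchElementException();
        }
        return iterators.get(index).next();
    }

    public static void main(String[] args) {
        // Empty sub-queues interleaved with non-empty ones.
        List<Iterator<Integer>> parts = Arrays.asList(
                Arrays.<Integer>asList().iterator(),
                Arrays.asList(1).iterator(),
                Arrays.<Integer>asList().iterator(),
                Arrays.asList(2, 3).iterator());
        Iterator<Integer> it = new ChainedIterator<>(parts);
        List<Integer> out = new ArrayList<>();
        while (it.hasNext()) {
            out.add(it.next());
        }
        System.out.println(out); // [1, 2, 3]
    }
}
```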

> Do not sehedule corrupted blocks for replication
> 
>
> Key: HDFS-9205
> URL: https://issues.apache.org/jira/browse/HDFS-9205
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: h9205_20151007.patch, h9205_20151007b.patch
>
>
> Corrupted blocks, by definition, are blocks that cannot be read. As a 
> consequence, they cannot be replicated.  In UnderReplicatedBlocks, there is a 
> queue for QUEUE_WITH_CORRUPT_BLOCKS, and chooseUnderReplicatedBlocks may 
> choose blocks from it.  It seems that scheduling corrupted blocks for 
> replication wastes resources and potentially slows down replication for the 
> higher-priority blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9209) Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java

2015-10-07 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-9209:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

The failed tests and release audit warning are unrelated. I've committed the 
patch to trunk. Thanks for the contribution, [~surendrasingh]!

> Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java
> 
>
> Key: HDFS-9209
> URL: https://issues.apache.org/jira/browse/HDFS-9209
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 3.0.0
>
> Attachments: HDFS-9209.patch
>
>
> Release audit warnings
> https://builds.apache.org/job/PreCommit-HDFS-Build/12834/artifact/patchprocess/patchReleaseAuditProblems.txt
> {noformat}
> !? 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatusWithECPolicy.java
> Lines that start with ? in the release audit  report indicate files that 
> do not have an Apache license header.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9211) branch-2 build broken by incorrect version in hadoop-hdfs-native-client/pom.xml

2015-10-07 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated HDFS-9211:
-
Attachment: HDFS-9211-branch-2.001.patch

The error is as follows:
{noformat}
The project org.apache.hadoop:hadoop-hdfs-native-client:3.0.0-SNAPSHOT 
(.../hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml) has 1 error
{noformat}

> branch-2 build broken by incorrect version in 
> hadoop-hdfs-native-client/pom.xml 
> 
>
> Key: HDFS-9211
> URL: https://issues.apache.org/jira/browse/HDFS-9211
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: HDFS-9211-branch-2.001.patch
>
>
> When HDFS-9170 was backported to branch-2, the version in 
> hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9211) branch-2 build broken by incorrect version in hadoop-hdfs-native-client/pom.xml

2015-10-07 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated HDFS-9211:
-
Status: Patch Available  (was: Open)

> branch-2 build broken by incorrect version in 
> hadoop-hdfs-native-client/pom.xml 
> 
>
> Key: HDFS-9211
> URL: https://issues.apache.org/jira/browse/HDFS-9211
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: HDFS-9211-branch-2.001.patch
>
>
> When HDFS-9170 was backported to branch-2, the version in 
> hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9176) TestDirectoryScanner#testThrottling often fails.

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14947692#comment-14947692
 ] 

Hudson commented on HDFS-9176:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #503 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/503/])
HDFS-9176. Fix TestDirectoryScanner#testThrottling often fails. (Daniel (lei: 
rev 6dd47d754cb11297c8710a5c318c034abea7a836)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java


> TestDirectoryScanner#testThrottling often fails.
> 
>
> Key: HDFS-9176
> URL: https://issues.apache.org/jira/browse/HDFS-9176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Yi Liu
>Assignee: Daniel Templeton
>Priority: Minor
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9176.001.patch, HDFS-9176.002.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/12736/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/7732/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9170) Move libhdfs / fuse-dfs / libwebhdfs to hdfs-client

2015-10-07 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14947690#comment-14947690
 ] 

Eric Payne commented on HDFS-9170:
--

[~wheat9], the backport of this patch to branch-2 broke the build due to the 
version in hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml. Please see 
HDFS-9211.

> Move libhdfs / fuse-dfs / libwebhdfs to hdfs-client
> ---
>
> Key: HDFS-9170
> URL: https://issues.apache.org/jira/browse/HDFS-9170
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.8.0
>
> Attachments: HDFS-9170.000.patch, HDFS-9170.001.patch, 
> HDFS-9170.002.patch, HDFS-9170.003.patch, HDFS-9170.004.patch
>
>
> After HDFS-6200 the Java implementation of hdfs-client has be moved to a 
> separate hadoop-hdfs-client module.
> libhdfs, fuse-dfs and libwebhdfs still reside in the hadoop-hdfs module. 
> Ideally these modules should reside in the hadoop-hdfs-client. However, to 
> write unit tests for these components, it is often necessary to run 
> MiniDFSCluster which resides in the hadoop-hdfs module.
> This jira is to discuss how these native modules should layout after 
> HDFS-6200.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-9211) branch-2 build broken by incorrect version in hadoop-hdfs-native-client/pom.xml

2015-10-07 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne reassigned HDFS-9211:


Assignee: Eric Payne

> branch-2 build broken by incorrect version in 
> hadoop-hdfs-native-client/pom.xml 
> 
>
> Key: HDFS-9211
> URL: https://issues.apache.org/jira/browse/HDFS-9211
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Payne
>Assignee: Eric Payne
>
> When HDFS-9170 was backported to branch-2, the version in 
> hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9211) branch-2 build broken by incorrect version in hadoop-hdfs-native-client/pom.xml

2015-10-07 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated HDFS-9211:
-
Summary: branch-2 build broken by incorrect version in 
hadoop-hdfs-native-client/pom.xml   (was: branch-2 broken by incorrect version 
in hadoop-hdfs-native-client/pom.xml )

> branch-2 build broken by incorrect version in 
> hadoop-hdfs-native-client/pom.xml 
> 
>
> Key: HDFS-9211
> URL: https://issues.apache.org/jira/browse/HDFS-9211
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Payne
>
> When HDFS-9170 was backported to branch-2, the version in 
> hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9211) branch-2 broken by incorrect version in hadoop-hdfs-native-client/pom.xml

2015-10-07 Thread Eric Payne (JIRA)
Eric Payne created HDFS-9211:


 Summary: branch-2 broken by incorrect version in 
hadoop-hdfs-native-client/pom.xml 
 Key: HDFS-9211
 URL: https://issues.apache.org/jira/browse/HDFS-9211
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Eric Payne


When HDFS-9170 was backported to branch-2, the version in 
hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9145) Tracking methods that hold FSNamesytemLock for too long

2015-10-07 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9145:

Attachment: HDFS-9145.001.patch

The v1 patch implements the write lock with re-enter support. The read lock was 
removed from this patch because its holding time would need to be thread-aware.

> Tracking methods that hold FSNamesytemLock for too long
> ---
>
> Key: HDFS-9145
> URL: https://issues.apache.org/jira/browse/HDFS-9145
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Mingliang Liu
> Attachments: HDFS-9145.000.patch, HDFS-9145.001.patch
>
>
> It will be helpful that if we can have a way to track (or at least log a msg) 
> if some operation is holding the FSNamesystem lock for a long time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

