[jira] [Commented] (HDFS-9187) Fix null pointer error in Globber when FS was not constructed via FileSystem#createFileSystem

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956190#comment-14956190
 ] 

Hudson commented on HDFS-9187:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #492 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/492/])
HDFS-9187. Fix null pointer error in Globber when FS was not constructed 
(cmccabe: rev d286032b715192ddbdd770b07d623fdc396810e2)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Globber.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/tracing/TestTraceUtils.java
Add HDFS-9187 to CHANGES.txt (cmccabe: rev 
40cac59248f17c59fc819f4145cdeac9db309626)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Fix null pointer error in Globber when FS was not constructed via 
> FileSystem#createFileSystem
> -
>
> Key: HDFS-9187
> URL: https://issues.apache.org/jira/browse/HDFS-9187
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tracing
>Affects Versions: 2.8.0
>Reporter: stack
>Assignee: Colin Patrick McCabe
> Fix For: 2.8.0
>
> Attachments: HDFS-9187.001.patch, HDFS-9187.002.patch, 
> HDFS-9187.003.patch
>
>
> Saw this where an HBase that had not been updated to htrace-4.0.1 was trying 
> to start:
> {code}
> Oct 1, 5:12:11.861 AM FATAL org.apache.hadoop.hbase.master.HMaster
> Failed to become active master
> java.lang.NullPointerException
> at org.apache.hadoop.fs.Globber.glob(Globber.java:145)
> at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1634)
> at org.apache.hadoop.hbase.util.FSUtils.getTableDirs(FSUtils.java:1372)
> at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getAll(FSTableDescriptors.java:206)
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:619)
> at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:169)
> at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1481)
> at java.lang.Thread.run(Thread.java:745)
> {code}
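For context, a minimal illustration of how this kind of NPE can arise (an assumed shape of the problem, inferred from the summary; the class and field names below are invented, not the actual FileSystem/Globber code): a field that only the factory path initializes stays null when the object is constructed directly.

{code}
// Hypothetical illustration only; not the Hadoop code.
class DemoFileSystem {
  Object tracer;                        // set only on the factory path

  static DemoFileSystem createFileSystem() {
    DemoFileSystem fs = new DemoFileSystem();
    fs.tracer = new Object();           // initialized here only
    return fs;
  }
}

class DemoGlobber {
  static void glob(DemoFileSystem fs) {
    fs.tracer.toString();               // NPE when fs was constructed directly
  }

  public static void main(String[] args) {
    glob(new DemoFileSystem());         // bypasses createFileSystem(): NPE
  }
}
{code}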



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8647) Abstract BlockManager's rack policy into BlockPlacementPolicy

2015-10-13 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956200#comment-14956200
 ] 

Ming Ma commented on HDFS-8647:
---

Thanks [~brahmareddy]!

Most of the change looks good. It seems this will also fix HDFS-9083. cc: 
[~shahrs87].

There are some questions about the striped erasure coding block placement 
abstraction.

* The existing {{blockHasEnoughRacksStriped}} compares {{getRealDataBlockNum}} 
(# of data blocks) with the number of racks. But after the refactoring, it 
compares {{getRealTotalBlockNum}} (# of total blocks) with the number of racks.
* It might be easier not to pass {{isStriped}} to {{verifyBlockPlacement}}. 
Instead, have {{BlockPlacementPolicyRackFaultTolerant}} implement 
{{verifyBlockPlacement}}, using {{numberOfReplicas}} as the {{minRacks}} (see 
the sketch below). This also makes the patch more applicable to branch-2, which 
doesn't have striped EC.
* The current patch doesn't apply to branch-2. If you agree with the above 
changes, could you check whether it applies to branch-2? If it doesn't apply, 
you will need to provide a separate patch for branch-2 later.
* A general question about striped EC. It uses "# of racks >= # of data blocks" 
to check whether a given block has enough racks. But what if "# of racks for 
the whole cluster < # of data blocks"? Say we use RS(6,3) and the cluster has 5 
racks. The write operation will spread the 9 blocks (6 data + 3 parity) across 
the 5 racks and succeed, but will the block then fail the "enough racks" check 
later in the BlockManager? That has nothing to do with the refactoring work 
here; I just want to bring it up in case others can chime in.
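A minimal standalone sketch of the second suggestion above, assuming the 
{{verifyBlockPlacement(locs, numberOfReplicas)}} shape discussed in this 
thread; the class and method here are illustrative stand-ins, not the actual 
{{BlockPlacementPolicy}} API or the patch:

{code}
// Hedged sketch: a rack-fault-tolerant placement check that treats the
// replica count as the minimum rack count. Illustrative only.
import java.util.HashSet;
import java.util.Set;

public class RackFaultTolerantPlacementCheck {
  /** Placement is OK when the replicas span at least numberOfReplicas racks. */
  static boolean verifyBlockPlacement(String[] replicaRacks,
      int numberOfReplicas) {
    Set<String> racks = new HashSet<>();
    for (String rack : replicaRacks) {
      racks.add(rack);                 // count distinct racks
    }
    int minRacks = numberOfReplicas;   // per the suggestion above
    return racks.size() >= minRacks;
  }

  public static void main(String[] args) {
    // RS(6,3) example from the last bullet: 9 blocks on 5 distinct racks.
    String[] racks = {"/r1", "/r1", "/r2", "/r2", "/r3", "/r3", "/r4",
        "/r4", "/r5"};
    System.out.println(verifyBlockPlacement(racks, 9));  // false: 5 < 9
  }
}
{code}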


> Abstract BlockManager's rack policy into BlockPlacementPolicy
> -
>
> Key: HDFS-8647
> URL: https://issues.apache.org/jira/browse/HDFS-8647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-8647-001.patch, HDFS-8647-002.patch, 
> HDFS-8647-003.patch, HDFS-8647-004.patch, HDFS-8647-004.patch, 
> HDFS-8647-005.patch
>
>
> Sometimes we want to have namenode use alternative block placement policy 
> such as upgrade domains in HDFS-7541.
> BlockManager has built-in assumption about rack policy in functions such as 
> useDelHint, blockHasEnoughRacks. That means when we have new block placement 
> policy, we need to modify BlockManager to account for the new policy. Ideally 
> BlockManager should ask BlockPlacementPolicy object instead. That will allow 
> us to provide new BlockPlacementPolicy without changing BlockManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9220) Reading small file (< 512 bytes) that is open for append fails due to incorrect checksum

2015-10-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956158#comment-14956158
 ] 

Hadoop QA commented on HDFS-9220:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  20m 23s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   8m 56s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 30s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 26s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 36s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 36s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 48s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 40s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  66m  6s | Tests failed in hadoop-hdfs. |
| | | 117m 40s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestFSOutputSummer |
|   | hadoop.hdfs.TestSnapshotCommands |
|   | hadoop.hdfs.TestFileCreationClient |
|   | hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer |
|   | hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
|   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.server.datanode.TestReadOnlySharedStorage |
|   | hadoop.hdfs.TestDFSPermission |
|   | hadoop.hdfs.TestMiniDFSCluster |
|   | hadoop.hdfs.TestSetrepIncreasing |
|   | hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
|   | hadoop.hdfs.TestBlockReaderLocal |
| Timed out tests | org.apache.hadoop.hdfs.TestParallelShortCircuitRead |
|   | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
|   | org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage |
|   | org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | org.apache.hadoop.hdfs.server.balancer.TestBalancer |
|   | org.apache.hadoop.hdfs.TestDatanodeReport |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766428/HDFS-9220.000.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 40cac59 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12965/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12965/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12965/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12965/console |


This message was automatically generated.

> Reading small file (< 512 bytes) that is open for append fails due to 
> incorrect checksum
> 
>
> Key: HDFS-9220
> URL: https://issues.apache.org/jira/browse/HDFS-9220
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Bogdan Raducanu
>Assignee: Jing Zhao
>Priority: Blocker
> Attachments: HDFS-9220.000.patch, HDFS-9220.001.patch, test2.java
>
>
> Exception:
> 2015-10-09 14:59:40 WARN  DFSClient:1150 - fetchBlockByteRange(). Got a 
> checksum exception for /tmp/file0.05355529331575182 at 
> BP-353681639-10.10.10.10-1437493596883:blk_1075692769_9244882:0 from 
> DatanodeInfoWithStorage[10.10.10.10]:5001
> All 3 replicas cause this exception and the read fails entirely with:
> BlockMissingException: Could not obtain block: 
> BP-353681639-10.10.10.10-1437493596883:blk_1075692769_9244882 
> file=/tmp/file0.05355529331575182
> Code to reproduce is attached.
> Does not happen in 2.7.0.
> Data is read correctly if checksum verification is disabled.
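The attached test2.java is not part of this digest, so here is a hedged sketch 
of the scenario the description implies (write fewer than 512 bytes, reopen the 
file for append, then read); the path and sizes are illustrative, and this 
assumes {{fs.defaultFS}} points at an HDFS cluster:

{code}
// Hypothetical repro sketch based only on the issue description above.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SmallAppendReadRepro {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path p = new Path("/tmp/small-file");

    try (FSDataOutputStream out = fs.create(p)) {
      out.write(new byte[100]);        // < 512 bytes: one partial checksum chunk
    }

    FSDataOutputStream appender = fs.append(p);  // file is now open for append
    try (FSDataInputStream in = fs.open(p)) {
      byte[] buf = new byte[100];
      in.readFully(0, buf);            // reported to fail with a checksum error on 2.7.1
    } finally {
      appender.close();
    }
  }
}
{code}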



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9187) Fix null pointer error in Globber when FS was not constructed via FileSystem#createFileSystem

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956163#comment-14956163
 ] 

Hudson commented on HDFS-9187:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1261 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1261/])
HDFS-9187. Fix null pointer error in Globber when FS was not constructed 
(cmccabe: rev d286032b715192ddbdd770b07d623fdc396810e2)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/tracing/TestTraceUtils.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Globber.java
Add HDFS-9187 to CHANGES.txt (cmccabe: rev 
40cac59248f17c59fc819f4145cdeac9db309626)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Fix null pointer error in Globber when FS was not constructed via 
> FileSystem#createFileSystem
> -
>
> Key: HDFS-9187
> URL: https://issues.apache.org/jira/browse/HDFS-9187
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tracing
>Affects Versions: 2.8.0
>Reporter: stack
>Assignee: Colin Patrick McCabe
> Fix For: 2.8.0
>
> Attachments: HDFS-9187.001.patch, HDFS-9187.002.patch, 
> HDFS-9187.003.patch
>
>
> Saw this where an HBase that had not been updated to htrace-4.0.1 was trying 
> to start:
> {code}
> Oct 1, 5:12:11.861 AM FATAL org.apache.hadoop.hbase.master.HMaster
> Failed to become active master
> java.lang.NullPointerException
> at org.apache.hadoop.fs.Globber.glob(Globber.java:145)
> at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1634)
> at org.apache.hadoop.hbase.util.FSUtils.getTableDirs(FSUtils.java:1372)
> at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getAll(FSTableDescriptors.java:206)
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:619)
> at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:169)
> at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1481)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9187) Fix null pointer error in Globber when FS was not constructed via FileSystem#createFileSystem

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956178#comment-14956178
 ] 

Hudson commented on HDFS-9187:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2430 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2430/])
HDFS-9187. Fix null pointer error in Globber when FS was not constructed 
(cmccabe: rev d286032b715192ddbdd770b07d623fdc396810e2)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Globber.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/tracing/TestTraceUtils.java
Add HDFS-9187 to CHANGES.txt (cmccabe: rev 
40cac59248f17c59fc819f4145cdeac9db309626)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Fix null pointer error in Globber when FS was not constructed via 
> FileSystem#createFileSystem
> -
>
> Key: HDFS-9187
> URL: https://issues.apache.org/jira/browse/HDFS-9187
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tracing
>Affects Versions: 2.8.0
>Reporter: stack
>Assignee: Colin Patrick McCabe
> Fix For: 2.8.0
>
> Attachments: HDFS-9187.001.patch, HDFS-9187.002.patch, 
> HDFS-9187.003.patch
>
>
> Saw this where an HBase that had not been updated to htrace-4.0.1 was trying 
> to start:
> {code}
> Oct 1, 5:12:11.861 AM FATAL org.apache.hadoop.hbase.master.HMaster
> Failed to become active master
> java.lang.NullPointerException
> at org.apache.hadoop.fs.Globber.glob(Globber.java:145)
> at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1634)
> at org.apache.hadoop.hbase.util.FSUtils.getTableDirs(FSUtils.java:1372)
> at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getAll(FSTableDescriptors.java:206)
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:619)
> at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:169)
> at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1481)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6101) TestReplaceDatanodeOnFailure fails occasionally

2015-10-13 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-6101:
--
Attachment: HDFS-6101.002.patch

Rev 2 removes the empty line that contained whitespace. This patch only fixes 
the flaky test, so the failed tests are unrelated.

> TestReplaceDatanodeOnFailure fails occasionally
> ---
>
> Key: HDFS-6101
> URL: https://issues.apache.org/jira/browse/HDFS-6101
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-6101.001.patch, HDFS-6101.002.patch, 
> TestReplaceDatanodeOnFailure.log
>
>
> Exception details in a comment below.
> The failure repros on both OS X and Linux if I run the test ~10 times in a 
> loop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6101) TestReplaceDatanodeOnFailure fails occasionally

2015-10-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956247#comment-14956247
 ] 

Hadoop QA commented on HDFS-6101:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |   9m  5s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   9m 10s | There were no new javac warning 
messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 38s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 39s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 38s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 53s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   1m 14s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  56m  8s | Tests failed in hadoop-hdfs. |
| | |  82m 51s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000 |
|   | hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766457/HDFS-6101.002.patch |
| Optional Tests | javac unit findbugs checkstyle |
| git revision | trunk / 40cac59 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12969/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12969/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12969/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12969/console |


This message was automatically generated.

> TestReplaceDatanodeOnFailure fails occasionally
> ---
>
> Key: HDFS-6101
> URL: https://issues.apache.org/jira/browse/HDFS-6101
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-6101.001.patch, HDFS-6101.002.patch, 
> TestReplaceDatanodeOnFailure.log
>
>
> Exception details in a comment below.
> The failure repros on both OS X and Linux if I run the test ~10 times in a 
> loop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9070) Allow fsck display pending replica location information for being-written blocks

2015-10-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956183#comment-14956183
 ] 

Hadoop QA commented on HDFS-9070:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  18m 21s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   8m 58s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 29s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 25s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 33s | The applied patch generated  3 
new checkstyle issues (total was 118, now 116). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 35s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 31s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  50m 46s | Tests failed in hadoop-hdfs. |
| | |  99m 51s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.ha.TestHAAppend |
| Timed out tests | 
org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager |
|   | org.apache.hadoop.hdfs.server.blockmanagement.TestOverReplicatedBlocks |
|   | org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | org.apache.hadoop.hdfs.server.blockmanagement.TestPendingReplication |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766435/HDFS-9070-trunk.05.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 40cac59 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12967/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12967/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12967/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12967/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12967/console |


This message was automatically generated.

> Allow fsck display pending replica location information for being-written 
> blocks
> 
>
> Key: HDFS-9070
> URL: https://issues.apache.org/jira/browse/HDFS-9070
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: GAO Rui
>Assignee: GAO Rui
> Attachments: HDFS-9070--HDFS-7285.00.patch, 
> HDFS-9070-HDFS-7285.00.patch, HDFS-9070-HDFS-7285.01.patch, 
> HDFS-9070-HDFS-7285.02.patch, HDFS-9070-trunk.03.patch, 
> HDFS-9070-trunk.04.patch, HDFS-9070-trunk.05.patch
>
>
> When an EC file is being written, it can be helpful to allow fsck to display 
> datanode information for the block group of the EC file being written. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9167) Update pom.xml in other modules to depend on hdfs-client instead of hdfs

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956189#comment-14956189
 ] 

Hudson commented on HDFS-9167:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #492 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/492/])
HDFS-9167. Update pom.xml in other modules to depend on hdfs-client (wheat9: 
rev da8441d0fe9149bb845dcf701fdc86e786b6afba)
* hadoop-tools/hadoop-extras/pom.xml
* hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml
* hadoop-tools/hadoop-streaming/pom.xml
* hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java
* hadoop-client/pom.xml
* hadoop-tools/hadoop-distcp/pom.xml
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/pom.xml
* hadoop-tools/hadoop-archives/pom.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-tools/hadoop-gridmix/pom.xml
* hadoop-tools/hadoop-datajoin/pom.xml
* hadoop-dist/pom.xml
* hadoop-tools/hadoop-rumen/pom.xml
* hadoop-tools/hadoop-ant/pom.xml
* hadoop-mapreduce-project/hadoop-mapreduce-examples/pom.xml
* hadoop-tools/hadoop-ant/src/main/java/org/apache/hadoop/ant/DfsTask.java


> Update pom.xml in other modules to depend on hdfs-client instead of hdfs
> 
>
> Key: HDFS-9167
> URL: https://issues.apache.org/jira/browse/HDFS-9167
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Haohui Mai
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9167.000.patch, HDFS-9167.001.patch, 
> HDFS-9167.002.patch, test-patch.Yetus.002.log
>
>
> Now that the client implementation has been moved to hadoop-hdfs-client, we 
> should update the POMs of the other modules in Hadoop to depend on 
> hdfs-client instead of hdfs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9157) [OEV and OIV] : Unnecessary parsing for mandatory arguements if "-h" option is specified as the only option

2015-10-13 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-9157:

Attachment: HDFS-9157_5.patch

Patch to fix whitespace.

Test failures are not related to this patch.

Thanks

> [OEV and OIV] : Unnecessary parsing for mandatory arguements if "-h" option 
> is specified as the only option
> ---
>
> Key: HDFS-9157
> URL: https://issues.apache.org/jira/browse/HDFS-9157
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9157_1.patch, HDFS-9157_2.patch, HDFS-9157_3.patch, 
> HDFS-9157_4.patch, HDFS-9157_5.patch
>
>
> In both tools, if "-h" is specified as the only option, an error is thrown 
> because the required input and output options are missing.
> {noformat}
> master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs oev -h
> Error parsing command-line options: Missing required options: o, i
> Usage: bin/hdfs oev [OPTIONS] -i INPUT_FILE -o OUTPUT_FILE
> {noformat}
> In the code, the parsing happens before the "-h" option is checked.
> We can add code to return right after an initial check, e.g.:
> {code}
> if (argv.length == 1 && "-h".equals(argv[0])) {
>   // the only argument is -h: print usage and exit before option parsing
>   printHelp();
>   return 0;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9239) DataNode Lifeline Protocol: an alternative protocol for reporting DataNode liveness

2015-10-13 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-9239:
---

 Summary: DataNode Lifeline Protocol: an alternative protocol for 
reporting DataNode liveness
 Key: HDFS-9239
 URL: https://issues.apache.org/jira/browse/HDFS-9239
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, namenode
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: DataNode-Lifeline-Protocol.pdf

This issue proposes the introduction of a new feature: the DataNode Lifeline 
Protocol.  This is an RPC protocol responsible for reporting liveness and 
basic health information about a DataNode to a NameNode.  Compared to the 
existing heartbeat messages, it is lightweight and not prone to the resource 
contention problems that can currently harm accurate tracking of DataNode 
liveness.  The attached design document contains more details.
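As a rough illustration only (the actual protocol is specified in the attached 
design document; every name below is invented for this sketch):

{code}
// Speculative sketch of a lifeline RPC surface. The contrast with full
// heartbeats is the point: a tiny liveness report that carries no block
// reports and triggers no command processing.
import java.io.IOException;

interface DatanodeLifelineProtocolSketch {
  /** Lightweight liveness/health report from a DataNode to the NameNode. */
  void sendLifeline(String datanodeUuid, long capacityBytes, long usedBytes,
      int activeTransferThreads) throws IOException;
}
{code}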



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9239) DataNode Lifeline Protocol: an alternative protocol for reporting DataNode liveness

2015-10-13 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-9239:

Attachment: DataNode-Lifeline-Protocol.pdf

> DataNode Lifeline Protocol: an alternative protocol for reporting DataNode 
> liveness
> ---
>
> Key: HDFS-9239
> URL: https://issues.apache.org/jira/browse/HDFS-9239
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: DataNode-Lifeline-Protocol.pdf
>
>
> This issue proposes the introduction of a new feature: the DataNode Lifeline 
> Protocol.  This is an RPC protocol responsible for reporting liveness 
> and basic health information about a DataNode to a NameNode.  Compared to the 
> existing heartbeat messages, it is lightweight and not prone to the resource 
> contention problems that can currently harm accurate tracking of DataNode 
> liveness.  The attached design document contains more details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8630) WebHDFS : Support get/setStoragePolicy

2015-10-13 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956287#comment-14956287
 ] 

Surendra Singh Lilhore commented on HDFS-8630:
--

Failed tests are unrelated. Please review.

> WebHDFS : Support get/setStoragePolicy 
> ---
>
> Key: HDFS-8630
> URL: https://issues.apache.org/jira/browse/HDFS-8630
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-8630.001.patch, HDFS-8630.002.patch, 
> HDFS-8630.003.patch, HDFS-8630.004.patch, HDFS-8630.patch
>
>
> User can set and get the storage policy from the filesystem object. The same 
> operation can be allowed through the REST API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9231) fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot

2015-10-13 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9231:

Status: Patch Available  (was: Open)

> fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot
> ---
>
> Key: HDFS-9231
> URL: https://issues.apache.org/jira/browse/HDFS-9231
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9231.001.patch
>
>
> For snapshot files, fsck shows corrupt blocks with the original file dir 
> instead of the snapshot dir.
> This can be confusing: even after the original file is deleted, a new 
> fsck run will still show that file as corrupted, although what is actually 
> corrupted is the snapshot. 
> This is true even when the -includeSnapshots option is given.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9157) [OEV and OIV] : Unnecessary parsing for mandatory arguements if "-h" option is specified as the only option

2015-10-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956308#comment-14956308
 ] 

Hadoop QA commented on HDFS-9157:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  20m 11s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   8m 46s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 17s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 25s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 36s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 39s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 38s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 39s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 43s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  66m 52s | Tests failed in hadoop-hdfs. |
| | | 117m 52s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestReplaceDatanodeOnFailure |
|   | hadoop.hdfs.server.datanode.TestBpServiceActorScheduler |
|   | hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer |
|   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.server.namenode.TestSecondaryNameNodeUpgrade |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766460/HDFS-9157_5.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 40cac59 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12970/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12970/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12970/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12970/console |


This message was automatically generated.

> [OEV and OIV] : Unnecessary parsing for mandatory arguements if "-h" option 
> is specified as the only option
> ---
>
> Key: HDFS-9157
> URL: https://issues.apache.org/jira/browse/HDFS-9157
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9157_1.patch, HDFS-9157_2.patch, HDFS-9157_3.patch, 
> HDFS-9157_4.patch, HDFS-9157_5.patch
>
>
> In both tools, if "-h" is specified as the only option, an error is thrown 
> because the required input and output options are missing.
> {noformat}
> master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs oev -h
> Error parsing command-line options: Missing required options: o, i
> Usage: bin/hdfs oev [OPTIONS] -i INPUT_FILE -o OUTPUT_FILE
> {noformat}
> In the code, the parsing happens before the "-h" option is checked.
> We can add code to return right after an initial check, e.g.:
> {code}
> if (argv.length == 1 && "-h".equals(argv[0])) {
>   // the only argument is -h: print usage and exit before option parsing
>   printHelp();
>   return 0;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9231) fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot

2015-10-13 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9231:

Attachment: HDFS-9231.001.patch

> fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot
> ---
>
> Key: HDFS-9231
> URL: https://issues.apache.org/jira/browse/HDFS-9231
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9231.001.patch
>
>
> For snapshot files, fsck shows corrupt blocks with the original file dir 
> instead of the snapshot dir.
> This can be confusing: even after the original file is deleted, a new 
> fsck run will still show that file as corrupted, although what is actually 
> corrupted is the snapshot. 
> This is true even when the -includeSnapshots option is given.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6101) TestReplaceDatanodeOnFailure fails occasionally

2015-10-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956166#comment-14956166
 ] 

Hadoop QA commented on HDFS-6101:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |   7m 51s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   8m  1s | There were no new javac warning 
messages. |
| {color:green}+1{color} | release audit |   0m 20s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 24s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 28s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 29s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   1m  2s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  51m 25s | Tests failed in hadoop-hdfs. |
| | |  74m 38s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.datanode.TestFsDatasetCache |
|   | hadoop.hdfs.web.TestWebHDFS |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestScrLazyPersistFiles |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766437/HDFS-6101.001.patch |
| Optional Tests | javac unit findbugs checkstyle |
| git revision | trunk / 40cac59 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12966/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12966/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12966/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12966/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12966/console |


This message was automatically generated.

> TestReplaceDatanodeOnFailure fails occasionally
> ---
>
> Key: HDFS-6101
> URL: https://issues.apache.org/jira/browse/HDFS-6101
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-6101.001.patch, TestReplaceDatanodeOnFailure.log
>
>
> Exception details in a comment below.
> The failure repros on both OS X and Linux if I run the test ~10 times in a 
> loop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9220) Reading small file (< 512 bytes) that is open for append fails due to incorrect checksum

2015-10-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956222#comment-14956222
 ] 

Hadoop QA commented on HDFS-9220:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  24m 15s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |  10m 26s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  13m 38s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 31s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 58s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 58s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 48s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 17s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   5m  6s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  86m 25s | Tests failed in hadoop-hdfs. |
| | | 148m 26s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer |
|   | hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup |
|   | hadoop.hdfs.TestBlockReaderLocal |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
|   | hadoop.hdfs.server.datanode.TestReadOnlySharedStorage |
|   | hadoop.hdfs.shortcircuit.TestShortCircuitCache |
|   | hadoop.hdfs.TestSetrepIncreasing |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
|   | hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer |
|   | hadoop.hdfs.TestFSOutputSummer |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
|   | hadoop.hdfs.server.namenode.TestSecurityTokenEditLog |
| Timed out tests | org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage |
|   | org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | org.apache.hadoop.hdfs.server.balancer.TestBalancer |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766442/HDFS-9220.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 40cac59 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12968/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12968/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12968/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12968/console |


This message was automatically generated.

> Reading small file (< 512 bytes) that is open for append fails due to 
> incorrect checksum
> 
>
> Key: HDFS-9220
> URL: https://issues.apache.org/jira/browse/HDFS-9220
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Bogdan Raducanu
>Assignee: Jing Zhao
>Priority: Blocker
> Attachments: HDFS-9220.000.patch, HDFS-9220.001.patch, test2.java
>
>
> Exception:
> 2015-10-09 14:59:40 WARN  DFSClient:1150 - fetchBlockByteRange(). Got a 
> checksum exception for /tmp/file0.05355529331575182 at 
> BP-353681639-10.10.10.10-1437493596883:blk_1075692769_9244882:0 from 
> DatanodeInfoWithStorage[10.10.10.10]:5001
> All 3 replicas cause this exception and the read fails entirely with:
> BlockMissingException: Could not obtain block: 
> BP-353681639-10.10.10.10-1437493596883:blk_1075692769_9244882 
> file=/tmp/file0.05355529331575182
> Code to reproduce is attached.
> Does not happen in 2.7.0.
> Data is read correctly if checksum verification is disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9234) WebHdfs : getContentSummary() should give quota for storage types

2015-10-13 Thread Surendra Singh Lilhore (JIRA)
Surendra Singh Lilhore created HDFS-9234:


 Summary: WebHdfs : getContentSummary() should give quota for 
storage types
 Key: HDFS-9234
 URL: https://issues.apache.org/jira/browse/HDFS-9234
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: webhdfs
Affects Versions: 2.7.1
Reporter: Surendra Singh Lilhore
Assignee: Surendra Singh Lilhore


Currently the WebHDFS API for ContentSummary gives only the name quota and 
space quota; it does not give the storage-type quotas.
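For illustration, the Java-side {{ContentSummary}} already carries 
per-storage-type quota (assuming the 2.7+ API), while the WebHDFS JSON 
response does not expose it; the path is illustrative and {{fs.defaultFS}} is 
assumed to point at HDFS:

{code}
// Sketch of the gap described above.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.ContentSummary;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.StorageType;

public class TypeQuotaGap {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    ContentSummary cs = fs.getContentSummary(new Path("/user/test"));
    System.out.println(cs.getQuota());                     // name quota: in the WebHDFS response
    System.out.println(cs.getSpaceQuota());                // space quota: in the WebHDFS response
    System.out.println(cs.getTypeQuota(StorageType.SSD));  // absent from the WebHDFS response
  }
}
{code}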



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8631) WebHDFS : Support list/setQuota

2015-10-13 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-8631:
-
Attachment: HDFS-8631-001.patch

Attached patch...

Added two new WebHDFS APIs for setting quota:
{code}
public void setQuota(Path path, long namespaceQuota, long storagespaceQuota)
public void setQuotaByStorageType(Path path, StorageType type, long quota)
{code}
For getting quota, the {{getContentSummary()}} API is already available.

Please review...
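A hedged usage sketch of the proposed methods (signatures as quoted above; the 
host, port, path, and quota values are illustrative, and the methods remain a 
proposal until the patch is committed):

{code}
// Usage sketch for the proposed WebHDFS setQuota APIs; hypothetical
// until reviewed and committed.
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.StorageType;
import org.apache.hadoop.hdfs.web.WebHdfsFileSystem;

public class SetQuotaSketch {
  public static void main(String[] args) throws Exception {
    WebHdfsFileSystem fs = (WebHdfsFileSystem) FileSystem.get(
        URI.create("webhdfs://nn-host:50070"), new Configuration());
    // 100k names and 10 GB of storage space for /user/test
    fs.setQuota(new Path("/user/test"), 100000L, 10L * 1024 * 1024 * 1024);
    // additionally cap SSD usage at 1 GB
    fs.setQuotaByStorageType(new Path("/user/test"), StorageType.SSD,
        1024L * 1024 * 1024);
  }
}
{code}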

> WebHDFS : Support list/setQuota
> ---
>
> Key: HDFS-8631
> URL: https://issues.apache.org/jira/browse/HDFS-8631
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-8631-001.patch
>
>
> User is able to do quota management from the filesystem object. The same 
> operation can be allowed through the REST API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8631) WebHDFS : Support list/setQuota

2015-10-13 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-8631:
-
Status: Patch Available  (was: Open)

> WebHDFS : Support list/setQuota
> ---
>
> Key: HDFS-8631
> URL: https://issues.apache.org/jira/browse/HDFS-8631
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-8631-001.patch
>
>
> User is able to do quota management from the filesystem object. The same 
> operation can be allowed through the REST API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9224) TestFileTruncate fails intermittently with BindException

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954613#comment-14954613
 ] 

Hudson commented on HDFS-9224:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #528 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/528/])
HDFS-9224. TestFileTruncate fails intermittently with BindException 
(vinayakumarb: rev 69b025dbbaa44395e49d1c04b90e1f65f0fc1132)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java


> TestFileTruncate fails intermittently with BindException
> 
>
> Key: HDFS-9224
> URL: https://issues.apache.org/jira/browse/HDFS-9224
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: HDFS-9224-002.patch, HDFS-9224.patch
>
>
> https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/478/#showFailuresLink
> {noformat}
> java.net.BindException: Problem binding to [localhost:8020] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:414)
>   at sun.nio.ch.Net.bind(Net.java:406)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:469)
>   at org.apache.hadoop.ipc.Server$Listener.(Server.java:646)
>   at org.apache.hadoop.ipc.Server.(Server.java:2399)
>   at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:358)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:692)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:630)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:833)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:812)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1505)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1248)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1017)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:889)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:821)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:480)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:439)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.setUp(TestFileTruncate.java:107)
> {noformat}
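For context, the usual remedy for this class of failure (a hedged sketch of 
the general technique, not necessarily what the attached patch does) is to let 
MiniDFSCluster bind an ephemeral port instead of a fixed one such as 8020:

{code}
// Sketch: port 0 asks the OS for a free ephemeral port, so concurrent
// test runs on one host cannot collide on a fixed NameNode RPC port.
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class EphemeralPortClusterSketch {
  public static void main(String[] args) throws Exception {
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(new HdfsConfiguration())
        .nameNodePort(0)                // ephemeral port, avoids BindException
        .numDataNodes(1)
        .build();
    try {
      System.out.println("NN RPC port: " + cluster.getNameNodePort());
    } finally {
      cluster.shutdown();
    }
  }
}
{code}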



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8449) Add tasks count metrics to datanode for ECWorker

2015-10-13 Thread Li Bo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954670#comment-14954670
 ] 

Li Bo commented on HDFS-8449:
-

Thanks, [~rakeshr], for the review. The failed tests seem unrelated to this 
patch. Hi [~jingzhao], could you help review the patch and commit it to trunk 
if it's OK? Then we can work on the other metric JIRAs. Thanks.

> Add tasks count metrics to datanode for ECWorker
> 
>
> Key: HDFS-8449
> URL: https://issues.apache.org/jira/browse/HDFS-8449
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-8449-000.patch, HDFS-8449-001.patch, 
> HDFS-8449-002.patch, HDFS-8449-003.patch
>
>
> This subtask tries to record the EC recovery tasks that a datanode has done, 
> including total tasks, failed tasks, and successful tasks.
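A hedged sketch of such counters using the Hadoop metrics2 annotations; the 
metric names are invented for illustration, not taken from the patch:

{code}
// Illustrative metrics2 counters for EC recovery tasks. The annotated
// fields are populated when an instance is registered with a MetricsSystem.
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

@Metrics(about = "ECWorker task counters (sketch)", context = "dfs")
public class ECWorkerMetricsSketch {
  @Metric("Total EC recovery tasks attempted")
  MutableCounterLong ecTasksTotal;
  @Metric("EC recovery tasks that failed")
  MutableCounterLong ecTasksFailed;
  @Metric("EC recovery tasks that succeeded")
  MutableCounterLong ecTasksSucceeded;

  void taskFinished(boolean success) {
    ecTasksTotal.incr();
    if (success) {
      ecTasksSucceeded.incr();
    } else {
      ecTasksFailed.incr();
    }
  }
}
{code}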



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9233) Create LICENSE.txt and NOTICES files for libhdfs++

2015-10-13 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954637#comment-14954637
 ] 

Steve Loughran commented on HDFS-9233:
--

Make sure that the RAT license checks are set up to handle this; we just had a 
lot of grief with the EC merge

> Create LICENSE.txt and NOTICES files for libhdfs++
> --
>
> Key: HDFS-9233
> URL: https://issues.apache.org/jira/browse/HDFS-9233
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Mingliang Liu
>
> We use third-party libraries that are Apache- and Google-licensed, and may be 
> adding an MIT-licensed third-party library.  We need to include the 
> appropriate license files for inclusion into Apache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9224) TestFileTruncate fails intermittently with BindException

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954680#comment-14954680
 ] 

Hudson commented on HDFS-9224:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1253 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1253/])
HDFS-9224. TestFileTruncate fails intermittently with BindException 
(vinayakumarb: rev 69b025dbbaa44395e49d1c04b90e1f65f0fc1132)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> TestFileTruncate fails intermittently with BindException
> 
>
> Key: HDFS-9224
> URL: https://issues.apache.org/jira/browse/HDFS-9224
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: HDFS-9224-002.patch, HDFS-9224.patch
>
>
> https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/478/#showFailuresLink
> {noformat}
> java.net.BindException: Problem binding to [localhost:8020] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:414)
>   at sun.nio.ch.Net.bind(Net.java:406)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:469)
>   at org.apache.hadoop.ipc.Server$Listener.(Server.java:646)
>   at org.apache.hadoop.ipc.Server.(Server.java:2399)
>   at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:358)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:692)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:630)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:833)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:812)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1505)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1248)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1017)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:889)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:821)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:480)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:439)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.setUp(TestFileTruncate.java:107)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-8575) Support User level Quota for space and Name (count)

2015-10-13 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel reassigned HDFS-8575:
---

Assignee: (was: nijel)

Keeping it unassigned as no work is planned.

> Support User level Quota for space and Name (count)
> ---
>
> Key: HDFS-8575
> URL: https://issues.apache.org/jira/browse/HDFS-8575
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: nijel
>
> I would like to have a feature in HDFS for quota management at the user 
> level. 
> Background:
> When a customer uses a multi-tenant solution, it will have many Hadoop 
> ecosystem components like Hive, HBase, YARN, etc. The base folders of these 
> components differ, e.g. /hive for Hive and /hbase for HBase. 
> Now if a user creates a file or table, it will be under the folder specific 
> to the component. If the user name is taken into account, it looks like:
> {code}
> /hive/user1/table1
> /hive/user2/table1
> /hbase/user1/Htable1
> /hbase/user2/Htable1
>  
> Same for yarn/map-reduce data and logs
> {code}
>  
> In this case, restricting the user to a certain amount of disk space or 
> files is very difficult, since the current quota management is at the 
> folder level.
>  
> Requirement: user-level quota for space and name (count). Say user1 can have 
> 100G irrespective of the folder or location used.
>  
> The idea here is to treat the file owner as the key and attribute the quota 
> to it.  The current quota system could then perform an initial check against 
> the user quota, if defined, before validating the folder quota (see the 
> sketch below).
> Note:
> This needs a change in the fsimage to store the user and quota information.
> Please have a look at this scenario. If it sounds good, I will create the 
> tasks and update the design and prototype.
> Thanks
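A minimal sketch of the proposed check order, with entirely hypothetical types 
and bookkeeping (nothing like this exists in the current quota code):

{code}
// Hypothetical sketch of "user quota first, then folder quota".
import java.util.HashMap;
import java.util.Map;

class UserQuotaCheckSketch {
  static class QuotaExceededException extends Exception {}

  // owner -> allowed bytes / bytes used (invented bookkeeping)
  final Map<String, Long> userSpaceQuota = new HashMap<>();
  final Map<String, Long> userSpaceUsed = new HashMap<>();

  void verifyQuota(String owner, long spaceDelta) throws QuotaExceededException {
    Long quota = userSpaceQuota.get(owner);           // user-level check first
    if (quota != null
        && userSpaceUsed.getOrDefault(owner, 0L) + spaceDelta > quota) {
      throw new QuotaExceededException();
    }
    verifyFolderQuota(spaceDelta);                    // then the existing folder check
  }

  void verifyFolderQuota(long spaceDelta) throws QuotaExceededException {
    // the current directory-level quota validation would run here
  }
}
{code}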



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954494#comment-14954494
 ] 

Hadoop QA commented on HDFS-9205:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m 23s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   8m 30s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 22s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 25s | The applied patch generated  7 
new checkstyle issues (total was 202, now 205). |
| {color:red}-1{color} | whitespace |   0m  1s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 32s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 34s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 12s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 187m  5s | Tests failed in hadoop-hdfs. |
| | | 234m  5s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
|   | hadoop.hdfs.web.TestWebHDFSOAuth2 |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766253/h9205_20151013.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / c60a16f |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12946/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12946/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12946/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12946/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12946/console |


This message was automatically generated.

> Do not schedule corrupt blocks for replication
> --
>
> Key: HDFS-9205
> URL: https://issues.apache.org/jira/browse/HDFS-9205
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: h9205_20151007.patch, h9205_20151007b.patch, 
> h9205_20151008.patch, h9205_20151009.patch, h9205_20151009b.patch, 
> h9205_20151013.patch
>
>
> Corrupted blocks are, by definition, blocks that cannot be read. As a
> consequence, they cannot be replicated. In UnderReplicatedBlocks, there is a
> queue for QUEUE_WITH_CORRUPT_BLOCKS, and chooseUnderReplicatedBlocks may
> choose blocks from it. Scheduling corrupted blocks for replication wastes
> resources and potentially slows down replication of the higher priority
> blocks (see the sketch below).
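> 
> An illustrative sketch of the skip (simplified data structures, not the
> actual UnderReplicatedBlocks class):
> {code}
> import java.util.ArrayList;
> import java.util.List;
> 
> class ReplicationChooser {
>   static final int QUEUE_WITH_CORRUPT_BLOCKS = 4; // lowest priority
> 
>   List<Long> chooseUnderReplicatedBlocks(
>       List<List<Long>> priorityQueues, int blocksToProcess) {
>     List<Long> chosen = new ArrayList<>();
>     for (int p = 0; p < priorityQueues.size()
>         && chosen.size() < blocksToProcess; p++) {
>       // Never schedule from the corrupt queue: those blocks cannot be
>       // read, so replicating them would only waste resources.
>       if (p == QUEUE_WITH_CORRUPT_BLOCKS) {
>         continue;
>       }
>       for (Long blockId : priorityQueues.get(p)) {
>         if (chosen.size() >= blocksToProcess) {
>           break;
>         }
>         chosen.add(blockId);
>       }
>     }
>     return chosen;
>   }
> }
> {code}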



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9046) Any Error during BPOfferService run can lead to Missing DN.

2015-10-13 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954510#comment-14954510
 ] 

nijel commented on HDFS-9046:
-

Thanks [~vinayrpet] for your time.

[~cnauroth], please have a review of this change.

> Any Error during BPOfferService run can lead to Missing DN.
> 
>
> Key: HDFS-9046
> URL: https://issues.apache.org/jira/browse/HDFS-9046
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9046_1.patch, HDFS-9046_2.patch, HDFS-9046_3.patch
>
>
> The cluster is in HA mode and each DN has only one block pool.
> The issue is that, after a failover, one DN is missing from the current
> active NN.
> Upon analysis I found that there is one exception in BPOfferService.run():
> {noformat}
> 2015-08-21 09:02:11,190 | WARN  | DataNode: 
> [[[DISK]file:/srv/BigData/hadoop/data5/dn/ 
> [DISK]file:/srv/BigData/hadoop/data4/dn/]]  heartbeating to 
> 160-149-0-114/160.149.0.114:25000 | Unexpected exception in block pool Block 
> pool BP-284203724-160.149.0.114-1438774011693 (Datanode Uuid 
> 15ce1dd7-227f-4fd2-9682-091aa6bc2b89) service to 
> 160-149-0-114/160.149.0.114:25000 | BPServiceActor.java:830
> java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:714)
> at 
> java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:950)
> at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1357)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService.execute(FsDatasetAsyncDiskService.java:172)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService.deleteAsync(FsDatasetAsyncDiskService.java:221)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.invalidate(FsDatasetImpl.java:1887)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActive(BPOfferService.java:669)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActor(BPOfferService.java:616)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.processCommand(BPServiceActor.java:856)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:671)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:822)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> After this, the particular BPOfferService is down for the rest of the
> runtime, and this NN will not have the details of this DN.
> Similar issues are discussed in the following JIRAs:
> https://issues.apache.org/jira/browse/HDFS-2882
> https://issues.apache.org/jira/browse/HDFS-7714
> Can we retry in this case too, with a larger interval, instead of shutting
> down this BPOfferService (see the sketch below)?
> I think, since these exceptions can occur randomly in the DN, it is not good
> to keep the DN running when some NN does not have its info!
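> 
> A sketch of the retry idea (not the actual BPServiceActor code; the
> intervals are assumed values):
> {code}
> class RetryingOfferService implements Runnable {
>   private volatile boolean shouldRun = true;
> 
>   @Override
>   public void run() {
>     long backoffMs = 1000L;           // assumed initial retry interval
>     final long maxBackoffMs = 60000L; // assumed cap
>     while (shouldRun) {
>       try {
>         offerServiceOnce();
>         backoffMs = 1000L; // reset after a healthy iteration
>       } catch (Throwable t) {
>         // e.g. OutOfMemoryError: unable to create new native thread.
>         // Back off and retry instead of losing the block pool service.
>         try {
>           Thread.sleep(backoffMs);
>         } catch (InterruptedException ie) {
>           Thread.currentThread().interrupt();
>           return;
>         }
>         backoffMs = Math.min(backoffMs * 2, maxBackoffMs);
>       }
>     }
>   }
> 
>   void offerServiceOnce() throws Exception {
>     // heartbeats and command processing would happen here
>   }
> }
> {code}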



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9224) TestFileTruncate fails intermittently with BindException

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954557#comment-14954557
 ] 

Hudson commented on HDFS-9224:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2462 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2462/])
HDFS-9224. TestFileTruncate fails intermittently with BindException 
(vinayakumarb: rev 69b025dbbaa44395e49d1c04b90e1f65f0fc1132)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java


> TestFileTruncate fails intermittently with BindException
> 
>
> Key: HDFS-9224
> URL: https://issues.apache.org/jira/browse/HDFS-9224
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: HDFS-9224-002.patch, HDFS-9224.patch
>
>
> https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/478/#showFailuresLink
> {noformat}
> java.net.BindException: Problem binding to [localhost:8020] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:414)
>   at sun.nio.ch.Net.bind(Net.java:406)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:469)
>   at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:646)
>   at org.apache.hadoop.ipc.Server.<init>(Server.java:2399)
>   at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:945)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:535)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:358)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:692)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:630)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:833)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1505)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1248)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1017)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:889)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:821)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:480)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:439)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.setUp(TestFileTruncate.java:107)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9224) TestFileTruncate fails intermittently with BindException

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954514#comment-14954514
 ] 

Hudson commented on HDFS-9224:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #516 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/516/])
HDFS-9224. TestFileTruncate fails intermittently with BindException 
(vinayakumarb: rev 69b025dbbaa44395e49d1c04b90e1f65f0fc1132)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> TestFileTruncate fails intermittently with BindException
> 
>
> Key: HDFS-9224
> URL: https://issues.apache.org/jira/browse/HDFS-9224
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: HDFS-9224-002.patch, HDFS-9224.patch
>
>
> https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/478/#showFailuresLink
> {noformat}
> java.net.BindException: Problem binding to [localhost:8020] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:414)
>   at sun.nio.ch.Net.bind(Net.java:406)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:469)
>   at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:646)
>   at org.apache.hadoop.ipc.Server.<init>(Server.java:2399)
>   at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:945)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:535)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:358)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:692)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:630)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:833)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1505)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1248)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1017)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:889)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:821)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:480)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:439)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.setUp(TestFileTruncate.java:107)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9224) TestFileTruncate fails intermittently with BindException

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954682#comment-14954682
 ] 

Hudson commented on HDFS-9224:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2426 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2426/])
HDFS-9224. TestFileTruncate fails intermittently with BindException 
(vinayakumarb: rev 69b025dbbaa44395e49d1c04b90e1f65f0fc1132)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java


> TestFileTruncate fails intermittently with BindException
> 
>
> Key: HDFS-9224
> URL: https://issues.apache.org/jira/browse/HDFS-9224
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: HDFS-9224-002.patch, HDFS-9224.patch
>
>
> https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/478/#showFailuresLink
> {noformat}
> java.net.BindException: Problem binding to [localhost:8020] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:414)
>   at sun.nio.ch.Net.bind(Net.java:406)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:469)
>   at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:646)
>   at org.apache.hadoop.ipc.Server.<init>(Server.java:2399)
>   at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:945)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:535)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:358)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:692)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:630)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:833)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1505)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1248)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1017)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:889)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:821)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:480)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:439)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.setUp(TestFileTruncate.java:107)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9224) TestFileTruncate fails intermittently with BindException

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954508#comment-14954508
 ] 

Hudson commented on HDFS-9224:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8615 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8615/])
HDFS-9224. TestFileTruncate fails intermittently with BindException 
(vinayakumarb: rev 69b025dbbaa44395e49d1c04b90e1f65f0fc1132)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> TestFileTruncate fails intermittently with BindException
> 
>
> Key: HDFS-9224
> URL: https://issues.apache.org/jira/browse/HDFS-9224
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: HDFS-9224-002.patch, HDFS-9224.patch
>
>
> https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/478/#showFailuresLink
> {noformat}
> java.net.BindException: Problem binding to [localhost:8020] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:414)
>   at sun.nio.ch.Net.bind(Net.java:406)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:469)
>   at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:646)
>   at org.apache.hadoop.ipc.Server.<init>(Server.java:2399)
>   at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:945)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:535)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:358)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:692)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:630)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:833)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1505)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1248)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1017)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:889)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:821)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:480)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:439)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.setUp(TestFileTruncate.java:107)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9234) WebHdfs : getContentSummary() should give quota for storage types

2015-10-13 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-9234:
-
Status: Patch Available  (was: Open)

> WebHdfs : getContentSummary() should give quota for storage types
> -
>
> Key: HDFS-9234
> URL: https://issues.apache.org/jira/browse/HDFS-9234
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-9234-001.patch
>
>
> Currently the webhdfs API for ContentSummary gives only the name quota and
> space quota, but not the storage type quotas (see the sketch below).
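> 
> A client-side sketch of what this would enable, assuming the WebHDFS
> response is wired into the existing ContentSummary type-quota accessors
> (the NameNode address is a placeholder):
> {code}
> import java.net.URI;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.ContentSummary;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.fs.StorageType;
> 
> public class TypeQuotaExample {
>   public static void main(String[] args) throws Exception {
>     FileSystem fs = FileSystem.get(
>         URI.create("webhdfs://nn-host:50070"), new Configuration());
>     ContentSummary cs = fs.getContentSummary(new Path("/dir"));
>     for (StorageType t : StorageType.values()) {
>       System.out.println(t + " quota=" + cs.getTypeQuota(t)
>           + " consumed=" + cs.getTypeConsumed(t));
>     }
>   }
> }
> {code}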



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9234) WebHdfs : getContentSummary() should give quota for storage types

2015-10-13 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-9234:
-
Attachment: HDFS-9234-001.patch

Attached patch...
Please review...

> WebHdfs : getContentSummary() should give quota for storage types
> -
>
> Key: HDFS-9234
> URL: https://issues.apache.org/jira/browse/HDFS-9234
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-9234-001.patch
>
>
> Currently the webhdfs API for ContentSummary gives only the name quota and
> space quota, but not the storage type quotas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8630) WebHDFS : Support get/setStoragePolicy

2015-10-13 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954912#comment-14954912
 ] 

Vinayakumar B commented on HDFS-8630:
-

Thanks [~surendrasingh] for the update.

Patch looks almost good.
Only a few nits.

1. {{webHfsPolicy}} typo, in test.
2. I think, to make the tests run faster, you need not run datanodes for
these tests. You can create an empty file to verify setting the storage
policy, as in the sketch below.
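
A minimal sketch of such a test, assuming the setStoragePolicy/getStoragePolicy
methods this patch adds to WebHdfsFileSystem ("COLD" is just an example policy):
{code}
import static org.junit.Assert.assertEquals;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.web.WebHdfsConstants;
import org.apache.hadoop.hdfs.web.WebHdfsFileSystem;
import org.apache.hadoop.hdfs.web.WebHdfsTestUtil;
import org.junit.Test;

public class TestWebHdfsStoragePolicy {
  @Test
  public void testSetAndGetStoragePolicy() throws Exception {
    Configuration conf = new HdfsConfiguration();
    // No datanodes: an empty file is enough to exercise the policy APIs.
    MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(conf).numDataNodes(0).build();
    try {
      cluster.waitActive();
      Path file = new Path("/empty-file");
      // Create the zero-length file through the regular DFS client.
      cluster.getFileSystem().create(file).close();
      WebHdfsFileSystem webHdfs = WebHdfsTestUtil.getWebHdfsFileSystem(
          conf, WebHdfsConstants.WEBHDFS_SCHEME);
      webHdfs.setStoragePolicy(file, "COLD");
      assertEquals("COLD", webHdfs.getStoragePolicy(file).getName());
    } finally {
      cluster.shutdown();
    }
  }
}
{code}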

> WebHDFS : Support get/setStoragePolicy 
> ---
>
> Key: HDFS-8630
> URL: https://issues.apache.org/jira/browse/HDFS-8630
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-8630.001.patch, HDFS-8630.002.patch, 
> HDFS-8630.003.patch, HDFS-8630.patch
>
>
> User can set and get the storage policy from the filesystem object. The
> same operations can be allowed through the REST API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9139) Enable parallel JUnit tests for HDFS Pre-commit

2015-10-13 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-9139:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

> Enable parallel JUnit tests for HDFS Pre-commit 
> 
>
> Key: HDFS-9139
> URL: https://issues.apache.org/jira/browse/HDFS-9139
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Fix For: 2.8.0
>
> Attachments: HDFS-9139.01.patch, HDFS-9139.02.patch, 
> HDFS-9139.03.patch, HDFS-9139.04.patch
>
>
> Forked from HADOOP-11984.
> With the initial and significant work from [~cnauroth], this Jira is to
> track and support parallel test runs for HDFS Precommit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9139) Enable parallel JUnit tests for HDFS Pre-commit

2015-10-13 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954970#comment-14954970
 ] 

Vinayakumar B commented on HDFS-9139:
-

Committed to trunk and branch-2.
Thanks [~cnauroth] for the major part of the contribution.
Thanks all for the support.

> Enable parallel JUnit tests for HDFS Pre-commit 
> 
>
> Key: HDFS-9139
> URL: https://issues.apache.org/jira/browse/HDFS-9139
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Fix For: 2.8.0
>
> Attachments: HDFS-9139.01.patch, HDFS-9139.02.patch, 
> HDFS-9139.03.patch, HDFS-9139.04.patch
>
>
> Forked from HADOOP-11984.
> With the initial and significant work from [~cnauroth], this Jira is to
> track and support parallel test runs for HDFS Precommit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9139) Enable parallel JUnit tests for HDFS Pre-commit

2015-10-13 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-9139:

Component/s: test

> Enable parallel JUnit tests for HDFS Pre-commit 
> 
>
> Key: HDFS-9139
> URL: https://issues.apache.org/jira/browse/HDFS-9139
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Fix For: 2.8.0
>
> Attachments: HDFS-9139.01.patch, HDFS-9139.02.patch, 
> HDFS-9139.03.patch, HDFS-9139.04.patch
>
>
> Forked from HADOOP-11984.
> With the initial and significant work from [~cnauroth], this Jira is to
> track and support parallel test runs for HDFS Precommit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9160) [OIV-Doc] : Missing details of "delimited" for processor options

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954991#comment-14954991
 ] 

Hudson commented on HDFS-9160:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2464 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2464/])
HDFS-9160. [OIV-Doc] : Missing details of 'delimited' for processor 
(vinayakumarb: rev caa711b660ce73c0f6bf97e3499d157a3a2daaea)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsImageViewer.md
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> [OIV-Doc] : Missing details of "delimited" for processor options
> 
>
> Key: HDFS-9160
> URL: https://issues.apache.org/jira/browse/HDFS-9160
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Fix For: 2.8.0
>
> Attachments: HDFS-9160.01.patch
>
>
> Missing details of "delimited" for processor options
> {noformat}
> -p|--processor processor  Specify the image processor to apply against 
> the image file. Currently valid options are Web (default), XML and 
> FileDistribution.
> {noformat}
> Add the delimited option here and explain it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8631) WebHDFS : Support list/setQuota

2015-10-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954907#comment-14954907
 ] 

Hadoop QA commented on HDFS-8631:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  24m  8s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   8m 45s | There were no new javac warning 
messages. |
| {color:red}-1{color} | javadoc |  11m 21s | The applied patch generated  3  
additional warning messages. |
| {color:green}+1{color} | release audit |   0m 26s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 55s | The applied patch generated  1 
new checkstyle issues (total was 140, now 140). |
| {color:red}-1{color} | whitespace |   0m  1s | The patch has 8  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 47s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 37s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   7m  8s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests |   7m 49s | Tests failed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests | 198m 13s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 33s | Tests passed in 
hadoop-hdfs-client. |
| | | 264m 26s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.fs.TestFilterFileSystem |
|   | hadoop.fs.TestHarFileSystem |
|   | hadoop.ipc.TestIPC |
|   | hadoop.hdfs.TestDFSUpgradeFromImage |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766286/HDFS-8631-001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 5b6bae0 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12949/artifact/patchprocess/diffJavadocWarnings.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12949/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12949/artifact/patchprocess/whitespace.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12949/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12949/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12949/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12949/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12949/console |


This message was automatically generated.

> WebHDFS : Support list/setQuota
> ---
>
> Key: HDFS-8631
> URL: https://issues.apache.org/jira/browse/HDFS-8631
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-8631-001.patch
>
>
> User is able to do quota management from the filesystem object. The same
> operations can be allowed through the REST API (see the sketch below).
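> 
> A sketch of the filesystem-side operations this issue would expose over
> REST (the NameNode address and quota values are placeholders):
> {code}
> import java.net.URI;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.ContentSummary;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.hdfs.DistributedFileSystem;
> 
> public class QuotaExample {
>   public static void main(String[] args) throws Exception {
>     DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(
>         URI.create("hdfs://nn-host:8020"), new Configuration());
>     Path dir = new Path("/dir");
>     // namespace quota of 10000 names, space quota of 10 GB
>     dfs.setQuota(dir, 10000L, 10L * 1024 * 1024 * 1024);
>     ContentSummary cs = dfs.getContentSummary(dir);
>     System.out.println("name quota=" + cs.getQuota()
>         + " space quota=" + cs.getSpaceQuota());
>   }
> }
> {code}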



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6440) Support more than 2 NameNodes

2015-10-13 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954934#comment-14954934
 ] 

Vinayakumar B commented on HDFS-6440:
-

Can this support be merged to branch-2?

> Support more than 2 NameNodes
> -
>
> Key: HDFS-6440
> URL: https://issues.apache.org/jira/browse/HDFS-6440
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: auto-failover, ha, namenode
>Affects Versions: 2.4.0
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: 3.0.0
>
> Attachments: Multiple-Standby-NameNodes_V1.pdf, 
> hdfs-6440-cdh-4.5-full.patch, hdfs-6440-trunk-v1.patch, 
> hdfs-6440-trunk-v1.patch, hdfs-6440-trunk-v3.patch, hdfs-6440-trunk-v4.patch, 
> hdfs-6440-trunk-v5.patch, hdfs-6440-trunk-v6.patch, hdfs-6440-trunk-v7.patch, 
> hdfs-6440-trunk-v8.patch, hdfs-multiple-snn-trunk-v0.patch
>
>
> Most of the work is already done to support more than 2 NameNodes (one 
> active, one standby). This would be the last bit to support running multiple 
> _standby_ NameNodes; one of the standbys should be available for fail-over.
> Mostly, this is a matter of updating how we parse configurations, some 
> complexity around managing the checkpointing, and updating a whole lot of 
> tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9160) [OIV-Doc] : Missing details of "delimited" for processor options

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954941#comment-14954941
 ] 

Hudson commented on HDFS-9160:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #518 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/518/])
HDFS-9160. [OIV-Doc] : Missing details of 'delimited' for processor 
(vinayakumarb: rev caa711b660ce73c0f6bf97e3499d157a3a2daaea)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsImageViewer.md


> [OIV-Doc] : Missing details of "delimited" for processor options
> 
>
> Key: HDFS-9160
> URL: https://issues.apache.org/jira/browse/HDFS-9160
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Fix For: 2.8.0
>
> Attachments: HDFS-9160.01.patch
>
>
> Missing details of "delimited" for processor options
> {noformat}
> -p|--processor processor  Specify the image processor to apply against 
> the image file. Currently valid options are Web (default), XML and 
> FileDistribution.
> {noformat}
> Add the delimited option here and explain it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9044) Give Priority to FavouredNodes , before selecting nodes from FavouredNode's Node Group

2015-10-13 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954827#comment-14954827
 ] 

Vinayakumar B commented on HDFS-9044:
-

Hi [~andreina], thanks for the patch,

Patch looks almost good.

Here are a few minor comments:

1. The below lines from
{{BlockPlacementPolicyWithNodeGroup#chooseFavouredNodes(..)}} could simply be
replaced with {{super.chooseFavouredNodes(..)}}:
{code}+for (int i = 0; i < favoredNodes.size() && results.size() < 
numOfReplicas;
+i++) {
+  DatanodeDescriptor favoredNode = favoredNodes.get(i);
+  // Choose a single node which is local to favoredNode.
+  // 'results' is updated within chooseLocalNode
+  DatanodeStorageInfo target = null;
+  try {
+target =
+chooseLocalStorage(favoredNode, favoriteAndExcludedNodes,
+  blocksize, maxNodesPerRack, results, avoidStaleNodes,
+  storageTypes, false);
+  } catch (NotEnoughReplicasException e) {
+// catch Exception and continue with other favored nodes
+continue;
+  }
+  if (target == null) {
+LOG.warn("Could not find a target for file " + src
++ " with favored node " + favoredNode);
+continue;
+  }
+  favoriteAndExcludedNodes.add(target.getDatanodeDescriptor());
+}{code}

2. {{checkFavoredNodePartOfTarget(..)}} could be renamed to {{isNodeChosen(..)}}

3. {{clusterMapNodeGroup.getNodeGroup(favoredNode.getNetworkLocation())}} can 
be extracted to a variable {{scope}} for better readability

4. I think it's better to update the javadoc for
{{BlockPlacementPolicyWithNodeGroup#chooseLocalStorage(..)}} by mentioning
that fallback to nodegroup/rack will happen if the flag
{{fallbackToNodeGroupAndLocalRack}} is set.

> Give Priority to FavouredNodes , before selecting nodes from FavouredNode's 
> Node Group
> --
>
> Key: HDFS-9044
> URL: https://issues.apache.org/jira/browse/HDFS-9044
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: J.Andreina
>Assignee: J.Andreina
> Attachments: HDFS-9044.1.patch, HDFS-9044.2.patch, HDFS-9044.3.patch, 
> HDFS-9044.4.patch
>
>
> The intention of passing favored nodes is to place replicas on the favored
> nodes. The current behavior with node groups is:
>   If a favored node is not available, it goes to one node among the favored
> node's nodegroup.
> {noformat}
> Say for example:
>   1)I need 3 replicas and passed 5 favored nodes.
>   2)Out of 5 favored nodes 3 favored nodes are not good.
>   3)Then based on BlockPlacementPolicyWithNodeGroup out of 5 targets node 
> returned , 3 will be random node from 3 bad FavoredNode's nodegroup. 
>   4)Then there is a probability that all my 3 replicas are placed on 
> Random node from FavoredNodes's nodegroup , instead of giving priority to 2 
> favored nodes returned as target.
> {noformat}
> *Instead of returning 5 targets in step 3 above, we can return the 2 good
> favored nodes as targets,*
> *and the remaining 1 needed replica can be chosen from a random node of a
> bad FavoredNode's nodegroup.*
> This will make sure that the FavoredNodes are given priority (see the
> sketch below).
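> 
> An illustrative sketch of the prioritization (simplified stand-ins, not
> the actual BlockPlacementPolicyWithNodeGroup code):
> {code}
> import java.util.ArrayList;
> import java.util.List;
> 
> class FavoredNodeChooser {
>   List<String> chooseTargets(int numReplicas, List<String> favoredNodes,
>       List<String> nodeGroupFallback) {
>     List<String> targets = new ArrayList<>();
>     // 1. Healthy favored nodes get strict priority.
>     for (String node : favoredNodes) {
>       if (targets.size() == numReplicas) {
>         return targets;
>       }
>       if (isGood(node)) {
>         targets.add(node);
>       }
>     }
>     // 2. Only the still-missing replicas come from the fallback scope
>     //    (random nodes from the bad favored nodes' nodegroups).
>     for (String node : nodeGroupFallback) {
>       if (targets.size() == numReplicas) {
>         break;
>       }
>       if (isGood(node) && !targets.contains(node)) {
>         targets.add(node);
>       }
>     }
>     return targets;
>   }
> 
>   boolean isGood(String node) {
>     return true; // placeholder for the real datanode health check
>   }
> }
> {code}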



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9160) [OIV-Doc] : Missing details of "delimited" for processor options

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954879#comment-14954879
 ] 

Hudson commented on HDFS-9160:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8617 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8617/])
HDFS-9160. [OIV-Doc] : Missing details of 'delimited' for processor 
(vinayakumarb: rev caa711b660ce73c0f6bf97e3499d157a3a2daaea)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsImageViewer.md
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> [OIV-Doc] : Missing details of "delimited" for processor options
> 
>
> Key: HDFS-9160
> URL: https://issues.apache.org/jira/browse/HDFS-9160
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Fix For: 2.8.0
>
> Attachments: HDFS-9160.01.patch
>
>
> Missing details of "delimited" for processor options
> {noformat}
> -p|--processor processor  Specify the image processor to apply against 
> the image file. Currently valid options are Web (default), XML and 
> FileDistribution.
> {noformat}
> Add the delimited option here and explain it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9224) TestFileTruncate fails intermittently with BindException

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954792#comment-14954792
 ] 

Hudson commented on HDFS-9224:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #488 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/488/])
HDFS-9224. TestFileTruncate fails intermittently with BindException 
(vinayakumarb: rev 69b025dbbaa44395e49d1c04b90e1f65f0fc1132)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> TestFileTruncate fails intermittently with BindException
> 
>
> Key: HDFS-9224
> URL: https://issues.apache.org/jira/browse/HDFS-9224
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: HDFS-9224-002.patch, HDFS-9224.patch
>
>
> https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/478/#showFailuresLink
> {noformat}
> java.net.BindException: Problem binding to [localhost:8020] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:414)
>   at sun.nio.ch.Net.bind(Net.java:406)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:469)
>   at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:646)
>   at org.apache.hadoop.ipc.Server.<init>(Server.java:2399)
>   at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:945)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:535)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:358)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:692)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:630)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:833)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1505)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1248)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1017)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:889)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:821)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:480)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:439)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.setUp(TestFileTruncate.java:107)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7264) The last datanode in a pipeline should send a heartbeat when there is no traffic

2015-10-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954790#comment-14954790
 ] 

Hadoop QA commented on HDFS-7264:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  26m 59s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |  10m 50s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  13m 30s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 37s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 39s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 2  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   2m 13s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 44s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   6m 36s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   4m 19s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 214m 55s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 43s | Tests passed in 
hadoop-hdfs-client. |
| | | 284m 10s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.ha.TestDNFencing |
|   | hadoop.hdfs.server.namenode.ha.TestHAFsck |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766265/h7264_20151012.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 69b025d |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12948/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12948/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12948/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12948/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12948/console |


This message was automatically generated.

> The last datanode in a pipeline should send a heartbeat when there is no 
> traffic
> 
>
> Key: HDFS-7264
> URL: https://issues.apache.org/jira/browse/HDFS-7264
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>  Labels: BB2015-05-TBR
> Attachments: h7264_20141017.patch, h7264_20141020.patch, 
> h7264_20151012.patch
>
>
> When the client is writing slowly, the client will send a heartbeat to signal 
> that the connection is still alive.  This case works fine.
> However, when a client is writing fast but some of the datanodes in the 
> pipeline are busy, a PacketResponder may get a timeout since no ack is sent 
> from the upstream datanode.  We suggest that the last datanode in a pipeline 
> should send a heartbeat when there is no traffic.
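> 
> A sketch of the suggestion (not the actual PacketResponder; the interval
> is an assumed value):
> {code}
> class LastNodeResponder {
>   private static final long HEARTBEAT_INTERVAL_MS = 30000L; // assumed
>   private volatile long lastAckTimeMs = System.currentTimeMillis();
> 
>   void ackLoop() throws InterruptedException {
>     while (!Thread.currentThread().isInterrupted()) {
>       if (pollAckToSend()) {
>         lastAckTimeMs = System.currentTimeMillis();
>       } else if (System.currentTimeMillis() - lastAckTimeMs
>           >= HEARTBEAT_INTERVAL_MS) {
>         // No traffic: send a heartbeat ack so upstream PacketResponders
>         // do not time out while waiting for acks.
>         sendHeartbeatAck();
>         lastAckTimeMs = System.currentTimeMillis();
>       } else {
>         Thread.sleep(100);
>       }
>     }
>   }
> 
>   boolean pollAckToSend() { return false; } // placeholder
>   void sendHeartbeatAck() { }               // placeholder
> }
> {code}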



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9160) [OIV-Doc] : Missing details of "delimited" for processor options

2015-10-13 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954855#comment-14954855
 ] 

Vinayakumar B commented on HDFS-9160:
-

+1

> [OIV-Doc] : Missing details of "delimited" for processor options
> 
>
> Key: HDFS-9160
> URL: https://issues.apache.org/jira/browse/HDFS-9160
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9160.01.patch
>
>
> Missing details of "delimited" for processor options
> {noformat}
> -p|--processor processor  Specify the image processor to apply against 
> the image file. Currently valid options are Web (default), XML and 
> FileDistribution.
> {noformat}
> Add the delimited option here and explain it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9157) [OEV and OIV] : Unnecessary parsing for mandatory arguments if "-h" option is specified as the only option

2015-10-13 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954851#comment-14954851
 ] 

Vinayakumar B commented on HDFS-9157:
-


Thanks [~nijel] for the update on the patch.

Have a few nits.

1. In OEV, the redundant comment can be removed, similar to the OIV change.
{code}
-if(cmd.hasOption("h")) { // print help and exit
+if (cmd.hasOption("h")) { // print help and exit
+  // print help and exit with non zero exit code since
+  // it is not expected to give help and other options together.
   printHelp();
{code}

2. {{isHelpOption(..)}}: it would look better if this method took only one
option as input instead of an array (see the sketch below).


I noticed that the return code is changed to 0 in OEV when no args are
passed. This behaviour is similar to OIV; IMO this is fine.

Rest all is fine.
+1 once above nits addressed
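
For nit 2, a sketch of the single-option form (the "--help" alias is just
an illustration):
{code}
class OptionUtil {
  // True if the single given argument is a help flag.
  static boolean isHelpOption(String option) {
    return "-h".equals(option) || "--help".equals(option);
  }
}
// Hypothetical call site, before required-argument parsing:
//   if (argv.length == 1 && OptionUtil.isHelpOption(argv[0])) {
//     printHelp();
//     return 0;
//   }
{code}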

> [OEV and OIV] : Unnecessary parsing for mandatory arguments if "-h" option 
> is specified as the only option
> ---
>
> Key: HDFS-9157
> URL: https://issues.apache.org/jira/browse/HDFS-9157
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9157_1.patch, HDFS-9157_2.patch, HDFS-9157_3.patch
>
>
> In both tools, if "-h" is specified as the only option, it throws an error
> saying input and output are not specified.
> {noformat}
> master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs oev -h
> Error parsing command-line options: Missing required options: o, i
> Usage: bin/hdfs oev [OPTIONS] -i INPUT_FILE -o OUTPUT_FILE
> {noformat}
> In the code, the parsing happens before the "-h" option is checked.
> Code can be added to return right after an initial check:
> {code}
> // argv[0] is the only argument here; compare with equals(), not ==
> if (argv.length == 1 && "-h".equals(argv[0])) {
>   printHelp();
>   return 0;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9160) [OIV-Doc] : Missing details of "delimited" for processor options

2015-10-13 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-9160:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2.
Thanks [~nijel].

> [OIV-Doc] : Missing details of "delimited" for processor options
> 
>
> Key: HDFS-9160
> URL: https://issues.apache.org/jira/browse/HDFS-9160
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Fix For: 2.8.0
>
> Attachments: HDFS-9160.01.patch
>
>
> Missing details of "delimited" for processor options
> {noformat}
> -p|--processor processor  Specify the image processor to apply against 
> the image file. Currently valid options are Web (default), XML and 
> FileDistribution.
> {noformat}
> Add the delimited option here and explain it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9057) allow/disallow snapshots via webhdfs

2015-10-13 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954885#comment-14954885
 ] 

Vinayakumar B commented on HDFS-9057:
-

Hi [~brahmareddy],

Thanks for updating the patch.

For allow/disallow ops, you can also add a test like the one sketched below.

1. Allow snapshots using webhdfs and check whether
dfs.getSnapshottableDirListing() lists the new snapshottable dir.
2. Disallow snapshots using webhdfs and check
dfs.getSnapshottableDirListing() again; it should not list the dir now.
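
A minimal sketch of those checks (assuming the allowSnapshot/disallowSnapshot
methods this patch adds to WebHdfsFileSystem):
{code}
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;
import org.apache.hadoop.hdfs.web.WebHdfsFileSystem;

class SnapshotToggleCheck {
  static boolean isListed(DistributedFileSystem dfs, Path dir)
      throws Exception {
    SnapshottableDirectoryStatus[] dirs = dfs.getSnapshottableDirListing();
    if (dirs == null) {
      return false;
    }
    for (SnapshottableDirectoryStatus s : dirs) {
      if (s.getFullPath().equals(dir)) {
        return true;
      }
    }
    return false;
  }

  static void verify(WebHdfsFileSystem webHdfs, DistributedFileSystem dfs)
      throws Exception {
    Path dir = new Path("/snap-dir");
    webHdfs.allowSnapshot(dir);    // added by the patch under review
    assert isListed(dfs, dir);
    webHdfs.disallowSnapshot(dir); // added by the patch under review
    assert !isListed(dfs, dir);
  }
}
{code}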

Rest all looks good.

[~wheat9], do you want to take a look?

> allow/disallow snapshots via webhdfs
> 
>
> Key: HDFS-9057
> URL: https://issues.apache.org/jira/browse/HDFS-9057
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9057-002.patch, HDFS-9057.patch
>
>
> We should be able to allow and disallow directories for snapshotting via 
> WebHDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9139) Enable parallel JUnit tests for HDFS Pre-commit

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14955040#comment-14955040
 ] 

Hudson commented on HDFS-9139:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #519 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/519/])
HDFS-9139. Enable parallel JUnit tests for HDFS Pre-commit (Contributed 
(vinayakumarb: rev 39581e3be2aaeb1eeb7fb98b6bdecd8d4e3c7269)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeRespectsBindHostKeys.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* dev-support/test-patch.sh
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestHttpsFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/oauth2/TestClientCredentialTimeBasedTokenRefresher.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestSeveralNameNodes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestEditLogTailer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferTestCase.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAAppend.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsTokens.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeHttpServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/TestSecureNNWithQJM.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestSWebHdfsFileContextMainOperations.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockScanner.java
* hadoop-hdfs-project/hadoop-hdfs/pom.xml


> Enable parallel JUnit tests for HDFS Pre-commit 
> 
>
> Key: HDFS-9139
> URL: https://issues.apache.org/jira/browse/HDFS-9139
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Fix For: 2.8.0
>
> Attachments: HDFS-9139.01.patch, HDFS-9139.02.patch, 
> HDFS-9139.03.patch, HDFS-9139.04.patch
>
>
> Forked from HADOOP-11984.
> With the initial and significant work from [~cnauroth], this Jira is to
> track and support parallel test runs for HDFS Precommit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9232) Shouldn't start block recovery if block doesn't have enough replicas

2015-10-13 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14955039#comment-14955039
 ] 

Kihwal Lee commented on HDFS-9232:
--

It seems related to HDFS-8344 or may even be a dupe.

> Shouldn't start block recovery if block doesn't have enough replicas
> --
>
> Key: HDFS-9232
> URL: https://issues.apache.org/jira/browse/HDFS-9232
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Walter Su
>Assignee: Walter Su
> Attachments: HDFS-9232.01.patch
>
>
> from HDFS-8406:
> {quote}
> Before the primary DN calls commitBlockSynchronization, it synchronizes 2
> RBW replicas and makes them finalized. Then the primary DN calls
> commitBlockSynchronization to complete the lastBlock and close the file.
> The question is: with dfs.namenode.replication.min set to 3, the last block
> can't be completed. The NameNode shouldn't issue blockRecovery in the first
> place, because the lastBlock can't be completed anyway.
> {quote}
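> 
> An illustrative NameNode-side gate matching this reasoning (names are
> simplified stand-ins):
> {code}
> class RecoveryGate {
>   private final int minReplication; // dfs.namenode.replication.min
> 
>   RecoveryGate(int minReplication) {
>     this.minReplication = minReplication;
>   }
> 
>   boolean shouldStartRecovery(int numLiveReplicas) {
>     // With fewer live replicas than the minimum, the last block could
>     // never be completed after recovery, so don't issue recovery.
>     return numLiveReplicas >= minReplication;
>   }
> }
> {code}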



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9139) Enable parallel JUnit tests for HDFS Pre-commit

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14955090#comment-14955090
 ] 

Hudson commented on HDFS-9139:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2465 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2465/])
HDFS-9139. Enable parallel JUnit tests for HDFS Pre-commit (Contributed 
(vinayakumarb: rev 39581e3be2aaeb1eeb7fb98b6bdecd8d4e3c7269)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestSWebHdfsFileContextMainOperations.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsTokens.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestSeveralNameNodes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeHttpServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/TestSecureNNWithQJM.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferTestCase.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestHttpsFileSystem.java
* hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockScanner.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestEditLogTailer.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeRespectsBindHostKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/oauth2/TestClientCredentialTimeBasedTokenRefresher.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAAppend.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeMetrics.java
* dev-support/test-patch.sh


> Enable parallel JUnit tests for HDFS Pre-commit 
> 
>
> Key: HDFS-9139
> URL: https://issues.apache.org/jira/browse/HDFS-9139
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Fix For: 2.8.0
>
> Attachments: HDFS-9139.01.patch, HDFS-9139.02.patch, 
> HDFS-9139.03.patch, HDFS-9139.04.patch
>
>
> Forked from HADOOP-11984. Building on the initial and significant work from 
> [~cnauroth], this JIRA tracks and supports running parallel tests for the HDFS 
> pre-commit build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9160) [OIV-Doc] : Missing details of "delimited" for processor options

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14955091#comment-14955091
 ] 

Hudson commented on HDFS-9160:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #530 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/530/])
HDFS-9160. [OIV-Doc] : Missing details of 'delimited' for processor 
(vinayakumarb: rev caa711b660ce73c0f6bf97e3499d157a3a2daaea)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsImageViewer.md
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> [OIV-Doc] : Missing details of "delimited" for processor options
> 
>
> Key: HDFS-9160
> URL: https://issues.apache.org/jira/browse/HDFS-9160
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Fix For: 2.8.0
>
> Attachments: HDFS-9160.01.patch
>
>
> Missing details of "delimited" for processor options
> {noformat}
> -p|--processor processor  Specify the image processor to apply against 
> the image file. Currently valid options are Web (default), XML and 
> FileDistribution.
> {noformat}
> Add the "delimited" processor option here and explain it.
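
For reference, the Delimited processor is typically invoked along these lines; 
this is a sketch from memory, and the exact flag names and the example fsimage 
filename should be taken from the updated HdfsImageViewer.md:

{noformat}
hdfs oiv -p Delimited -delimiter "," -i fsimage_0000000000000000042 -o fsimage.csv
{noformat}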



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9235) hdfs-native-client build getting errors when built with cmake 2.6

2015-10-13 Thread Eric Payne (JIRA)
Eric Payne created HDFS-9235:


 Summary: hdfs-native-client build getting errors when built with 
cmake 2.6
 Key: HDFS-9235
 URL: https://issues.apache.org/jira/browse/HDFS-9235
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 3.0.0, 2.7.2
Reporter: Eric Payne
Assignee: Eric Payne
Priority: Minor


During the hdfs-native-client code move done as part of HDFS-9170, the cmake 
minimum version was changed from 2.6 to 2.8. This JIRA will change the value 
back to 2.6.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8676) Delayed rolling upgrade finalization can cause heartbeat expiration

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14955102#comment-14955102
 ] 

Hudson commented on HDFS-8676:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8619 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8619/])
HDFS-8676. Delayed rolling upgrade finalization can cause heartbeat (kihwal: 
rev 5b43db47a313decccdcca8f45c5708aab46396df)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Delayed rolling upgrade finalization can cause heartbeat expiration
> ---
>
> Key: HDFS-8676
> URL: https://issues.apache.org/jira/browse/HDFS-8676
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Walter Su
>Priority: Critical
> Attachments: HDFS-8676.01.patch, HDFS-8676.02.patch
>
>
> In big busy clusters where the deletion rate is also high, a lot of blocks 
> can pile up in the datanode trash directories until an upgrade is finalized.  
> When it is finally finalized, the deletion of trash is done in the service 
> actor thread's context synchronously.  This blocks the heartbeat and can 
> cause heartbeat expiration.  
> We have seen a namenode losing hundreds of nodes after a delayed upgrade 
> finalization.  The deletion of trash directories should be made asynchronous.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9235) hdfs-native-client build getting errors when built with cmake 2.6

2015-10-13 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated HDFS-9235:
-
Attachment: HDFS-9235.001.patch

> hdfs-native-client build getting errors when built with cmake 2.6
> -
>
> Key: HDFS-9235
> URL: https://issues.apache.org/jira/browse/HDFS-9235
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0, 2.7.2
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Minor
> Attachments: HDFS-9235.001.patch
>
>
> During the hdfs-native-client code move done as part of HDFS-9170, the cmake 
> minimum version was changed from 2.6 to 2.8. This JIRA will change the value 
> back to 2.6.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9157) [OEV and OIV] : Unnecessary parsing for mandatory arguments if "-h" option is specified as the only option

2015-10-13 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-9157:

Attachment: HDFS-9157_4.patch

Thanks [~vinayrpet] for the review.
Updated the patch per the comments.

Please have a look.

> [OEV and OIV] : Unnecessary parsing for mandatory arguments if "-h" option 
> is specified as the only option
> ---
>
> Key: HDFS-9157
> URL: https://issues.apache.org/jira/browse/HDFS-9157
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9157_1.patch, HDFS-9157_2.patch, HDFS-9157_3.patch, 
> HDFS-9157_4.patch
>
>
> In both tools, if "-h" is specified as the only option, an error is thrown 
> because the input and output are not specified.
> {noformat}
> master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs oev -h
> Error parsing command-line options: Missing required options: o, i
> Usage: bin/hdfs oev [OPTIONS] -i INPUT_FILE -o OUTPUT_FILE
> {noformat}
> In the code, the parsing happens before the "-h" option is checked.
> Code can be added to return right after an initial check, for example:
> {code}
> if (argv.length == 1 && "-h".equals(argv[0])) {
>   printHelp();
>   return 0;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9235) hdfs-native-client build getting errors when built with cmake 2.6

2015-10-13 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated HDFS-9235:
-
Target Version/s: 3.0.0, 2.8.0  (was: 3.0.0, 2.7.2)

> hdfs-native-client build getting errors when built with cmake 2.6
> -
>
> Key: HDFS-9235
> URL: https://issues.apache.org/jira/browse/HDFS-9235
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0, 2.8.0
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Minor
> Attachments: HDFS-9235.001.patch
>
>
> During the hdfs-native-client code move done as part of HDFS-9170, the cmake 
> minimum version was changed from 2.6 to 2.8. This JIRA will change the value 
> back to 2.6.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9235) hdfs-native-client build getting errors when built with cmake 2.6

2015-10-13 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated HDFS-9235:
-
Affects Version/s: (was: 2.7.2)
   2.8.0

> hdfs-native-client build getting errors when built with cmake 2.6
> -
>
> Key: HDFS-9235
> URL: https://issues.apache.org/jira/browse/HDFS-9235
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0, 2.8.0
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Minor
> Attachments: HDFS-9235.001.patch
>
>
> During the hdfs-native-client code move done as part of HDFS-9170, the cmake 
> minimum version was changed from 2.6 to 2.8. This JIRA will change the value 
> back to 2.6.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HDFS-9235) hdfs-native-client build getting errors when built with cmake 2.6

2015-10-13 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-9235 started by Eric Payne.

> hdfs-native-client build getting errors when built with cmake 2.6
> -
>
> Key: HDFS-9235
> URL: https://issues.apache.org/jira/browse/HDFS-9235
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0, 2.8.0
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Minor
> Attachments: HDFS-9235.001.patch
>
>
> During the hdfs-native-client code move done as part of HDFS-9170, the cmake 
> minimum version was changed from 2.6 to 2.8. This JIRA will change the value 
> back to 2.6.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work stopped] (HDFS-9235) hdfs-native-client build getting errors when built with cmake 2.6

2015-10-13 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-9235 stopped by Eric Payne.

> hdfs-native-client build getting errors when built with cmake 2.6
> -
>
> Key: HDFS-9235
> URL: https://issues.apache.org/jira/browse/HDFS-9235
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0, 2.8.0
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Minor
> Attachments: HDFS-9235.001.patch
>
>
> During the hdfs-native-client code move done as part of HDFS-9170, the cmake 
> minimum version was changed from 2.6 to 2.8. This JIRA will change the value 
> back to 2.6.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8647) Abstract BlockManager's rack policy into BlockPlacementPolicy

2015-10-13 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-8647:
---
Attachment: HDFS-8647-005.patch

> Abstract BlockManager's rack policy into BlockPlacementPolicy
> -
>
> Key: HDFS-8647
> URL: https://issues.apache.org/jira/browse/HDFS-8647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-8647-001.patch, HDFS-8647-002.patch, 
> HDFS-8647-003.patch, HDFS-8647-004.patch, HDFS-8647-004.patch, 
> HDFS-8647-005.patch
>
>
> Sometimes we want the namenode to use an alternative block placement policy, 
> such as upgrade domains in HDFS-7541.
> BlockManager has built-in assumptions about rack policy in functions such as 
> useDelHint and blockHasEnoughRacks. That means that whenever we add a new block 
> placement policy, we need to modify BlockManager to account for it. Ideally 
> BlockManager should ask the BlockPlacementPolicy object instead. That would 
> allow us to provide a new BlockPlacementPolicy without changing BlockManager.
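
To illustrate the shape of the abstraction, here is a sketch with deliberately 
simplified types (racks as plain strings); the committed patch's actual 
signatures differ. BlockManager would delegate the rack check to the policy 
object instead of hard-coding it:

{code}
// Sketch only: simplified types, not the committed HDFS-8647 API.
abstract class BlockPlacementPolicy {
  /** True if the replica racks satisfy this policy for the replication. */
  abstract boolean isPlacementSatisfied(String[] racks, int replication);
}

// Default rule: with more than one replica, require at least two racks.
class DefaultRackPolicy extends BlockPlacementPolicy {
  @Override
  boolean isPlacementSatisfied(String[] racks, int replication) {
    if (replication <= 1 || racks.length <= 1) {
      return true;
    }
    for (String r : racks) {
      if (!r.equals(racks[0])) {
        return true;  // found a second distinct rack
      }
    }
    return false;
  }
}
{code}

BlockManager's blockHasEnoughRacks would then reduce to a call on the configured 
policy, so an upgrade-domain policy can be plugged in without touching 
BlockManager.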



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8676) Delayed rolling upgrade finalization can cause heartbeat expiration

2015-10-13 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14955114#comment-14955114
 ] 

Kihwal Lee commented on HDFS-8676:
--

To fix this in 2.7, we need to bring in HDFS-7645 and HDFS-8656.

> Delayed rolling upgrade finalization can cause heartbeat expiration
> ---
>
> Key: HDFS-8676
> URL: https://issues.apache.org/jira/browse/HDFS-8676
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Walter Su
>Priority: Critical
> Attachments: HDFS-8676.01.patch, HDFS-8676.02.patch
>
>
> In big busy clusters where the deletion rate is also high, a lot of blocks 
> can pile up in the datanode trash directories until an upgrade is finalized.  
> When it is finally finalized, the deletion of trash is done in the service 
> actor thread's context synchronously.  This blocks the heartbeat and can 
> cause heartbeat expiration.  
> We have seen a namenode losing hundreds of nodes after a delayed upgrade 
> finalization.  The deletion of trash directories should be made asynchronous.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9235) hdfs-native-client build getting errors when built with cmake 2.6

2015-10-13 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated HDFS-9235:
-
Status: Patch Available  (was: Open)

[~andrew.wang] and [~wheat9], referencing [the comment from 
HDFS-9170|https://issues.apache.org/jira/browse/HDFS-9170?focusedCommentId=14954188=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14954188],
 I would like to request that you provide feedback for this change.

> hdfs-native-client build getting errors when built with cmake 2.6
> -
>
> Key: HDFS-9235
> URL: https://issues.apache.org/jira/browse/HDFS-9235
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0, 2.8.0
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Minor
> Attachments: HDFS-9235.001.patch
>
>
> During the hdfs-native-client code move done as part of HDFS-9170, the cmake 
> minimum version was changed from 2.6 to 2.8. This JIRA will change the value 
> back to 2.6.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8647) Abstract BlockManager's rack policy into BlockPlacementPolicy

2015-10-13 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14955119#comment-14955119
 ] 

Brahma Reddy Battula commented on HDFS-8647:


[~mingma], rebased the patch. Kindly review. Thanks!

> Abstract BlockManager's rack policy into BlockPlacementPolicy
> -
>
> Key: HDFS-8647
> URL: https://issues.apache.org/jira/browse/HDFS-8647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-8647-001.patch, HDFS-8647-002.patch, 
> HDFS-8647-003.patch, HDFS-8647-004.patch, HDFS-8647-004.patch, 
> HDFS-8647-005.patch
>
>
> Sometimes we want the namenode to use an alternative block placement policy, 
> such as upgrade domains in HDFS-7541.
> BlockManager has built-in assumptions about rack policy in functions such as 
> useDelHint and blockHasEnoughRacks. That means that whenever we add a new block 
> placement policy, we need to modify BlockManager to account for it. Ideally 
> BlockManager should ask the BlockPlacementPolicy object instead. That would 
> allow us to provide a new BlockPlacementPolicy without changing BlockManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9139) Enable parallel JUnit tests for HDFS Pre-commit

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14955002#comment-14955002
 ] 

Hudson commented on HDFS-9139:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8618 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8618/])
HDFS-9139. Enable parallel JUnit tests for HDFS Pre-commit (Contributed 
(vinayakumarb: rev 39581e3be2aaeb1eeb7fb98b6bdecd8d4e3c7269)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferTestCase.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsTokens.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeHttpServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAAppend.java
* dev-support/test-patch.sh
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestHttpsFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestSeveralNameNodes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockScanner.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/TestSecureNNWithQJM.java
* hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestEditLogTailer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestSWebHdfsFileContextMainOperations.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/oauth2/TestClientCredentialTimeBasedTokenRefresher.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeRespectsBindHostKeys.java


> Enable parallel JUnit tests for HDFS Pre-commit 
> 
>
> Key: HDFS-9139
> URL: https://issues.apache.org/jira/browse/HDFS-9139
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Fix For: 2.8.0
>
> Attachments: HDFS-9139.01.patch, HDFS-9139.02.patch, 
> HDFS-9139.03.patch, HDFS-9139.04.patch
>
>
> Forked from HADOOP-11984. Building on the initial and significant work from 
> [~cnauroth], this JIRA tracks and supports running parallel tests for the HDFS 
> pre-commit build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9220) Reading small file (< 512 bytes) that is open for append fails due to incorrect checksum

2015-10-13 Thread Bogdan Raducanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bogdan Raducanu updated HDFS-9220:
--
Summary: Reading small file (< 512 bytes) that is open for append fails due 
to incorrect checksum  (was: ChecksumException after writing less than 512 
bytes)

> Reading small file (< 512 bytes) that is open for append fails due to 
> incorrect checksum
> 
>
> Key: HDFS-9220
> URL: https://issues.apache.org/jira/browse/HDFS-9220
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Bogdan Raducanu
>Assignee: Jagadesh Kiran N
> Attachments: test2.java
>
>
> Exception:
> 2015-10-09 14:59:40 WARN  DFSClient:1150 - fetchBlockByteRange(). Got a 
> checksum exception for /tmp/file0.05355529331575182 at 
> BP-353681639-10.10.10.10-1437493596883:blk_1075692769_9244882:0 from 
> DatanodeInfoWithStorage[10.10.10.10]:5001
> All 3 replicas cause this exception and the read fails entirely with:
> BlockMissingException: Could not obtain block: 
> BP-353681639-10.10.10.10-1437493596883:blk_1075692769_9244882 
> file=/tmp/file0.05355529331575182
> Code to reproduce is attached.
> Does not happen in 2.7.0.
> Data is read correctly if checksum verification is disabled.
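
For readers without the attachment, a minimal sketch of the reported scenario 
(assuming a reachable cluster via fs.defaultFS; the attached test2.java is 
authoritative):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SmallAppendRead {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path p = new Path("/tmp/smallAppendRead");
    // Write fewer than 512 bytes, i.e. less than one checksum chunk.
    FSDataOutputStream out = fs.create(p, true);
    out.write(new byte[100]);
    out.close();
    // Keep the file open for append, then read it from another stream.
    FSDataOutputStream appender = fs.append(p);
    try {
      FSDataInputStream in = fs.open(p);
      byte[] buf = new byte[100];
      in.readFully(0, buf);  // per the report, fails a checksum check on 2.7.1
      in.close();
    } finally {
      appender.close();
    }
  }
}
{code}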



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9160) [OIV-Doc] : Missing details of "delimited" for processor options

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14955027#comment-14955027
 ] 

Hudson commented on HDFS-9160:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1254 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1254/])
HDFS-9160. [OIV-Doc] : Missing details of 'delimited' for processor 
(vinayakumarb: rev caa711b660ce73c0f6bf97e3499d157a3a2daaea)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsImageViewer.md
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> [OIV-Doc] : Missing details of "delimited" for processor options
> 
>
> Key: HDFS-9160
> URL: https://issues.apache.org/jira/browse/HDFS-9160
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Fix For: 2.8.0
>
> Attachments: HDFS-9160.01.patch
>
>
> Missing details of "delimited" for processor options
> {noformat}
> -p|--processor processor  Specify the image processor to apply against 
> the image file. Currently valid options are Web (default), XML and 
> FileDistribution.
> {noformat}
> Add the "delimited" processor option here and explain it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9234) WebHdfs : getContentSummary() should give quota for storage types

2015-10-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14955057#comment-14955057
 ] 

Hadoop QA commented on HDFS-9234:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  23m 18s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   8m 30s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m  5s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 26s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   3m 18s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 56s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 41s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   5m 13s | The patch appears to introduce 1 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 31s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 232m 28s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 31s | Tests passed in 
hadoop-hdfs-client. |
| | | 291m  1s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Failed unit tests | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.web.TestWebHDFS |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766299/HDFS-9234-001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 5b6bae0 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12950/artifact/patchprocess/whitespace.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12950/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12950/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12950/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12950/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12950/console |


This message was automatically generated.

> WebHdfs : getContentSummary() should give quota for storage types
> -
>
> Key: HDFS-9234
> URL: https://issues.apache.org/jira/browse/HDFS-9234
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-9234-001.patch
>
>
> Currently the webhdfs API for ContentSummary gives only the name quota and 
> space quota; it does not give the storage-type quotas.
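
For contrast, the RPC-side API already exposes the per-storage-type values that 
the webhdfs JSON omits. A short sketch, assuming a 2.7+ client where 
ContentSummary carries type quotas:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.ContentSummary;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.StorageType;

public class TypeQuotaProbe {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    ContentSummary cs = fs.getContentSummary(new Path("/user/test"));
    // Name/space quotas are already in the webhdfs response...
    System.out.println("nameQuota=" + cs.getQuota()
        + " spaceQuota=" + cs.getSpaceQuota());
    // ...but the per-storage-type quota/consumed values are not (yet).
    for (StorageType t : StorageType.values()) {
      System.out.println(t + " quota=" + cs.getTypeQuota(t)
          + " consumed=" + cs.getTypeConsumed(t));
    }
  }
}
{code}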



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8630) WebHDFS : Support get/setStoragePolicy

2015-10-13 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-8630:
-
Attachment: HDFS-8630.004.patch

Thanks [~vinayrpet] for the review.
Attached the updated patch.
Please review.

> WebHDFS : Support get/setStoragePolicy 
> ---
>
> Key: HDFS-8630
> URL: https://issues.apache.org/jira/browse/HDFS-8630
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-8630.001.patch, HDFS-8630.002.patch, 
> HDFS-8630.003.patch, HDFS-8630.004.patch, HDFS-8630.patch
>
>
> Users can set and get the storage policy from the filesystem object. The same 
> operations should be allowed through the REST API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9221) HdfsServerConstants#ReplicaState#getState should avoid calling values() since it creates a temporary array

2015-10-13 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14955015#comment-14955015
 ] 

Kihwal Lee commented on HDFS-9221:
--

This is not a bug fix, but it is still a low-risk performance improvement. What 
do others think about pulling this into 2.7.2?

> HdfsServerConstants#ReplicaState#getState should avoid calling values() since 
> it creates a temporary array
> --
>
> Key: HDFS-9221
> URL: https://issues.apache.org/jira/browse/HDFS-9221
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: performance
>Affects Versions: 2.7.1
>Reporter: Staffan Friberg
>Assignee: Staffan Friberg
> Fix For: 2.8.0
>
> Attachments: HADOOP-9221.001.patch
>
>
> When the BufferDecoder in BlockListAsLongs converts the stored value to a 
> ReplicaState enum, it calls ReplicaState.getState(int); unfortunately this 
> method creates a new ReplicaState[] for each call, since it calls 
> ReplicaState.values().
> This patch creates a cached version of the values and thus avoids all 
> allocation when doing the conversion.
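
The idiom is small enough to show inline. A simplified sketch of the 
cached-values pattern the description refers to (the enum constants are 
abbreviated here, and the attached patch is authoritative):

{code}
public enum ReplicaState {
  FINALIZED, RBW, RWR, RUR, TEMPORARY;

  // values() clones the backing array on every call; cache one copy.
  private static final ReplicaState[] CACHED_VALUES = values();

  public static ReplicaState getState(int v) {
    return CACHED_VALUES[v];  // no per-call array allocation
  }
}
{code}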



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8898) Create API and command-line argument to get quota without need to get file and directory counts

2015-10-13 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14955068#comment-14955068
 ] 

Kihwal Lee commented on HDFS-8898:
--

+1 to the approach.

> Create API and command-line argument to get quota without need to get file 
> and directory counts
> ---
>
> Key: HDFS-8898
> URL: https://issues.apache.org/jira/browse/HDFS-8898
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Reporter: Joep Rottinghuis
> Attachments: HDFS-8898.patch
>
>
> On large directory structures it takes significant time to iterate through 
> the file and directory counts recursively to get a complete ContentSummary.
> When you want to just check for the quota on a higher level directory it 
> would be good to have an option to skip the file and directory counts.
> Moreover, currently you can only check the quota if you have access to all 
> the directories underneath. For example, if I have a large home directory 
> under /user/joep and I host some files for another user in a sub-directory, 
> the moment they create an unreadable sub-directory under my home I can no 
> longer check what my quota is. Understood that I cannot check the current 
> file counts unless I can iterate through all the usage, but for 
> administrative purposes it is nice to be able to get the current quota 
> setting on a directory without the need to iterate through and run into 
> permission issues on sub-directories.
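
One plausible shape for such an API, as a sketch; the names here (QuotaUsage, 
getQuotaUsage) are assumptions for illustration, and the attached 
HDFS-8898.patch is authoritative:

{code}
// Hypothetical sketch: a quota-only query that needs no recursive
// file/directory counting and no read access below the target directory.
class QuotaUsage {
  private final long nameQuota;   // -1 if unset
  private final long spaceQuota;  // -1 if unset

  QuotaUsage(long nameQuota, long spaceQuota) {
    this.nameQuota = nameQuota;
    this.spaceQuota = spaceQuota;
  }

  public long getNameQuota() { return nameQuota; }
  public long getSpaceQuota() { return spaceQuota; }
}

interface QuotaReader {
  /** Return quota settings for a path without computing a ContentSummary. */
  QuotaUsage getQuotaUsage(org.apache.hadoop.fs.Path path)
      throws java.io.IOException;
}
{code}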



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8676) Delayed rolling upgrade finalization can cause heartbeat expiration

2015-10-13 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14955081#comment-14955081
 ] 

Kihwal Lee commented on HDFS-8676:
--

+1

> Delayed rolling upgrade finalization can cause heartbeat expiration
> ---
>
> Key: HDFS-8676
> URL: https://issues.apache.org/jira/browse/HDFS-8676
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Walter Su
>Priority: Critical
> Attachments: HDFS-8676.01.patch, HDFS-8676.02.patch
>
>
> In big busy clusters where the deletion rate is also high, a lot of blocks 
> can pile up in the datanode trash directories until an upgrade is finalized.  
> When it is finally finalized, the deletion of trash is done in the service 
> actor thread's context synchronously.  This blocks the heartbeat and can 
> cause heartbeat expiration.  
> We have seen a namenode losing hundreds of nodes after a delayed upgrade 
> finalization.  The deletion of trash directories should be made asynchronous.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9139) Enable parallel JUnit tests for HDFS Pre-commit

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14955084#comment-14955084
 ] 

Hudson commented on HDFS-9139:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1255 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1255/])
HDFS-9139. Enable parallel JUnit tests for HDFS Pre-commit (Contributed 
(vinayakumarb: rev 39581e3be2aaeb1eeb7fb98b6bdecd8d4e3c7269)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/oauth2/TestClientCredentialTimeBasedTokenRefresher.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockScanner.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsTokens.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAAppend.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeHttpServer.java
* dev-support/test-patch.sh
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestEditLogTailer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestSWebHdfsFileContextMainOperations.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestSeveralNameNodes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestHttpsFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeRespectsBindHostKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/TestSecureNNWithQJM.java
* hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferTestCase.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Enable parallel JUnit tests for HDFS Pre-commit 
> 
>
> Key: HDFS-9139
> URL: https://issues.apache.org/jira/browse/HDFS-9139
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Fix For: 2.8.0
>
> Attachments: HDFS-9139.01.patch, HDFS-9139.02.patch, 
> HDFS-9139.03.patch, HDFS-9139.04.patch
>
>
> Forked from HADOOP-11984. Building on the initial and significant work from 
> [~cnauroth], this JIRA tracks and supports running parallel tests for the HDFS 
> pre-commit build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8676) Delayed rolling upgrade finalization can cause heartbeat expiration

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14955135#comment-14955135
 ] 

Hudson commented on HDFS-8676:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #520 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/520/])
HDFS-8676. Delayed rolling upgrade finalization can cause heartbeat (kihwal: 
rev 5b43db47a313decccdcca8f45c5708aab46396df)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java


> Delayed rolling upgrade finalization can cause heartbeat expiration
> ---
>
> Key: HDFS-8676
> URL: https://issues.apache.org/jira/browse/HDFS-8676
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Walter Su
>Priority: Critical
> Attachments: HDFS-8676.01.patch, HDFS-8676.02.patch
>
>
> In big busy clusters where the deletion rate is also high, a lot of blocks 
> can pile up in the datanode trash directories until an upgrade is finalized.  
> When it is finally finalized, the deletion of trash is done in the service 
> actor thread's context synchronously.  This blocks the heartbeat and can 
> cause heartbeat expiration.  
> We have seen a namenode losing hundreds of nodes after a delayed upgrade 
> finalization.  The deletion of trash directories should be made asynchronous.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7645) Rolling upgrade is restoring blocks from trash multiple times

2015-10-13 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14955138#comment-14955138
 ] 

Kihwal Lee commented on HDFS-7645:
--

We should fix this in 2.7.2. That means pulling HDFS-8656 as well. Any 
objections?

> Rolling upgrade is restoring blocks from trash multiple times
> -
>
> Key: HDFS-7645
> URL: https://issues.apache.org/jira/browse/HDFS-7645
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.6.0
>Reporter: Nathan Roberts
>Assignee: Keisuke Ogiwara
> Fix For: 2.8.0
>
> Attachments: HDFS-7645.01.patch, HDFS-7645.02.patch, 
> HDFS-7645.03.patch, HDFS-7645.04.patch, HDFS-7645.05.patch, 
> HDFS-7645.06.patch, HDFS-7645.07.patch
>
>
> When performing an HDFS rolling upgrade, the trash directory is getting 
> restored twice when under normal circumstances it shouldn't need to be 
> restored at all. iiuc, the only time these blocks should be restored is if we 
> need to rollback a rolling upgrade. 
> On a busy cluster, this can cause significant and unnecessary block churn 
> both on the datanodes, and more importantly in the namenode.
> The two times this happens are:
> 1) restart of DN onto new software
> {code}
>   private void doTransition(DataNode datanode, StorageDirectory sd,
>   NamespaceInfo nsInfo, StartupOption startOpt) throws IOException {
> if (startOpt == StartupOption.ROLLBACK && sd.getPreviousDir().exists()) {
>   Preconditions.checkState(!getTrashRootDir(sd).exists(),
>   sd.getPreviousDir() + " and " + getTrashRootDir(sd) + " should not 
> " +
>   " both be present.");
>   doRollback(sd, nsInfo); // rollback if applicable
> } else {
>   // Restore all the files in the trash. The restored files are retained
>   // during rolling upgrade rollback. They are deleted during rolling
>   // upgrade downgrade.
>   int restored = restoreBlockFilesFromTrash(getTrashRootDir(sd));
>   LOG.info("Restored " + restored + " block files from trash.");
> }
> {code}
> 2) When heartbeat response no longer indicates a rollingupgrade is in progress
> {code}
>   /**
>* Signal the current rolling upgrade status as indicated by the NN.
>* @param inProgress true if a rolling upgrade is in progress
>*/
>   void signalRollingUpgrade(boolean inProgress) throws IOException {
> String bpid = getBlockPoolId();
> if (inProgress) {
>   dn.getFSDataset().enableTrash(bpid);
>   dn.getFSDataset().setRollingUpgradeMarker(bpid);
> } else {
>   dn.getFSDataset().restoreTrash(bpid);
>   dn.getFSDataset().clearRollingUpgradeMarker(bpid);
> }
>   }
> {code}
> HDFS-6800 and HDFS-6981 modified this behavior, making it not completely 
> clear whether this is somehow intentional. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8676) Delayed rolling upgrade finalization can cause heartbeat expiration and write failures

2015-10-13 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-8676:
-
Summary: Delayed rolling upgrade finalization can cause heartbeat 
expiration and write failures  (was: Delayed rolling upgrade finalization can 
cause heartbeat expiration)

> Delayed rolling upgrade finalization can cause heartbeat expiration and write 
> failures
> --
>
> Key: HDFS-8676
> URL: https://issues.apache.org/jira/browse/HDFS-8676
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Walter Su
>Priority: Critical
> Attachments: HDFS-8676.01.patch, HDFS-8676.02.patch
>
>
> In big busy clusters where the deletion rate is also high, a lot of blocks 
> can pile up in the datanode trash directories until an upgrade is finalized.  
> When it is finally finalized, the deletion of trash is done in the service 
> actor thread's context synchronously.  This blocks the heartbeat and can 
> cause heartbeat expiration.  
> We have seen a namenode losing hundreds of nodes after a delayed upgrade 
> finalization.  The deletion of trash directories should be made asynchronous.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8676) Delayed rolling upgrade finalization can cause heartbeat expiration

2015-10-13 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14955157#comment-14955157
 ] 

Kihwal Lee commented on HDFS-8676:
--

We have come to know that this bug not only causes heartbeat expiration, but 
also fails writes. Since the deletion is executed by the actor thread 
synchronously, incremental block reports are blocked while deletion is in 
progress. File closures or adding blocks fail if deletion takes a long time.
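
The direction the description calls for, as a minimal self-contained sketch; a 
single-threaded executor stands in here for however the committed patch 
actually hands the deletion off:

{code}
import java.io.File;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TrashCleaner {
  // One background thread, so the heartbeat/IBR actor thread never blocks
  // on a large recursive delete after finalization.
  private final ExecutorService deleter = Executors.newSingleThreadExecutor();

  public void clearTrashAsync(final File trashRoot) {
    deleter.submit(new Runnable() {
      @Override
      public void run() {
        deleteRecursively(trashRoot);
      }
    });
  }

  private void deleteRecursively(File dir) {
    File[] children = dir.listFiles();
    if (children != null) {
      for (File c : children) {
        deleteRecursively(c);
      }
    }
    dir.delete();
  }
}
{code}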

> Delayed rolling upgrade finalization can cause heartbeat expiration
> ---
>
> Key: HDFS-8676
> URL: https://issues.apache.org/jira/browse/HDFS-8676
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Walter Su
>Priority: Critical
> Attachments: HDFS-8676.01.patch, HDFS-8676.02.patch
>
>
> In big busy clusters where the deletion rate is also high, a lot of blocks 
> can pile up in the datanode trash directories until an upgrade is finalized.  
> When it is finally finalized, the deletion of trash is done in the service 
> actor thread's context synchronously.  This blocks the heartbeat and can 
> cause heartbeat expiration.  
> We have seen a namenode losing hundreds of nodes after a delayed upgrade 
> finalization.  The deletion of trash directories should be made asynchronous.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8630) WebHDFS : Support get/setStoragePolicy

2015-10-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14955176#comment-14955176
 ] 

Hadoop QA commented on HDFS-8630:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  19m 51s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   7m 53s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 20s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 33s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 39s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 30s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m  9s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  49m 54s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 31s | Tests passed in 
hadoop-hdfs-client. |
| | | 101m 21s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.datanode.TestBPOfferService |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766335/HDFS-8630.004.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 39581e3 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12951/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12951/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12951/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12951/console |


This message was automatically generated.

> WebHDFS : Support get/setStoragePolicy 
> ---
>
> Key: HDFS-8630
> URL: https://issues.apache.org/jira/browse/HDFS-8630
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-8630.001.patch, HDFS-8630.002.patch, 
> HDFS-8630.003.patch, HDFS-8630.004.patch, HDFS-8630.patch
>
>
> Users can set and get the storage policy from the filesystem object. The same 
> operations should be allowed through the REST API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7163) WebHdfsFileSystem should retry reads in a similar way as the open

2015-10-13 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated HDFS-7163:
-
Summary: WebHdfsFileSystem should retry reads in a similar way as the open  
(was: port read retry logic from 0.23's WebHdfsFilesystem#WebHdfsInputStream to 
2.x)

> WebHdfsFileSystem should retry reads in a similar way as the open
> -
>
> Key: HDFS-7163
> URL: https://issues.apache.org/jira/browse/HDFS-7163
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0, 2.5.1
>Reporter: Eric Payne
>Assignee: Eric Payne
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9047) deprecate libwebhdfs in branch-2; remove from trunk

2015-10-13 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14955240#comment-14955240
 ] 

Haohui Mai commented on HDFS-9047:
--

bq. libwebhdfs fills a purpose that no other C library currently fills. It can 
be used without the same version of Hadoop jars on the system as the server 
code,...

Hi [~cmccabe], now that HDFS-9170 has been committed, libhdfs should only depend 
on the client jar. Do you think libhdfs can now satisfy the above use case, so 
that it is possible to remove libwebhdfs?

> deprecate libwebhdfs in branch-2; remove from trunk
> ---
>
> Key: HDFS-9047
> URL: https://issues.apache.org/jira/browse/HDFS-9047
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Allen Wittenauer
>
> This library is basically a mess:
> * It's not part of the mvn package
> * It's missing functionality and barely maintained
> * It's not in the precommit runs so doesn't get exercised regularly
> * It's not part of the unit tests (at least, that I can see)
> * It isn't documented in any official documentation
> But most importantly:  
> * It fails at its primary mission of being pure C (HDFS-3917 is STILL open)
> Let's cut our losses and just remove it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9139) Enable parallel JUnit tests for HDFS Pre-commit

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14955260#comment-14955260
 ] 

Hudson commented on HDFS-9139:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #531 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/531/])
HDFS-9139. Enable parallel JUnit tests for HDFS Pre-commit (Contributed 
(vinayakumarb: rev 39581e3be2aaeb1eeb7fb98b6bdecd8d4e3c7269)
* hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestSeveralNameNodes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAAppend.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferTestCase.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsTokens.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/oauth2/TestClientCredentialTimeBasedTokenRefresher.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestSWebHdfsFileContextMainOperations.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockScanner.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeRespectsBindHostKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeHttpServer.java
* dev-support/test-patch.sh
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestHttpsFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/TestSecureNNWithQJM.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestEditLogTailer.java


> Enable parallel JUnit tests for HDFS Pre-commit 
> 
>
> Key: HDFS-9139
> URL: https://issues.apache.org/jira/browse/HDFS-9139
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Fix For: 2.8.0
>
> Attachments: HDFS-9139.01.patch, HDFS-9139.02.patch, 
> HDFS-9139.03.patch, HDFS-9139.04.patch
>
>
> Forked from HADOOP-11984. Building on the initial and significant work from 
> [~cnauroth], this JIRA tracks and supports running parallel tests for the HDFS 
> pre-commit build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8676) Delayed rolling upgrade finalization can cause heartbeat expiration and write failures

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14955259#comment-14955259
 ] 

Hudson commented on HDFS-8676:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #531 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/531/])
HDFS-8676. Delayed rolling upgrade finalization can cause heartbeat (kihwal: 
rev 5b43db47a313decccdcca8f45c5708aab46396df)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java


> Delayed rolling upgrade finalization can cause heartbeat expiration and write 
> failures
> --
>
> Key: HDFS-8676
> URL: https://issues.apache.org/jira/browse/HDFS-8676
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Walter Su
>Priority: Critical
> Attachments: HDFS-8676.01.patch, HDFS-8676.02.patch
>
>
> In big busy clusters where the deletion rate is also high, a lot of blocks 
> can pile up in the datanode trash directories until an upgrade is finalized.  
> When it is finally finalized, the deletion of trash is done in the service 
> actor thread's context synchronously.  This blocks the heartbeat and can 
> cause heartbeat expiration.  
> We have seen a namenode losing hundreds of nodes after a delayed upgrade 
> finalization.  The deletion of trash directories should be made asynchronous.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9237) NPE at TestDataNodeVolumeFailureToleration#tearDown

2015-10-13 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HDFS-9237:
--

 Summary: NPE at TestDataNodeVolumeFailureToleration#tearDown
 Key: HDFS-9237
 URL: https://issues.apache.org/jira/browse/HDFS-9237
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


{noformat}
Stack Trace:
java.lang.NullPointerException: null
at 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration.tearDown(TestDataNodeVolumeFailureToleration.java:79)
{noformat}
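
The usual shape of such a fix, as a sketch (the eventual HDFS-9237 patch is 
authoritative): guard the cluster handle in tearDown, since a setUp failure can 
leave it null.

{code}
// Sketch; assumes the test class holds a MiniDFSCluster field named "cluster"
// and imports org.junit.After.
@After
public void tearDown() throws Exception {
  if (cluster != null) {
    cluster.shutdown();
    cluster = null;
  }
}
{code}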



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9237) NPE at TestDataNodeVolumeFailureToleration#tearDown

2015-10-13 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14955305#comment-14955305
 ] 

Brahma Reddy Battula commented on HDFS-9237:


Uploaded the patch. Kindly review.

> NPE at TestDataNodeVolumeFailureToleration#tearDown
> ---
>
> Key: HDFS-9237
> URL: https://issues.apache.org/jira/browse/HDFS-9237
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9237.patch
>
>
> {noformat}
> Stack Trace:
> java.lang.NullPointerException: null
> at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration.tearDown(TestDataNodeVolumeFailureToleration.java:79)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8855) Webhdfs client leaks active NameNode connections

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14955330#comment-14955330
 ] 

Hudson commented on HDFS-8855:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2467 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2467/])
HDFS-8855. Webhdfs client leaks active NameNode connections. Contributed 
(jitendra: rev 84cbd72afda6344e220526fac5c560f00f84e374)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/TestDataNodeUGIProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/DataNodeUGIProvider.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/WebHdfsHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java


> Webhdfs client leaks active NameNode connections
> 
>
> Key: HDFS-8855
> URL: https://issues.apache.org/jira/browse/HDFS-8855
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Bob Hansen
>Assignee: Xiaobing Zhou
> Fix For: 2.8.0
>
> Attachments: HDFS-8855.005.patch, HDFS-8855.006.patch, 
> HDFS-8855.007.patch, HDFS-8855.1.patch, HDFS-8855.2.patch, HDFS-8855.3.patch, 
> HDFS-8855.4.patch, HDFS_8855.prototype.patch
>
>
> The attached script simulates a process opening ~50 files via webhdfs and 
> performing random reads.  Note that there are at most 50 concurrent reads, 
> and all webhdfs sessions are kept open.  Each read is ~64k at a random 
> position.  
> The script periodically (once per second) shells into the NameNode and 
> produces a summary of the socket states.  For my test cluster with 5 nodes, 
> it took ~30 seconds for the NameNode to reach ~25000 active connections and 
> fail.
> It appears that each request to the webhdfs client is opening a new 
> connection to the NameNode and keeping it open after the request is complete. 
>  If the process continues to run, eventually (~30-60 seconds), all of the 
> open connections are closed and the NameNode recovers.  
> This smells like SoftReference reaping.  Are we using SoftReferences in the 
> webhdfs client to cache NameNode connections but never re-using them?
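
A sketch of the caching direction the committed DataNodeUGIProvider change (see 
the file list above) points at; the key type and the 10-minute TTL here are 
assumptions for illustration, not the committed defaults:

{code}
import java.util.concurrent.Callable;
import java.util.concurrent.TimeUnit;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import org.apache.hadoop.security.UserGroupInformation;

public class UgiCacheSketch {
  // Re-use one UGI per token instead of constructing a new one (and the
  // NameNode work behind it) on every webhdfs request.
  private final Cache<String, UserGroupInformation> ugiCache =
      CacheBuilder.newBuilder()
          .expireAfterAccess(10, TimeUnit.MINUTES)  // assumed TTL
          .build();

  public UserGroupInformation getUgi(final String tokenId) throws Exception {
    return ugiCache.get(tokenId, new Callable<UserGroupInformation>() {
      @Override
      public UserGroupInformation call() {
        return UserGroupInformation.createRemoteUser(tokenId);
      }
    });
  }
}
{code}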



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8855) Webhdfs client leaks active NameNode connections

2015-10-13 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDFS-8855:
---
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

> Webhdfs client leaks active NameNode connections
> 
>
> Key: HDFS-8855
> URL: https://issues.apache.org/jira/browse/HDFS-8855
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Bob Hansen
>Assignee: Xiaobing Zhou
> Fix For: 2.8.0
>
> Attachments: HDFS-8855.005.patch, HDFS-8855.006.patch, 
> HDFS-8855.007.patch, HDFS-8855.1.patch, HDFS-8855.2.patch, HDFS-8855.3.patch, 
> HDFS-8855.4.patch, HDFS_8855.prototype.patch
>
>
> The attached script simulates a process opening ~50 files via webhdfs and 
> performing random reads.  Note that there are at most 50 concurrent reads, 
> and all webhdfs sessions are kept open.  Each read is ~64k at a random 
> position.  
> The script periodically (once per second) shells into the NameNode and 
> produces a summary of the socket states.  For my test cluster with 5 nodes, 
> it took ~30 seconds for the NameNode to reach ~25000 active connections and 
> fail.
> It appears that each request to the webhdfs client is opening a new 
> connection to the NameNode and keeping it open after the request is complete. 
>  If the process continues to run, eventually (~30-60 seconds), all of the 
> open connections are closed and the NameNode recovers.  
> This smells like SoftReference reaping.  Are we using SoftReferences in the 
> webhdfs client to cache NameNode connections but never re-using them?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8766) Implement a libhdfs(3) compatible API

2015-10-13 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14955252#comment-14955252
 ] 

Haohui Mai commented on HDFS-8766:
--

I'd like to proceed with this patch after HDFS-9207. The reason is that this 
patch contains both the integration tests and the hdfs.h file, which duplicates 
existing code in libhdfs. It makes sense to land HDFS-9207 first and then 
minimize this patch.

> Implement a libhdfs(3) compatible API
> -
>
> Key: HDFS-8766
> URL: https://issues.apache.org/jira/browse/HDFS-8766
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-8766.HDFS-8707.000.patch, 
> HDFS-8766.HDFS-8707.001.patch, HDFS-8766.HDFS-8707.002.patch, 
> HDFS-8766.HDFS-8707.003.patch, HDFS-8766.HDFS-8707.004.patch, 
> HDFS-8766.HDFS-8707.005.patch
>
>
> Add a synchronous API that is compatible with the hdfs.h header used in 
> libhdfs and libhdfs3.  This will make it possible for projects using 
> libhdfs/libhdfs3 to relink against libhdfspp with minimal changes.
> This also provides a pure C interface that can be linked into projects that 
> aren't built in C++11 mode for various reasons but use the same compiler. It 
> also allows many other programming languages to access libhdfspp through 
> built-in FFI interfaces.
> The libhdfs API is very similar to the POSIX file API, which makes it easier 
> to modify programs built on POSIX filesystem calls to access HDFS.
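
As a rough illustration of the compatibility goal described above (this is not 
code from the patch; the path and error handling are placeholders), a program 
written against the classic hdfs.h entry points should relink against libhdfspp 
unchanged:
{code:c}
#include <fcntl.h>   /* O_RDONLY */
#include <stdio.h>
#include "hdfs.h"    /* same header shipped by libhdfs/libhdfs3 */

int main(void) {
  /* "default" picks up fs.defaultFS from the loaded configuration. */
  hdfsFS fs = hdfsConnect("default", 0);
  if (!fs) return 1;

  /* Placeholder path; flags/buffer/replication/blocksize as in libhdfs. */
  hdfsFile file = hdfsOpenFile(fs, "/tmp/example.txt", O_RDONLY, 0, 0, 0);
  if (!file) { hdfsDisconnect(fs); return 1; }

  char buf[4096];
  tSize n = hdfsRead(fs, file, buf, sizeof(buf));
  printf("read %d bytes\n", (int) n);

  hdfsCloseFile(fs, file);
  hdfsDisconnect(fs);
  return n < 0;
}
{code}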



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9160) [OIV-Doc] : Missing details of "delimited" for processor options

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14955283#comment-14955283
 ] 

Hudson commented on HDFS-9160:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #489 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/489/])
HDFS-9160. [OIV-Doc] : Missing details of 'delimited' for processor 
(vinayakumarb: rev caa711b660ce73c0f6bf97e3499d157a3a2daaea)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsImageViewer.md
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> [OIV-Doc] : Missing details of "delimited" for processor options
> 
>
> Key: HDFS-9160
> URL: https://issues.apache.org/jira/browse/HDFS-9160
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Fix For: 2.8.0
>
> Attachments: HDFS-9160.01.patch
>
>
> Missing details of "delimited" for processor options
> {noformat}
> -p|--processor processor  Specify the image processor to apply against 
> the image file. Currently valid options are Web (default), XML and 
> FileDistribution.
> {noformat}
> Add the delimited option here and explain it.
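
For reference, the Delimited processor is selected the same way as the other 
processors, via -p; a typical invocation (the fsimage and output file names 
here are placeholders) looks like:
{noformat}
hdfs oiv -p Delimited -delimiter "," -i fsimage_0000000000000000024 -o fsimage.csv
{noformat}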



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9237) NPE at TestDataNodeVolumeFailureToleration#tearDown

2015-10-13 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-9237:
---
Attachment: HDFS-9237.patch

> NPE at TestDataNodeVolumeFailureToleration#tearDown
> ---
>
> Key: HDFS-9237
> URL: https://issues.apache.org/jira/browse/HDFS-9237
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9237.patch
>
>
> {noformat}
> Stack Trace:
> java.lang.NullPointerException: null
> at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration.tearDown(TestDataNodeVolumeFailureToleration.java:79)
> {noformat}
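
The usual fix for this kind of tearDown NPE is to guard the shutdown; a minimal 
sketch, assuming the test keeps its MiniDFSCluster in a field named cluster (the 
attached patch may differ):
{code:java}
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.After;

public class TestDataNodeVolumeFailureTolerationSketch {
  private MiniDFSCluster cluster;

  @After
  public void tearDown() throws Exception {
    // If setUp failed before the cluster was created, cluster is still null;
    // calling cluster.shutdown() unconditionally is the NPE in the stack trace.
    if (cluster != null) {
      cluster.shutdown();
      cluster = null;
    }
  }
}
{code}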



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9157) [OEV and OIV] : Unnecessary parsing for mandatory arguments if "-h" option is specified as the only option

2015-10-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14955291#comment-14955291
 ] 

Hadoop QA commented on HDFS-9157:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  20m 31s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   8m 48s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 29s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 27s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 35s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 37s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 48s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 35s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  54m 12s | Tests failed in hadoop-hdfs. |
| | | 105m 42s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestReplaceDatanodeOnFailure |
|   | hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766345/HDFS-9157_4.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 5b43db4 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12952/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12952/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12952/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12952/console |


This message was automatically generated.

> [OEV and OIV] : Unnecessary parsing for mandatory arguments if "-h" option 
> is specified as the only option
> ---
>
> Key: HDFS-9157
> URL: https://issues.apache.org/jira/browse/HDFS-9157
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9157_1.patch, HDFS-9157_2.patch, HDFS-9157_3.patch, 
> HDFS-9157_4.patch
>
>
> In both tools, if "-h" is specified as the only option, they throw an error 
> saying that input and output are not specified.
> {noformat}
> master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs oev -h
> Error parsing command-line options: Missing required options: o, i
> Usage: bin/hdfs oev [OPTIONS] -i INPUT_FILE -o OUTPUT_FILE
> {noformat}
> In the code, parsing happens before the "-h" option is checked.
> Code can be added to return early after an initial check. (Note: with a single 
> argument the flag is at argv[0], not argv[1], and Java string comparison needs 
> equals rather than ==.)
> {code}
> if (argv.length == 1 && "-h".equals(argv[0])) {
>   printHelp();
>   return 0;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8676) Delayed rolling upgrade finalization can cause heartbeat expiration and write failures

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14955197#comment-14955197
 ] 

Hudson commented on HDFS-8676:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2466 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2466/])
HDFS-8676. Delayed rolling upgrade finalization can cause heartbeat (kihwal: 
rev 5b43db47a313decccdcca8f45c5708aab46396df)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Delayed rolling upgrade finalization can cause heartbeat expiration and write 
> failures
> --
>
> Key: HDFS-8676
> URL: https://issues.apache.org/jira/browse/HDFS-8676
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Walter Su
>Priority: Critical
> Attachments: HDFS-8676.01.patch, HDFS-8676.02.patch
>
>
> In big busy clusters where the deletion rate is also high, a lot of blocks 
> can pile up in the datanode trash directories until an upgrade is finalized.  
> When it is finally finalized, the deletion of trash is done synchronously in 
> the service actor thread's context.  This blocks the heartbeat and can 
> cause heartbeat expiration.  
> We have seen a namenode losing hundreds of nodes after a delayed upgrade 
> finalization.  The deletion of trash directories should be made asynchronous.
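
A minimal sketch of what moving the deletion off the heartbeat path could look 
like, assuming a dedicated single-threaded executor (class and method names are 
illustrative, not the actual patch):
{code:java}
import java.io.File;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.hadoop.fs.FileUtil;

class TrashCleaner {
  // One background thread so trash deletion never blocks the
  // service actor thread that sends heartbeats.
  private final ExecutorService deleter = Executors.newSingleThreadExecutor();

  void deleteTrashAsync(final File trashDir) {
    deleter.execute(new Runnable() {
      @Override
      public void run() {
        // Recursively removes the directory; a slow delete now only
        // delays the cleanup itself, not the heartbeats.
        FileUtil.fullyDelete(trashDir);
      }
    });
  }
}
{code}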



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9235) hdfs-native-client build getting errors when built with cmake 2.6

2015-10-13 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14955230#comment-14955230
 ] 

Haohui Mai commented on HDFS-9235:
--

+1 pending jenkins.

> hdfs-native-client build getting errors when built with cmake 2.6
> -
>
> Key: HDFS-9235
> URL: https://issues.apache.org/jira/browse/HDFS-9235
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0, 2.8.0
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Minor
> Attachments: HDFS-9235.001.patch
>
>
> During the hdfs-native-client code move done as part of HDFS-9170, the cmake 
> minimum version was changed from 2.6 to 2.8. This JIRA will change the value 
> back to 2.6.
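
Per the description, the change is confined to the minimum-version declaration 
at the top of the CMakeLists.txt, presumably restoring:
{code}
cmake_minimum_required(VERSION 2.6)
{code}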



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8855) Webhdfs client leaks active NameNode connections

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14955270#comment-14955270
 ] 

Hudson commented on HDFS-8855:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #521 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/521/])
HDFS-8855. Webhdfs client leaks active NameNode connections. Contributed 
(jitendra: rev 84cbd72afda6344e220526fac5c560f00f84e374)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/TestDataNodeUGIProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/WebHdfsHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/DataNodeUGIProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


> Webhdfs client leaks active NameNode connections
> 
>
> Key: HDFS-8855
> URL: https://issues.apache.org/jira/browse/HDFS-8855
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Bob Hansen
>Assignee: Xiaobing Zhou
> Fix For: 2.8.0
>
> Attachments: HDFS-8855.005.patch, HDFS-8855.006.patch, 
> HDFS-8855.007.patch, HDFS-8855.1.patch, HDFS-8855.2.patch, HDFS-8855.3.patch, 
> HDFS-8855.4.patch, HDFS_8855.prototype.patch
>
>
> The attached script simulates a process opening ~50 files via webhdfs and 
> performing random reads.  Note that there are at most 50 concurrent reads, 
> and all webhdfs sessions are kept open.  Each read is ~64k at a random 
> position.  
> The script periodically (once per second) shells into the NameNode and 
> produces a summary of the socket states.  For my test cluster with 5 nodes, 
> it took ~30 seconds for the NameNode to reach ~25000 active connections and 
> fail.
> It appears that each request to the webhdfs client is opening a new 
> connection to the NameNode and keeping it open after the request is complete. 
>  If the process continues to run, eventually (~30-60 seconds), all of the 
> open connections are closed and the NameNode recovers.  
> This smells like SoftReference reaping.  Are we using SoftReferences in the 
> webhdfs client to cache NameNode connections but never re-using them?
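
The file list above includes DataNodeUGIProvider and a new key in 
hdfs-default.xml, which suggests caching with an expiry rather than per-request 
construction. A hedged sketch of that general pattern (the cache key, value 
type, and timeout here are assumptions, not the actual patch):
{code:java}
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import org.apache.hadoop.security.UserGroupInformation;

class UgiCacheSketch {
  // Entries expire, so a burst of requests reuses one UGI (and its
  // NameNode connection) instead of building a fresh one per request.
  private final Cache<String, UserGroupInformation> cache =
      CacheBuilder.newBuilder()
          .expireAfterAccess(10, TimeUnit.MINUTES)  // assumed timeout
          .build();

  UserGroupInformation ugiFor(final String userName) throws ExecutionException {
    return cache.get(userName, new Callable<UserGroupInformation>() {
      @Override
      public UserGroupInformation call() {
        return UserGroupInformation.createRemoteUser(userName);
      }
    });
  }
}
{code}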



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9237) NPE at TestDataNodeVolumeFailureToleration#tearDown

2015-10-13 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-9237:
---
Status: Patch Available  (was: Open)

> NPE at TestDataNodeVolumeFailureToleration#tearDown
> ---
>
> Key: HDFS-9237
> URL: https://issues.apache.org/jira/browse/HDFS-9237
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9237.patch
>
>
> {noformat}
> Stack Trace:
> java.lang.NullPointerException: null
> at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration.tearDown(TestDataNodeVolumeFailureToleration.java:79)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9236) Add sanity check for block size during block recovery

2015-10-13 Thread Tony Wu (JIRA)
Tony Wu created HDFS-9236:
-

 Summary: Add sanity check for block size during block recovery
 Key: HDFS-9236
 URL: https://issues.apache.org/jira/browse/HDFS-9236
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.1
Reporter: Tony Wu
Assignee: Tony Wu


Ran into an issue while running tests against faulty DataNode code. 

Currently in DataNode.java:
{code:java}
  /** Block synchronization */
  void syncBlock(RecoveringBlock rBlock,
 List<BlockRecord> syncList) throws IOException {
…

// Calculate the best available replica state.
ReplicaState bestState = ReplicaState.RWR;
…

// Calculate list of nodes that will participate in the recovery
// and the new block size
    List<BlockRecord> participatingList = new ArrayList<>();
final ExtendedBlock newBlock = new ExtendedBlock(bpid, blockId,
-1, recoveryId);
switch(bestState) {
…
case RBW:
case RWR:
  long minLength = Long.MAX_VALUE;
  for(BlockRecord r : syncList) {
ReplicaState rState = r.rInfo.getOriginalReplicaState();
if(rState == bestState) {
  minLength = Math.min(minLength, r.rInfo.getNumBytes());
  participatingList.add(r);
}
  }
  newBlock.setNumBytes(minLength);
  break;
…
}
…
nn.commitBlockSynchronization(block,
newBlock.getGenerationStamp(), newBlock.getNumBytes(), true, false,
datanodes, storages);
  }
{code}

This code is called by the DN coordinating the block recovery. In the above 
case, it is possible that none of the rStates (reported by the DNs holding 
copies of the replica being recovered) match the bestState. This can be caused 
either by faulty DN code or by stale/modified/corrupted files on the DN. When 
this happens the DN ends up reporting a minLength of Long.MAX_VALUE.

Unfortunately there is no check on the NN for replica length. See 
FSNamesystem.java:
{code:java}
  void commitBlockSynchronization(ExtendedBlock oldBlock,
  long newgenerationstamp, long newlength,
  boolean closeFile, boolean deleteblock, DatanodeID[] newtargets,
  String[] newtargetstorages) throws IOException {
…

  if (deleteblock) {
Block blockToDel = ExtendedBlock.getLocalBlock(oldBlock);
boolean remove = iFile.removeLastBlock(blockToDel) != null;
if (remove) {
  blockManager.removeBlock(storedBlock);
}
  } else {
// update last block
if(!copyTruncate) {
  storedBlock.setGenerationStamp(newgenerationstamp);
  
  // XXX block length is updated without any check <<<

[jira] [Updated] (HDFS-9236) Missing sanity check for block size during block recovery

2015-10-13 Thread Tony Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tony Wu updated HDFS-9236:
--
Summary: Missing sanity check for block size during block recovery  (was: 
Add sanity check for block size during block recovery)

> Missing sanity check for block size during block recovery
> -
>
> Key: HDFS-9236
> URL: https://issues.apache.org/jira/browse/HDFS-9236
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Tony Wu
>Assignee: Tony Wu
>
> Ran into an issue while running tests against faulty DataNode code. 
> Currently in DataNode.java:
> {code:java}
>   /** Block synchronization */
>   void syncBlock(RecoveringBlock rBlock,
>  List<BlockRecord> syncList) throws IOException {
> …
> // Calculate the best available replica state.
> ReplicaState bestState = ReplicaState.RWR;
> …
> // Calculate list of nodes that will participate in the recovery
> // and the new block size
> List<BlockRecord> participatingList = new ArrayList<>();
> final ExtendedBlock newBlock = new ExtendedBlock(bpid, blockId,
> -1, recoveryId);
> switch(bestState) {
> …
> case RBW:
> case RWR:
>   long minLength = Long.MAX_VALUE;
>   for(BlockRecord r : syncList) {
> ReplicaState rState = r.rInfo.getOriginalReplicaState();
> if(rState == bestState) {
>   minLength = Math.min(minLength, r.rInfo.getNumBytes());
>   participatingList.add(r);
> }
>   }
>   newBlock.setNumBytes(minLength);
>   break;
> …
> }
> …
> nn.commitBlockSynchronization(block,
> newBlock.getGenerationStamp(), newBlock.getNumBytes(), true, false,
> datanodes, storages);
>   }
> {code}
> This code is called by the DN coordinating the block recovery. In the above 
> case, it is possible that none of the rStates (reported by the DNs holding 
> copies of the replica being recovered) match the bestState. This can be 
> caused either by faulty DN code or by stale/modified/corrupted files on the 
> DN. When this happens the DN ends up reporting a minLength of Long.MAX_VALUE.
> Unfortunately there is no check on the NN for replica length. See 
> FSNamesystem.java:
> {code:java}
>   void commitBlockSynchronization(ExtendedBlock oldBlock,
>   long newgenerationstamp, long newlength,
>   boolean closeFile, boolean deleteblock, DatanodeID[] newtargets,
>   String[] newtargetstorages) throws IOException {
> …
>   if (deleteblock) {
> Block blockToDel = ExtendedBlock.getLocalBlock(oldBlock);
> boolean remove = iFile.removeLastBlock(blockToDel) != null;
> if (remove) {
>   blockManager.removeBlock(storedBlock);
> }
>   } else {
> // update last block
> if(!copyTruncate) {
>   storedBlock.setGenerationStamp(newgenerationstamp);
>   
>   // XXX block length is updated without any check <<<
>   storedBlock.setNumBytes(newlength);
> }
> …
> if (closeFile) {
>   LOG.info("commitBlockSynchronization(oldBlock=" + oldBlock
>   + ", file=" + src
>   + (copyTruncate ? ", newBlock=" + truncatedBlock
>   : ", newgenerationstamp=" + newgenerationstamp)
>   + ", newlength=" + newlength
>   + ", newtargets=" + Arrays.asList(newtargets) + ") successful");
> } else {
>   LOG.info("commitBlockSynchronization(" + oldBlock + ") successful");
> }
>   }
> {code}
> After this point the block length becomes Long.MAX_VALUE. Any subsequent 
> block report (even with the correct length) will cause the block to be marked 
> as corrupt. Since this could be the last block of the file, if this happens 
> and the client goes away, the NN won't be able to recover the lease and close 
> the file because the last block is under-replicated.
> I believe we need a sanity check for block size on both the DN and the NN to 
> prevent such a case from happening.
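
A hedged sketch of the NN-side check proposed above (the method name and exact 
condition are assumptions, not the eventual patch; DN-side validation would be 
analogous):
{code:java}
import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.ExtendedBlock;

class BlockLengthCheckSketch {
  // Called before commitBlockSynchronization applies newlength.
  // A length of Long.MAX_VALUE means no replica matched bestState,
  // so the reported minLength is meaningless and must be rejected.
  static void checkNewLength(ExtendedBlock oldBlock, long newlength)
      throws IOException {
    if (newlength < 0 || newlength == Long.MAX_VALUE) {
      throw new IOException("Rejecting commitBlockSynchronization for "
          + oldBlock + ": invalid new block length " + newlength);
    }
  }
}
{code}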



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9139) Enable parallel JUnit tests for HDFS Pre-commit

2015-10-13 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14955250#comment-14955250
 ] 

Chris Nauroth commented on HDFS-9139:
-

[~vinayrpet], thank you for finishing this!

> Enable parallel JUnit tests for HDFS Pre-commit 
> 
>
> Key: HDFS-9139
> URL: https://issues.apache.org/jira/browse/HDFS-9139
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Fix For: 2.8.0
>
> Attachments: HDFS-9139.01.patch, HDFS-9139.02.patch, 
> HDFS-9139.03.patch, HDFS-9139.04.patch
>
>
> Forked from HADOOP-11984. 
> Building on the initial and significant work from [~cnauroth], this JIRA 
> tracks enabling parallel test runs for HDFS pre-commit.
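
For context, parallel surefire runs are typically switched on through fork 
settings in a Maven profile; a hedged sketch of the sort of configuration 
involved (the actual patch's profile and values may differ):
{code:xml}
<profile>
  <id>parallel-tests</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <configuration>
          <!-- Run several forked JVMs side by side; each fork is
               isolated so HDFS minicluster tests don't collide. -->
          <forkCount>4</forkCount>
          <reuseForks>false</reuseForks>
        </configuration>
      </plugin>
    </plugins>
  </build>
</profile>
{code}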



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8855) Webhdfs client leaks active NameNode connections

2015-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14955262#comment-14955262
 ] 

Hudson commented on HDFS-8855:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8620 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8620/])
HDFS-8855. Webhdfs client leaks active NameNode connections. Contributed 
(jitendra: rev 84cbd72afda6344e220526fac5c560f00f84e374)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/DataNodeUGIProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/WebHdfsHandler.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/TestDataNodeUGIProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java


> Webhdfs client leaks active NameNode connections
> 
>
> Key: HDFS-8855
> URL: https://issues.apache.org/jira/browse/HDFS-8855
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Bob Hansen
>Assignee: Xiaobing Zhou
> Fix For: 2.8.0
>
> Attachments: HDFS-8855.005.patch, HDFS-8855.006.patch, 
> HDFS-8855.007.patch, HDFS-8855.1.patch, HDFS-8855.2.patch, HDFS-8855.3.patch, 
> HDFS-8855.4.patch, HDFS_8855.prototype.patch
>
>
> The attached script simulates a process opening ~50 files via webhdfs and 
> performing random reads.  Note that there are at most 50 concurrent reads, 
> and all webhdfs sessions are kept open.  Each read is ~64k at a random 
> position.  
> The script periodically (once per second) shells into the NameNode and 
> produces a summary of the socket states.  For my test cluster with 5 nodes, 
> it took ~30 seconds for the NameNode to reach ~25000 active connections and 
> fail.
> It appears that each request to the webhdfs client is opening a new 
> connection to the NameNode and keeping it open after the request is complete. 
>  If the process continues to run, eventually (~30-60 seconds), all of the 
> open connections are closed and the NameNode recovers.  
> This smells like SoftReference reaping.  Are we using SoftReferences in the 
> webhdfs client to cache NameNode connections but never re-using them?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

