[jira] [Commented] (HDFS-8647) Abstract BlockManager's rack policy into BlockPlacementPolicy

2015-10-14 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958429#comment-14958429
 ] 

Brahma Reddy Battula commented on HDFS-8647:


The {{test failures}} are unrelated. I ran all the impacted test cases and all 
of them pass. Kindly review.

> Abstract BlockManager's rack policy into BlockPlacementPolicy
> -------------------------------------------------------------
>
> Key: HDFS-8647
> URL: https://issues.apache.org/jira/browse/HDFS-8647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-8647-001.patch, HDFS-8647-002.patch, 
> HDFS-8647-003.patch, HDFS-8647-004.patch, HDFS-8647-004.patch, 
> HDFS-8647-005.patch, HDFS-8647-006.patch
>
>
> Sometimes we want to have the namenode use an alternative block placement 
> policy, such as upgrade domains in HDFS-7541.
> BlockManager has built-in assumptions about rack policy in functions such as 
> useDelHint and blockHasEnoughRacks. That means when we have a new block placement 
> policy, we need to modify BlockManager to account for the new policy. Ideally, 
> BlockManager should ask the BlockPlacementPolicy object instead. That will allow 
> us to provide a new BlockPlacementPolicy without changing BlockManager.
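
To make the proposed delegation concrete, here is a minimal Java sketch; it is 
illustrative only, and the class layout and the method name 
{{isPlacementPolicySatisfied}} are assumptions, not the actual patch:

{code}
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

// Hypothetical sketch: BlockManager asks the policy object instead of
// hard-coding rack semantics (method name and signature are made up).
abstract class PlacementPolicySketch {
  /** True if the replica locations satisfy this placement policy. */
  abstract boolean isPlacementPolicySatisfied(DatanodeInfo[] locations,
      int requiredReplication);
}

class BlockManagerSketch {
  private final PlacementPolicySketch placementPolicy;

  BlockManagerSketch(PlacementPolicySketch policy) {
    this.placementPolicy = policy;
  }

  // Replaces the built-in blockHasEnoughRacks() rack assumption with a
  // question to the configured policy, e.g. an upgrade-domain policy.
  boolean blockHasEnoughRedundancy(DatanodeInfo[] locs, int replication) {
    return placementPolicy.isPlacementPolicySatisfied(locs, replication);
  }
}
{code}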



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4015) Safemode should count and report orphaned blocks

2015-10-14 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958423#comment-14958423
 ] 

Mingliang Liu commented on HDFS-4015:
--------------------------------------

The patch looks good to me overall. More discussion about the safe mode 
question is welcome.

The latest release audit warning is unrelated. The findbugs warnings may be 
caused by an existing one tracked by [HDFS-9242]. The checkstyle warnings are 
caused by existing code and may be addressed separately. The failing unit tests 
seem unrelated, but we may need to double-check them.

> Safemode should count and report orphaned blocks
> ------------------------------------------------
>
> Key: HDFS-4015
> URL: https://issues.apache.org/jira/browse/HDFS-4015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Anu Engineer
> Attachments: HDFS-4015.001.patch, HDFS-4015.002.patch, 
> HDFS-4015.003.patch, HDFS-4015.004.patch, HDFS-4015.005.patch
>
>
> The safemode status currently reports the number of unique reported blocks 
> compared to the total number of blocks referenced by the namespace. However, 
> it does not report the inverse: blocks which are reported by datanodes but 
> not referenced by the namespace.
> In the case that an admin accidentally starts up from an old image, this can 
> be confusing: safemode and fsck will show "corrupt files", which are the 
> files which actually have been deleted but got resurrected by restarting from 
> the old image. This will convince them that they can safely force leave 
> safemode and remove these files -- after all, they know that those files 
> should really have been deleted. However, they're not aware that leaving 
> safemode will also unrecoverably delete a bunch of other block files which 
> have been orphaned due to the namespace rollback.
> I'd like to consider reporting something like: "90 of expected 100 
> blocks have been reported. Additionally, 1 blocks have been reported 
> which do not correspond to any file in the namespace. Forcing exit of 
> safemode will unrecoverably remove those data blocks"
> Whether this statistic is also used for some kind of "inverse safe mode" is 
> the logical next step, but just reporting it as a warning seems easy enough 
> to accomplish and worth doing.
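
As a rough sketch of the proposed reporting (illustrative only; the counters 
and method names below are hypothetical, not from any attached patch), the 
namenode could track an orphaned-block count alongside the existing safe-block 
count while processing block reports:

{code}
// Hypothetical sketch: count reported blocks the namespace does not
// reference, and surface them in the safemode status message.
class SafemodeOrphanSketch {
  private long safeBlocks;      // reported blocks the namespace expects
  private long orphanedBlocks;  // reported blocks with no file

  // namespaceHasBlock stands in for a lookup against the blocks map.
  void processReportedBlock(boolean namespaceHasBlock) {
    if (namespaceHasBlock) {
      safeBlocks++;
    } else {
      orphanedBlocks++;
    }
  }

  String safemodeTip(long expectedBlocks) {
    return String.format(
        "%d of expected %d blocks have been reported. Additionally, %d "
        + "blocks have been reported which do not correspond to any file "
        + "in the namespace. Forcing exit of safemode will unrecoverably "
        + "remove those data blocks.",
        safeBlocks, expectedBlocks, orphanedBlocks);
  }
}
{code}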



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9247) Add an Apache license header to DatanodeStats.java

2015-10-14 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9247:

Description: Addendum commit is enough.   (was: This jira tracks effort of 
adding an Apache license header to {{DatanodeStats.java}}.)

> Add an Apache license header to DatanodeStats.java
> --------------------------------------------------
>
> Key: HDFS-9247
> URL: https://issues.apache.org/jira/browse/HDFS-9247
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Minor
>
> Addendum commit is enough. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9247) Add an Apache license header to DatanodeStats.java

2015-10-14 Thread Mingliang Liu (JIRA)
Mingliang Liu created HDFS-9247:
--------------------------------

 Summary: Add an Apache license header to DatanodeStats.java
 Key: HDFS-9247
 URL: https://issues.apache.org/jira/browse/HDFS-9247
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Mingliang Liu
Assignee: Mingliang Liu
Priority: Minor


This jira tracks the effort of adding an Apache license header to 
{{DatanodeStats.java}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8411) Add bytes count metrics to datanode for ECWorker

2015-10-14 Thread Li Bo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Bo updated HDFS-8411:

Attachment: HDFS-8411-002.patch

When some datanodes are corrupted, all their blocks have to be reconstructed by 
other healthy datanodes. The network traffic incurred is very high, and we may 
want to track it. We can record the bytes read and written by any datanode. In 
fact, I think HDFS-8529 (block counts) and HDFS-8410 (time consumed) are not 
necessary: we can estimate the time cost from the bytes read and written, and a 
block count metric is not very meaningful when there are a lot of small files. 
We can adjust the metrics for future requirements.
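
A minimal sketch of what such byte-count metrics could look like with the 
metrics2 library; the metric and class names are made up for illustration and 
are not taken from the attached patch:

{code}
import org.apache.hadoop.metrics2.lib.MetricsRegistry;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

// Sketch: count bytes read and written by a datanode while it performs
// EC reconstruction work (metric names are hypothetical).
class ECReconstructionMetricsSketch {
  private final MetricsRegistry registry = new MetricsRegistry("ECWorker");
  private final MutableCounterLong bytesRead =
      registry.newCounter("ecReconstructionBytesRead",
          "Bytes read by EC reconstruction tasks", 0L);
  private final MutableCounterLong bytesWritten =
      registry.newCounter("ecReconstructionBytesWritten",
          "Bytes written by EC reconstruction tasks", 0L);

  void onRead(long numBytes)  { bytesRead.incr(numBytes); }
  void onWrite(long numBytes) { bytesWritten.incr(numBytes); }
}
{code}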

> Add bytes count metrics to datanode for ECWorker
> ------------------------------------------------
>
> Key: HDFS-8411
> URL: https://issues.apache.org/jira/browse/HDFS-8411
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-8411-001.patch, HDFS-8411-002.patch
>
>
> This is a sub-task of HDFS-7674. It tracks the amount of data that is 
> read from local or remote datanodes for decoding work, and also the amount of 
> data that is written to local or remote datanodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9092) Nfs silently drops overlapping write requests and causes data copying to fail

2015-10-14 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958404#comment-14958404
 ] 

Mingliang Liu commented on HDFS-9092:
-------------------------------------

Thank you [~yzhangal] for your quick reply. You're right; the Jenkins report 
shows 0 findbugs warnings. I can reproduce the findbugs warnings locally, and I 
saw the pre-patch warnings in the QA report.

> Nfs silently drops overlapping write requests and causes data copying to fail
> ------------------------------------------------------------------------------
>
> Key: HDFS-9092
> URL: https://issues.apache.org/jira/browse/HDFS-9092
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.1
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Fix For: 2.8.0
>
> Attachments: HDFS-9092.001.patch, HDFS-9092.002.patch
>
>
> When NOT using the 'sync' option, the NFS writes may issue the following warning:
> org.apache.hadoop.hdfs.nfs.nfs3.OpenFileCtx: Got an overlapping write 
> (1248751616, 1249677312), nextOffset=1248752400. Silently drop it now
> and the size of data copied via NFS will stay at 1248752400.
> What happens is:
> 1. The write requests from the client are sent asynchronously. 
> 2. The NFS gateway has a handler that handles an incoming request by creating an 
> internal write request structure and putting it into a cache;
> 3. In parallel, a separate thread in the NFS gateway takes requests out of the 
> cache and writes the data to HDFS.
> The current offset is how much data has been written by the write thread in 
> step 3. The detection of an overlapping write request happens in step 2, but it 
> only checks the write request against the current offset and trims the request 
> if necessary. Because the write requests are sent asynchronously, if two 
> requests are beyond the current offset and they overlap, this is not detected, 
> and both are put into the cache. This causes the symptom reported in this case 
> at step 3.
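
As a rough illustration of the gap described above (simplified, not the real 
OpenFileCtx code; the class and method names are made up), checking only 
against the current offset misses overlaps between two pending requests:

{code}
import java.util.Map;
import java.util.SortedMap;

// Simplified sketch of the detection gap (not the real OpenFileCtx code).
// A write request covers the byte range [offset, offset + count).
class OverlapCheckSketch {
  // The existing check: only compares a request against nextOffset,
  // i.e. against data already written by the writer thread.
  static boolean overlapsWritten(long offset, long count, long nextOffset) {
    return offset < nextOffset;
  }

  // The missing check: two requests that both start beyond nextOffset can
  // still overlap each other, and both end up cached undetected.
  static boolean overlapsPending(long offset, long count,
      SortedMap<Long, Long> pendingWrites /* offset -> count */) {
    for (Map.Entry<Long, Long> e : pendingWrites.entrySet()) {
      long start = e.getKey(), end = start + e.getValue();
      if (offset < end && offset + count > start) {
        return true;
      }
    }
    return false;
  }
}
{code}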



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9188) Make block corruption related tests FsDataset-agnostic.

2015-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958368#comment-14958368
 ] 

Hudson commented on HDFS-9188:
------------------------------

SUCCESS: Integrated in Hadoop-Yarn-trunk #1269 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1269/])
HDFS-9188. Make block corruption related tests FsDataset-agnostic. (lei) (lei: 
rev c80b3a804f5222f95a266f84424af9cb9c229483)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestScrLazyPersistFiles.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImplTestUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestClientReportBadBlock.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestRBWBlockInvalidation.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/FsDatasetTestUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/FsDatasetImplTestUtilsFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestOverReplicatedBlocks.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestProcessCorruptBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlocksWithNotEnoughRacks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMissingBlocksAlert.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend3.java


> Make block corruption related tests FsDataset-agnostic. 
> --------------------------------------------------------
>
> Key: HDFS-9188
> URL: https://issues.apache.org/jira/browse/HDFS-9188
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS, test
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9188.000.patch, HDFS-9188.001.patch, 
> HDFS-9188.002.patch, HDFS-9188.003.patch, HDFS-9188.004.patch, 
> HDFS-9188.005.patch, HDFS-9188.006.patch
>
>
> Currently, HDFS does block corruption tests by directly accessing the files 
> stored on the storage directories, which assumes {{FsDatasetImpl}} is the 
> dataset implementation. However, with works like OZone (HDFS-7240) and 
> HDFS-8679, there will be different FsDataset implementations. 
> So we need a general way to run whitebox tests like corrupting blocks and crc 
> files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9188) Make block corruption related tests FsDataset-agnostic.

2015-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958367#comment-14958367
 ] 

Hudson commented on HDFS-9188:
------------------------------

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2482 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2482/])
HDFS-9188. Make block corruption related tests FsDataset-agnostic. (lei) (lei: 
rev c80b3a804f5222f95a266f84424af9cb9c229483)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend3.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestClientReportBadBlock.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/FsDatasetImplTestUtilsFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestOverReplicatedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMissingBlocksAlert.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlocksWithNotEnoughRacks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImplTestUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestProcessCorruptBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestScrLazyPersistFiles.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/FsDatasetTestUtils.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestRBWBlockInvalidation.java


> Make block corruption related tests FsDataset-agnostic. 
> --------------------------------------------------------
>
> Key: HDFS-9188
> URL: https://issues.apache.org/jira/browse/HDFS-9188
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS, test
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9188.000.patch, HDFS-9188.001.patch, 
> HDFS-9188.002.patch, HDFS-9188.003.patch, HDFS-9188.004.patch, 
> HDFS-9188.005.patch, HDFS-9188.006.patch
>
>
> Currently, HDFS does block corruption tests by directly accessing the files 
> stored on the storage directories, which assumes {{FsDatasetImpl}} is the 
> dataset implementation. However, with works like OZone (HDFS-7240) and 
> HDFS-8679, there will be different FsDataset implementations. 
> So we need a general way to run whitebox tests like corrupting blocks and crc 
> files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9223) Code cleanup for DatanodeDescriptor and HeartbeatManager

2015-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958363#comment-14958363
 ] 

Hudson commented on HDFS-9223:
------------------------------

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #498 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/498/])
HDFS-9223. Code cleanup for DatanodeDescriptor and HeartbeatManager. (jing9: 
rev be7a0add8b6561d3c566237cc0370b06e7f32bb4)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlocksMap.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ReplicaUnderConstruction.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStats.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CacheManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeDescriptor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockUnderConstructionFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java


> Code cleanup for DatanodeDescriptor and HeartbeatManager
> --------------------------------------------------------
>
> Key: HDFS-9223
> URL: https://issues.apache.org/jira/browse/HDFS-9223
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9223.000.patch, HDFS-9223.001.patch, 
> HDFS-9223.002.patch, HDFS-9223.003.patch
>
>
> Some code cleanup for {{DatanodeDescriptor}} and {{HeartbeatManager}}. The 
> changes include:
> # Change {{DatanodeDescriptor#isAlive}} and {{DatanodeDescriptor#needKeyUpdate}} 
> from public to private
> # Use EnumMap for {{HeartbeatManager#storageTypeStatesMap}} (see the sketch 
> below)
> # Move the {{isInStartupSafeMode}} check out of the namesystem lock in 
> {{heartbeatCheck}}
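
A minimal sketch of item 2 above, illustrative only: the value class and field 
layout are simplified, not taken from the actual patch. An {{EnumMap}} keyed by 
{{StorageType}} is cheaper and clearer than a general-purpose hash map here:

{code}
import java.util.EnumMap;
import org.apache.hadoop.fs.StorageType;

// Sketch: per-storage-type state kept in an EnumMap (names simplified).
class StorageTypeStatesSketch {
  static class StorageTypeStats {
    long capacityTotal;
    long capacityUsed;
  }

  private final EnumMap<StorageType, StorageTypeStats> storageTypeStatesMap =
      new EnumMap<>(StorageType.class);

  StorageTypeStats get(StorageType t) {
    StorageTypeStats stats = storageTypeStatesMap.get(t);
    if (stats == null) {
      stats = new StorageTypeStats();
      storageTypeStatesMap.put(t, stats);
    }
    return stats;
  }
}
{code}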



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9184) Logging HDFS operation's caller context into audit logs

2015-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958361#comment-14958361
 ] 

Hadoop QA commented on HDFS-9184:
---------------------------------

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  30m 43s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |  12m 14s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  15m 47s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 30s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 54s | The applied patch generated  3 
new checkstyle issues (total was 226, now 228). |
| {color:green}+1{color} | whitespace |   0m  2s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   2m 31s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 52s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   6m 55s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests |   9m 21s | Tests failed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests |  66m 33s | Tests failed in hadoop-hdfs. |
| | | 148m 52s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.fs.TestLocalFsFCStatistics |
|   | hadoop.ha.TestZKFailoverController |
|   | hadoop.metrics2.impl.TestMetricsSystemImpl |
|   | hadoop.security.ssl.TestReloadingX509TrustManager |
|   | hadoop.test.TestTimedOutTestsListener |
|   | hadoop.fs.TestGlobPaths |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.TestReplaceDatanodeOnFailure |
|   | hadoop.hdfs.TestEncryptionZones |
| Timed out tests | 
org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
|   | org.apache.hadoop.hdfs.server.namenode.TestNameEditsConfigs |
|   | org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | org.apache.hadoop.hdfs.server.namenode.TestFavoredNodesEndToEnd |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766693/HDFS-9184.006.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / be7a0ad |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12997/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12997/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12997/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12997/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12997/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12997/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12997/console |


This message was automatically generated.

> Logging HDFS operation's caller context into audit logs
> -------------------------------------------------------
>
> Key: HDFS-9184
> URL: https://issues.apache.org/jira/browse/HDFS-9184
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9184.000.patch, HDFS-9184.001.patch, 
> HDFS-9184.002.patch, HDFS-9184.003.patch, HDFS-9184.004.patch, 
> HDFS-9184.005.patch, HDFS-9184.006.patch
>
>
> For a given HDFS operation (e.g. delete file), it's very helpful to track 
> which upper level job issues it. The upper level callers may be specific 
> Oozie tasks, MR jobs, and hive queries. One scenario is that when the namenode 
> (NN) is abused/spammed, the operator may want to know immediately which MR 
> job is to blame so that she can kill it. To this end, the caller context 
> contains at least the application-dependent "tracking id".
> There are several existing techniques that may be related to this problem.
> 1. Currently the HDFS audit log tracks the user of the operation, which 
> is obviously not enough. It's common that the same user issues multiple jobs 
> at the same time. Even for a single top

[jira] [Commented] (HDFS-9226) MiniDFSCluster leaks dependency Mockito via DataNodeTestUtils

2015-10-14 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958353#comment-14958353
 ] 

Masatake Iwasaki commented on HDFS-9226:


bq. Every other changes in hdfs tests need not verify for transitive 
dependencies.

It is not about other changes in tests; only Mini(DFS|YARN)Cluster should be 
usable by just adding hadoop-minicluster as a dependency. I filed HADOOP-12477 
to address this.

> MiniDFSCluster leaks dependency Mockito via DataNodeTestUtils
> -------------------------------------------------------------
>
> Key: HDFS-9226
> URL: https://issues.apache.org/jira/browse/HDFS-9226
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS, test
>Reporter: Josh Elser
>Assignee: Josh Elser
> Attachments: HDFS-9226.001.patch, HDFS-9226.002.patch, 
> HDFS-9226.003.patch, HDFS-9226.004.patch, HDFS-9226.005.patch
>
>
> Noticed a test failure when attempting to run Accumulo unit tests against 
> 2.8.0-SNAPSHOT:
> {noformat}
> java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2421)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2323)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2367)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1529)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:479)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
>   at 
> org.apache.accumulo.start.test.AccumuloDFSBase.miniDfsClusterSetup(AccumuloDFSBase.java:67)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Caused by: java.lang.ClassNotFoundException: org.mockito.stubbing.Answer
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2421)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2323)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2367)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1529)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:479)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
>   at 
> org.apache.accumulo.start.test.AccumuloDFSBase.miniDfsClusterSetup(AccumuloDFSBase.java:67)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAcc

[jira] [Commented] (HDFS-9188) Make block corruption related tests FsDataset-agnostic.

2015-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958335#comment-14958335
 ] 

Hudson commented on HDFS-9188:
------------------------------

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #546 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/546/])
HDFS-9188. Make block corruption related tests FsDataset-agnostic. (lei) (lei: 
rev c80b3a804f5222f95a266f84424af9cb9c229483)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImplTestUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestOverReplicatedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestScrLazyPersistFiles.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/FsDatasetTestUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestRBWBlockInvalidation.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMissingBlocksAlert.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestClientReportBadBlock.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlocksWithNotEnoughRacks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestProcessCorruptBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend3.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/FsDatasetImplTestUtilsFactory.java


> Make block corruption related tests FsDataset-agnostic. 
> --------------------------------------------------------
>
> Key: HDFS-9188
> URL: https://issues.apache.org/jira/browse/HDFS-9188
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS, test
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9188.000.patch, HDFS-9188.001.patch, 
> HDFS-9188.002.patch, HDFS-9188.003.patch, HDFS-9188.004.patch, 
> HDFS-9188.005.patch, HDFS-9188.006.patch
>
>
> Currently, HDFS does block corruption tests by directly accessing the files 
> stored on the storage directories, which assumes {{FsDatasetImpl}} is the 
> dataset implementation. However, with works like OZone (HDFS-7240) and 
> HDFS-8679, there will be different FsDataset implementations. 
> So we need a general way to run whitebox tests like corrupting blocks and crc 
> files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9223) Code cleanup for DatanodeDescriptor and HeartbeatManager

2015-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958323#comment-14958323
 ] 

Hudson commented on HDFS-9223:
------------------------------

FAILURE: Integrated in Hadoop-Hdfs-trunk #2435 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2435/])
HDFS-9223. Code cleanup for DatanodeDescriptor and HeartbeatManager. (jing9: 
rev be7a0add8b6561d3c566237cc0370b06e7f32bb4)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStats.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ReplicaUnderConstruction.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockUnderConstructionFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CacheManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlocksMap.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeDescriptor.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java


> Code cleanup for DatanodeDescriptor and HeartbeatManager
> --------------------------------------------------------
>
> Key: HDFS-9223
> URL: https://issues.apache.org/jira/browse/HDFS-9223
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9223.000.patch, HDFS-9223.001.patch, 
> HDFS-9223.002.patch, HDFS-9223.003.patch
>
>
> Some code cleanup for {{DatanodeDescriptor}} and {{HeartbeatManager}}. The 
> changes include:
> # Change {{DatanodeDescriptor#isAlive}} and {{DatanodeDescriptor#needKeyUpdate}} 
> from public to private
> # Use EnumMap for {{HeartbeatManager#storageTypeStatesMap}}
> # Move the {{isInStartupSafeMode}} out of the namesystem lock in 
> {{heartbeatCheck}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9210) Fix some misuse of %n in VolumeScanner#printStats

2015-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958322#comment-14958322
 ] 

Hudson commented on HDFS-9210:
------------------------------

FAILURE: Integrated in Hadoop-Hdfs-trunk #2435 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2435/])
HDFS-9210. Fix some misuse of %n in VolumeScanner#printStats. (xyao: rev 
3d5085595286c0231f66543d1509247ad4bb5739)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
Revert "HDFS-9210. Fix some misuse of %n in VolumeScanner#printStats. (xyao: 
rev a8070259f8384021bd6196e7343f1cc23de89b1c)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java


> Fix some misuse of %n in VolumeScanner#printStats
> -------------------------------------------------
>
> Key: HDFS-9210
> URL: https://issues.apache.org/jira/browse/HDFS-9210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9210.00.patch, HDFS-9210.01.patch
>
>
> Found 2 extra "%n" in the VolumeScanner report, and some lines are not well 
> formatted, as shown below. This JIRA is opened to fix the format issue.
> {code}
> Block scanner information for volume DS-93fb2503-de00-4f98-a8bc-c2bc13b8f0f7 
> with base path /hadoop/hdfs/data%nBytes verified in last hour   : 
> 136882014
> Blocks scanned in current period  :   
>   5
> Blocks scanned since restart  :   
>   5
> Block pool scans since restart:   
>   0
> Block scan errors since restart   :   
>   0
> Hours until next block pool scan  :   
> 476.000
> Last block scanned: 
> BP-1792969149-192.168.70.101-1444150984999:blk_1073742088_1274
> More blocks to scan in period :   
>   false
> %n
> {code}
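
To illustrate the class of bug with a simplified example (not the actual 
VolumeScanner diff): {{%n}} is only expanded by the {{Formatter}} family, so 
appending it to a plain string leaves a literal "%n" in the output, exactly as 
seen in the report above:

{code}
// Simplified illustration of the %n misuse (not the actual patch).
StringBuilder sb = new StringBuilder();

// Wrong: append() does not interpret format specifiers, so the report
// contains a literal "%n" instead of a line break.
sb.append("with base path /hadoop/hdfs/data%n");

// Right: let String.format() expand %n into the platform line separator.
sb.append(String.format("with base path %s%n", "/hadoop/hdfs/data"));
{code}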



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9044) Give Priority to FavouredNodes , before selecting nodes from FavouredNode's Node Group

2015-10-14 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina updated HDFS-9044:
-----------------------------
Attachment: HDFS-9044.5.patch

Thanks [~vinayrpet] for the review comments. 
I have updated the patch; please review.

> Give Priority to FavouredNodes , before selecting nodes from FavouredNode's 
> Node Group
> ------------------------------------------------------------------------------------------
>
> Key: HDFS-9044
> URL: https://issues.apache.org/jira/browse/HDFS-9044
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: J.Andreina
>Assignee: J.Andreina
> Attachments: HDFS-9044.1.patch, HDFS-9044.2.patch, HDFS-9044.3.patch, 
> HDFS-9044.4.patch, HDFS-9044.5.patch
>
>
> The intention of passing favored nodes is to place replicas among the favored nodes.
> The current behavior with node groups is: 
>   if a favored node is not available, the replica goes to a node from that 
> favored node's nodegroup. 
> {noformat}
> Say for example:
>   1) I need 3 replicas and passed 5 favored nodes.
>   2) Out of the 5 favored nodes, 3 favored nodes are not good.
>   3) Then, based on BlockPlacementPolicyWithNodeGroup, out of the 5 target nodes 
> returned, 3 will be random nodes from the 3 bad favored nodes' nodegroups. 
>   4) Then there is a probability that all my 3 replicas are placed on 
> random nodes from the favored nodes' nodegroups, instead of giving priority to 
> the 2 favored nodes returned as targets.
> {noformat}
> *Instead of returning 5 targets in step 3 above, we can return the 2 good 
> favored nodes as targets,*
> *and the remaining 1 needed replica can be chosen from a random node of the bad 
> favored nodes' nodegroups.*
> This will make sure that the favored nodes are given priority.
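
A rough sketch of the proposed ordering, illustrative only: the helper methods 
{{isGoodTarget}} and {{chooseRandomFromNodeGroup}} are assumptions, not taken 
from the attached patch:

{code}
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

// Sketch: take healthy favored nodes first, and only then fall back to
// random nodes from the node groups of the unavailable favored nodes.
abstract class FavoredNodeChoiceSketch {
  abstract boolean isGoodTarget(DatanodeInfo node);
  abstract DatanodeInfo chooseRandomFromNodeGroup(DatanodeInfo favored);

  List<DatanodeInfo> chooseTargets(List<DatanodeInfo> favoredNodes,
      int numReplicas) {
    List<DatanodeInfo> targets = new ArrayList<>();
    for (DatanodeInfo favored : favoredNodes) {   // priority 1: the node
      if (targets.size() == numReplicas) {
        return targets;
      }
      if (isGoodTarget(favored)) {
        targets.add(favored);
      }
    }
    for (DatanodeInfo favored : favoredNodes) {   // priority 2: its group
      if (targets.size() == numReplicas) {
        return targets;
      }
      DatanodeInfo fromGroup = chooseRandomFromNodeGroup(favored);
      if (fromGroup != null && !targets.contains(fromGroup)) {
        targets.add(fromGroup);
      }
    }
    return targets;
  }
}
{code}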



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-14 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-9205:
--------------------------------------
Attachment: h9205_20151015.patch

h9205_20151015.patch: updated against trunk.

> Do not schedule corrupt blocks for replication
> ----------------------------------------------
>
> Key: HDFS-9205
> URL: https://issues.apache.org/jira/browse/HDFS-9205
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: h9205_20151007.patch, h9205_20151007b.patch, 
> h9205_20151008.patch, h9205_20151009.patch, h9205_20151009b.patch, 
> h9205_20151013.patch, h9205_20151015.patch
>
>
> Corrupt blocks are, by definition, blocks that cannot be read. As a consequence, 
> they cannot be replicated.  In UnderReplicatedBlocks, there is a queue for 
> QUEUE_WITH_CORRUPT_BLOCKS, and chooseUnderReplicatedBlocks may choose blocks 
> from it.  It seems that scheduling corrupt blocks for replication wastes 
> resources and potentially slows down replication of the higher priority blocks.
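
An illustrative sketch of the idea (not the actual patch; the queue constant 
mirrors UnderReplicatedBlocks, but the surrounding code is simplified): have 
the scheduler skip the corrupt-blocks queue when picking replication work:

{code}
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hdfs.protocol.Block;

// Hypothetical sketch: skip the corrupt-blocks queue when scheduling
// replication work, since a corrupt replica has no readable source.
class ReplicationSchedulerSketch {
  // In UnderReplicatedBlocks the corrupt queue is the lowest priority.
  static final int QUEUE_WITH_CORRUPT_BLOCKS = 4;

  List<Block> chooseUnderReplicatedBlocks(List<List<Block>> queues,
      int blocksToProcess) {
    List<Block> scheduled = new ArrayList<>();
    for (int prio = 0; prio < queues.size(); prio++) {
      if (prio == QUEUE_WITH_CORRUPT_BLOCKS) {
        continue; // wasted work: the block cannot be read, so not copied
      }
      for (Block b : queues.get(prio)) {
        if (scheduled.size() >= blocksToProcess) {
          return scheduled;
        }
        scheduled.add(b);
      }
    }
    return scheduled;
  }
}
{code}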



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9092) Nfs silently drops overlapping write requests and causes data copying to fail

2015-10-14 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958278#comment-14958278
 ] 

Yongjun Zhang commented on HDFS-9092:
-------------------------------------

Hi [~liuml07],

Per

https://builds.apache.org/job/PreCommit-HDFS-Build/12719/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-nfs.html

It shows 0 findbugs warnings.

I guess you meant the pre-patch Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/12719/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html

Thanks.


> Nfs silently drops overlapping write requests and causes data copying to fail
> ------------------------------------------------------------------------------
>
> Key: HDFS-9092
> URL: https://issues.apache.org/jira/browse/HDFS-9092
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.1
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Fix For: 2.8.0
>
> Attachments: HDFS-9092.001.patch, HDFS-9092.002.patch
>
>
> When NOT using the 'sync' option, the NFS writes may issue the following warning:
> org.apache.hadoop.hdfs.nfs.nfs3.OpenFileCtx: Got an overlapping write 
> (1248751616, 1249677312), nextOffset=1248752400. Silently drop it now
> and the size of data copied via NFS will stay at 1248752400.
> What happens is:
> 1. The write requests from the client are sent asynchronously. 
> 2. The NFS gateway has a handler that handles an incoming request by creating an 
> internal write request structure and putting it into a cache;
> 3. In parallel, a separate thread in the NFS gateway takes requests out of the 
> cache and writes the data to HDFS.
> The current offset is how much data has been written by the write thread in 
> step 3. The detection of an overlapping write request happens in step 2, but it 
> only checks the write request against the current offset and trims the request 
> if necessary. Because the write requests are sent asynchronously, if two 
> requests are beyond the current offset and they overlap, this is not detected, 
> and both are put into the cache. This causes the symptom reported in this case 
> at step 3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4015) Safemode should count and report orphaned blocks

2015-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958271#comment-14958271
 ] 

Hadoop QA commented on HDFS-4015:
---------------------------------

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  26m 48s | Findbugs (version 3.0.0) 
appears to be broken on trunk. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   9m 57s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  12m 26s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 21s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | site |   3m 33s | Site still builds. |
| {color:red}-1{color} | checkstyle |   2m 38s | The applied patch generated  3 
new checkstyle issues (total was 138, now 138). |
| {color:green}+1{color} | whitespace |   0m  2s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 55s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 38s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   5m 20s | The patch appears to introduce 1 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 43s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  65m 50s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 42s | Tests passed in 
hadoop-hdfs-client. |
| | | 133m 58s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Failed unit tests | hadoop.hdfs.TestDataTransferKeepalive |
|   | hadoop.hdfs.TestRecoverStripedFile |
|   | hadoop.hdfs.TestHFlush |
|   | hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage |
|   | hadoop.cli.TestHDFSCLI |
|   | hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | hadoop.fs.TestGlobPaths |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766677/HDFS-4015.005.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle site |
| git revision | trunk / be7a0ad |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12996/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12996/artifact/patchprocess/diffcheckstylehadoop-hdfs-client.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12996/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12996/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12996/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12996/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12996/console |


This message was automatically generated.

> Safemode should count and report orphaned blocks
> ------------------------------------------------
>
> Key: HDFS-4015
> URL: https://issues.apache.org/jira/browse/HDFS-4015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Anu Engineer
> Attachments: HDFS-4015.001.patch, HDFS-4015.002.patch, 
> HDFS-4015.003.patch, HDFS-4015.004.patch, HDFS-4015.005.patch
>
>
> The safemode status currently reports the number of unique reported blocks 
> compared to the total number of blocks referenced by the namespace. However, 
> it does not report the inverse: blocks which are reported by datanodes but 
> not referenced by the namespace.
> In the case that an admin accidentally starts up from an old image, this can 
> be confusing: safemode and fsck will show "corrupt files", which are the 
> files which actually have been deleted but got resurrected by restarting from 
> the old image. This will convince them that they can safely force leave 
> safemode and remove these files -- after all, they know that those files 
> should really have been deleted. However, they're not aware that leaving 
> safemode will also unrecoverably delete a bunch of other block files which 
> have been orphaned due to the namespace rollback.
> I'd like to consider reporting something like: "90 of expected 100 

[jira] [Commented] (HDFS-6589) TestDistributedFileSystem.testAllWithNoXmlDefaults failed intermittently

2015-10-14 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958259#comment-14958259
 ] 

Yongjun Zhang commented on HDFS-6589:
-------------------------------------

Thanks much for looking into this, [~jojochuang].


> TestDistributedFileSystem.testAllWithNoXmlDefaults failed intermittently
> ------------------------------------------------------------------------
>
> Key: HDFS-6589
> URL: https://issues.apache.org/jira/browse/HDFS-6589
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.5.0
>Reporter: Yongjun Zhang
>Assignee: Wei-Chiu Chuang
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/7207 is clean, while
> https://builds.apache.org/job/PreCommit-HDFS-Build/7208 has the following 
> failure, even though the code is essentially the same.
> Running the same test locally doesn't reproduce the failure; it looks like a 
> flaky test.
> {code}
> Stacktrace
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertFalse(Assert.java:64)
>   at org.junit.Assert.assertFalse(Assert.java:74)
>   at 
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSClient(TestDistributedFileSystem.java:263)
>   at 
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testAllWithNoXmlDefaults(TestDistributedFileSystem.java:651)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9223) Code cleanup for DatanodeDescriptor and HeartbeatManager

2015-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958249#comment-14958249
 ] 

Hudson commented on HDFS-9223:
------------------------------

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2481 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2481/])
HDFS-9223. Code cleanup for DatanodeDescriptor and HeartbeatManager. (jing9: 
rev be7a0add8b6561d3c566237cc0370b06e7f32bb4)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeDescriptor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStats.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockUnderConstructionFeature.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ReplicaUnderConstruction.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CacheManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlocksMap.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java


> Code cleanup for DatanodeDescriptor and HeartbeatManager
> --------------------------------------------------------
>
> Key: HDFS-9223
> URL: https://issues.apache.org/jira/browse/HDFS-9223
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9223.000.patch, HDFS-9223.001.patch, 
> HDFS-9223.002.patch, HDFS-9223.003.patch
>
>
> Some code cleanup for {{DatanodeDescriptor}} and {{HeartbeatManager}}. The 
> changes include:
> # Change {{DatanodeDescriptor#isAlive}} and {{DatanodeDescriptor#needKeyUpdate}} 
> from public to private
> # Use EnumMap for {{HeartbeatManager#storageTypeStatesMap}}
> # Move the {{isInStartupSafeMode}} out of the namesystem lock in 
> {{heartbeatCheck}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9210) Fix some misuse of %n in VolumeScanner#printStats

2015-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958248#comment-14958248
 ] 

Hudson commented on HDFS-9210:
------------------------------

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2481 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2481/])
Revert "HDFS-9210. Fix some misuse of %n in VolumeScanner#printStats. (xyao: 
rev a8070259f8384021bd6196e7343f1cc23de89b1c)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java


> Fix some misuse of %n in VolumeScanner#printStats
> -------------------------------------------------
>
> Key: HDFS-9210
> URL: https://issues.apache.org/jira/browse/HDFS-9210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9210.00.patch, HDFS-9210.01.patch
>
>
> Found 2 extra "%n" in the VolumeScanner report, and some lines are not well 
> formatted, as shown below. This JIRA is opened to fix the format issue.
> {code}
> Block scanner information for volume DS-93fb2503-de00-4f98-a8bc-c2bc13b8f0f7 
> with base path /hadoop/hdfs/data%nBytes verified in last hour   : 
> 136882014
> Blocks scanned in current period  :   
>   5
> Blocks scanned since restart  :   
>   5
> Block pool scans since restart:   
>   0
> Block scan errors since restart   :   
>   0
> Hours until next block pool scan  :   
> 476.000
> Last block scanned: 
> BP-1792969149-192.168.70.101-1444150984999:blk_1073742088_1274
> More blocks to scan in period :   
>   false
> %n
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9188) Make block corruption related tests FsDataset-agnostic.

2015-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958243#comment-14958243
 ] 

Hudson commented on HDFS-9188:
------------------------------

FAILURE: Integrated in Hadoop-trunk-Commit #8639 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8639/])
HDFS-9188. Make block corruption related tests FsDataset-agnostic. (lei) (lei: 
rev c80b3a804f5222f95a266f84424af9cb9c229483)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/FsDatasetImplTestUtilsFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestRBWBlockInvalidation.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestScrLazyPersistFiles.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestOverReplicatedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestClientReportBadBlock.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestProcessCorruptBlocks.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend3.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMissingBlocksAlert.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImplTestUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlocksWithNotEnoughRacks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/FsDatasetTestUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java


> Make block corruption related tests FsDataset-agnostic. 
> --------------------------------------------------------
>
> Key: HDFS-9188
> URL: https://issues.apache.org/jira/browse/HDFS-9188
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS, test
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9188.000.patch, HDFS-9188.001.patch, 
> HDFS-9188.002.patch, HDFS-9188.003.patch, HDFS-9188.004.patch, 
> HDFS-9188.005.patch, HDFS-9188.006.patch
>
>
> Currently, HDFS does block corruption tests by directly accessing the files 
> stored on the storage directories, which assumes {{FsDatasetImpl}} is the 
> dataset implementation. However, with works like OZone (HDFS-7240) and 
> HDFS-8679, there will be different FsDataset implementations. 
> So we need a general way to run whitebox tests like corrupting blocks and crc 
> files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9188) Make block corruption related tests FsDataset-agnostic.

2015-10-14 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-9188:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   3.0.0
   Status: Resolved  (was: Patch Available)

Thanks a lot for the reviews, [~cmccabe].

Committed to trunk and branch-2.

> Make block corruption related tests FsDataset-agnostic. 
> --------------------------------------------------------
>
> Key: HDFS-9188
> URL: https://issues.apache.org/jira/browse/HDFS-9188
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS, test
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9188.000.patch, HDFS-9188.001.patch, 
> HDFS-9188.002.patch, HDFS-9188.003.patch, HDFS-9188.004.patch, 
> HDFS-9188.005.patch, HDFS-9188.006.patch
>
>
> Currently, HDFS does block corruption tests by directly accessing the files 
> stored on the storage directories, which assumes {{FsDatasetImpl}} is the 
> dataset implementation. However, with works like OZone (HDFS-7240) and 
> HDFS-8679, there will be different FsDataset implementations. 
> So we need a general way to run whitebox tests like corrupting blocks and crc 
> files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4015) Safemode should count and report orphaned blocks

2015-10-14 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958239#comment-14958239
 ] 

Mingliang Liu commented on HDFS-4015:
-

Thanks for the confirmation (and for the patch). I think {{leaveSafeMode}} 
depends on {{isInStartupSafeMode()}}, which is false in this case.

> Safemode should count and report orphaned blocks
> 
>
> Key: HDFS-4015
> URL: https://issues.apache.org/jira/browse/HDFS-4015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Anu Engineer
> Attachments: HDFS-4015.001.patch, HDFS-4015.002.patch, 
> HDFS-4015.003.patch, HDFS-4015.004.patch, HDFS-4015.005.patch
>
>
> The safemode status currently reports the number of unique reported blocks 
> compared to the total number of blocks referenced by the namespace. However, 
> it does not report the inverse: blocks which are reported by datanodes but 
> not referenced by the namespace.
> In the case that an admin accidentally starts up from an old image, this can 
> be confusing: safemode and fsck will show "corrupt files", which are the 
> files which actually have been deleted but got resurrected by restarting from 
> the old image. This will convince them that they can safely force leave 
> safemode and remove these files -- after all, they know that those files 
> should really have been deleted. However, they're not aware that leaving 
> safemode will also unrecoverably delete a bunch of other block files which 
> have been orphaned due to the namespace rollback.
> I'd like to consider reporting something like: "90 of expected 100 
> blocks have been reported. Additionally, 1 blocks have been reported 
> which do not correspond to any file in the namespace. Forcing exit of 
> safemode will unrecoverably remove those data blocks"
> Whether this statistic is also used for some kind of "inverse safe mode" is 
> the logical next step, but just reporting it as a warning seems easy enough 
> to accomplish and worth doing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9223) Code cleanup for DatanodeDescriptor and HeartbeatManager

2015-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958235#comment-14958235
 ] 

Hudson commented on HDFS-9223:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #532 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/532/])
HDFS-9223. Code cleanup for DatanodeDescriptor and HeartbeatManager. (jing9: 
rev be7a0add8b6561d3c566237cc0370b06e7f32bb4)
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStats.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ReplicaUnderConstruction.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlocksMap.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockUnderConstructionFeature.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CacheManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeDescriptor.java


> Code cleanup for DatanodeDescriptor and HeartbeatManager
> 
>
> Key: HDFS-9223
> URL: https://issues.apache.org/jira/browse/HDFS-9223
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9223.000.patch, HDFS-9223.001.patch, 
> HDFS-9223.002.patch, HDFS-9223.003.patch
>
>
> Some code cleanup for {{DatanodeDescriptor}} and {{HeartbeatManager}}. The 
> changes include:
> # Change {{DatanodeDescriptor#isAlive}} and {{DatanodeDescriptor#needKeyUpdate}} 
> from public to private
> # Use EnumMap for {{HeartbeatManager#storageTypeStatesMap}}
> # Move the {{isInStartupSafeMode}} out of the namesystem lock in 
> {{heartbeatCheck}}
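
As an aside for readers, change #2 above can be illustrated with a small 
self-contained sketch (stand-in enum and names, not the committed code):
{code}
// Hypothetical sketch of item #2: an EnumMap keyed by the storage type avoids
// hashing and stays compact when the key space is a fixed set of enum
// constants.
import java.util.EnumMap;
import java.util.Map;

class StorageTypeStatsSketch {
  enum StorageType { DISK, SSD, ARCHIVE, RAM_DISK }  // stand-in, not HDFS's

  private final Map<StorageType, Long> storageTypeStatesMap =
      new EnumMap<>(StorageType.class);

  void addCapacity(StorageType type, long delta) {
    // Accumulate per-storage-type totals without null checks at call sites.
    storageTypeStatesMap.merge(type, delta, Long::sum);
  }
}
{code}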



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9246) TestGlobPaths#pTestCurlyBracket is failing

2015-10-14 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958232#comment-14958232
 ] 

Allen Wittenauer commented on HDFS-9246:


Meh, actually, it looks like we just need to swap out the exception thrown.  
I'm guessing they don't match or something.

> TestGlobPaths#pTestCurlyBracket is failing
> --
>
> Key: HDFS-9246
> URL: https://issues.apache.org/jira/browse/HDFS-9246
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>
> com.google.re2j.PatternSyntaxException: error parsing regexp: Unclosed group at pos 10: `myuser}{bc`
>   at org.apache.hadoop.fs.GlobPattern.error(GlobPattern.java:168)
>   at org.apache.hadoop.fs.GlobPattern.set(GlobPattern.java:154)
>   at org.apache.hadoop.fs.GlobPattern.<init>(GlobPattern.java:42)
>   at org.apache.hadoop.fs.GlobFilter.init(GlobFilter.java:67)
>   at org.apache.hadoop.fs.GlobFilter.<init>(GlobFilter.java:50)
>   at org.apache.hadoop.fs.Globber.doGlob(Globber.java:209)
>   at org.apache.hadoop.fs.Globber.glob(Globber.java:148)
>   at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1664)
>   at org.apache.hadoop.fs.TestGlobPaths.prepareTesting(TestGlobPaths.java:758)
>   at org.apache.hadoop.fs.TestGlobPaths.pTestCurlyBracket(TestGlobPaths.java:724)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4015) Safemode should count and report orphaned blocks

2015-10-14 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958228#comment-14958228
 ] 

Anu Engineer commented on HDFS-4015:


bq. When the operator makes the name node leave safe mode manually, the -force 
option is not checked, even if there are orphaned blocks. Is this possible? If 
true, is it expected?

If there are orphaned blocks discovered during startup safe mode, the operator 
cannot exit without {{-forceExit}}. 


> Safemode should count and report orphaned blocks
> 
>
> Key: HDFS-4015
> URL: https://issues.apache.org/jira/browse/HDFS-4015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Anu Engineer
> Attachments: HDFS-4015.001.patch, HDFS-4015.002.patch, 
> HDFS-4015.003.patch, HDFS-4015.004.patch, HDFS-4015.005.patch
>
>
> The safemode status currently reports the number of unique reported blocks 
> compared to the total number of blocks referenced by the namespace. However, 
> it does not report the inverse: blocks which are reported by datanodes but 
> not referenced by the namespace.
> In the case that an admin accidentally starts up from an old image, this can 
> be confusing: safemode and fsck will show "corrupt files", which are the 
> files which actually have been deleted but got resurrected by restarting from 
> the old image. This will convince them that they can safely force leave 
> safemode and remove these files -- after all, they know that those files 
> should really have been deleted. However, they're not aware that leaving 
> safemode will also unrecoverably delete a bunch of other block files which 
> have been orphaned due to the namespace rollback.
> I'd like to consider reporting something like: "90 of expected 100 
> blocks have been reported. Additionally, 1 blocks have been reported 
> which do not correspond to any file in the namespace. Forcing exit of 
> safemode will unrecoverably remove those data blocks"
> Whether this statistic is also used for some kind of "inverse safe mode" is 
> the logical next step, but just reporting it as a warning seems easy enough 
> to accomplish and worth doing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9246) TestGlobPaths#pTestCurlyBracket is failing

2015-10-14 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958223#comment-14958223
 ] 

Allen Wittenauer commented on HDFS-9246:


The exception message changed with us swapping out the broken regex handler.  
So this is an extremely simple patch.  (I'm not going to ask why HDFS is 
running effectively the same tests as common ... )
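
Concretely, the fix should be on the order of the following hypothetical test 
body (a sketch only; the re2j exception type is taken from the stack trace 
quoted below):
{code}
import static org.junit.Assert.fail;

import java.io.IOException;
import org.junit.Test;

public class CurlyBracketSketch {
  @Test
  public void pTestCurlyBracket() throws IOException {
    try {
      // Same malformed curly-bracket pattern as in the stack trace below.
      new org.apache.hadoop.fs.GlobFilter("myuser}{bc");
      fail("Expected a PatternSyntaxException for the unclosed group");
    } catch (com.google.re2j.PatternSyntaxException expected) {
      // With the re2j-backed GlobPattern this is the type actually thrown;
      // the old test expected java.util.regex.PatternSyntaxException here.
    }
  }
}
{code}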

> TestGlobPaths#pTestCurlyBracket is failing
> --
>
> Key: HDFS-9246
> URL: https://issues.apache.org/jira/browse/HDFS-9246
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>
> com.google.re2j.PatternSyntaxException: error parsing regexp: Unclosed group at pos 10: `myuser}{bc`
>   at org.apache.hadoop.fs.GlobPattern.error(GlobPattern.java:168)
>   at org.apache.hadoop.fs.GlobPattern.set(GlobPattern.java:154)
>   at org.apache.hadoop.fs.GlobPattern.<init>(GlobPattern.java:42)
>   at org.apache.hadoop.fs.GlobFilter.init(GlobFilter.java:67)
>   at org.apache.hadoop.fs.GlobFilter.<init>(GlobFilter.java:50)
>   at org.apache.hadoop.fs.Globber.doGlob(Globber.java:209)
>   at org.apache.hadoop.fs.Globber.glob(Globber.java:148)
>   at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1664)
>   at org.apache.hadoop.fs.TestGlobPaths.prepareTesting(TestGlobPaths.java:758)
>   at org.apache.hadoop.fs.TestGlobPaths.pTestCurlyBracket(TestGlobPaths.java:724)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9210) Fix some misuse of %n in VolumeScanner#printStats

2015-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958204#comment-14958204
 ] 

Hudson commented on HDFS-9210:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #497 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/497/])
HDFS-9210. Fix some misuse of %n in VolumeScanner#printStats. (xyao: rev 
3d5085595286c0231f66543d1509247ad4bb5739)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java
Revert "HDFS-9210. Fix some misuse of %n in VolumeScanner#printStats. (xyao: rev a8070259f8384021bd6196e7343f1cc23de89b1c)
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Fix some misuse of %n in VolumeScanner#printStats
> -
>
> Key: HDFS-9210
> URL: https://issues.apache.org/jira/browse/HDFS-9210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9210.00.patch, HDFS-9210.01.patch
>
>
> Found 2 extra "%n" in the VolumeScanner report, and some lines are not well 
> formatted, as shown below. This JIRA is opened to fix the format issue.
> {code}
> Block scanner information for volume DS-93fb2503-de00-4f98-a8bc-c2bc13b8f0f7 with base path /hadoop/hdfs/data%nBytes verified in last hour   : 136882014
> Blocks scanned in current period  : 5
> Blocks scanned since restart  : 5
> Block pool scans since restart: 0
> Block scan errors since restart   : 0
> Hours until next block pool scan  : 476.000
> Last block scanned: BP-1792969149-192.168.70.101-1444150984999:blk_1073742088_1274
> More blocks to scan in period : false
> %n
> {code}
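
A minimal sketch of the underlying failure mode (not VolumeScanner's actual 
code): {{%n}} is only expanded by {{java.util.Formatter}}, so it has to sit 
inside a format string rather than being concatenated into plain output.
{code}
public class PercentNDemo {
  public static void main(String[] args) {
    // Bug: "%n" concatenated into a plain string is printed literally.
    StringBuilder report = new StringBuilder();
    report.append("with base path /hadoop/hdfs/data" + "%n");
    System.out.print(report);  // prints ...data%n with no newline

    // Fix: only a format string gets "%n" expanded to the line separator.
    System.out.print(String.format("with base path %s%n", "/hadoop/hdfs/data"));
  }
}
{code}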



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9238) Update TestFileCreation#testLeaseExpireHardLimit() to avoid using DataNodeTestUtils#getFile()

2015-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958203#comment-14958203
 ] 

Hudson commented on HDFS-9238:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #497 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/497/])
HDFS-9238. Update TestFileCreation.testLeaseExpireHardLimit() to avoid (lei: 
rev ba3c19787849a9cb9f805e2b6ef0f8485aa68f06)
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Update TestFileCreation#testLeaseExpireHardLimit() to avoid using 
> DataNodeTestUtils#getFile()
> -
>
> Key: HDFS-9238
> URL: https://issues.apache.org/jira/browse/HDFS-9238
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS, test
>Affects Versions: 2.7.1
>Reporter: Tony Wu
>Assignee: Tony Wu
>Priority: Trivial
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9238.001.patch
>
>
> TestFileCreation#testLeaseExpireHardLimit uses DataNodeTestUtils#getFile() to 
> open, read and verify blocks written on the DN. It’s better to use 
> getBlockInputStream() which does exactly the same thing but hides the detail 
> of getting the block file on disk.
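
For illustration, the suggested pattern could look roughly like this (a hedged 
sketch: the helper and its test context are illustrative, while 
{{getBlockInputStream()}} is the dataset-interface method named above):
{code}
import java.io.IOException;
import java.io.InputStream;
import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
import org.apache.hadoop.hdfs.server.datanode.DataNode;

class BlockReadSketch {
  // Read block contents through the dataset interface so the caller never
  // needs to know where the block file lives on disk.
  static int readFirstByte(DataNode dn, ExtendedBlock block)
      throws IOException {
    try (InputStream in = dn.getFSDataset().getBlockInputStream(block, 0)) {
      return in.read();
    }
  }
}
{code}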



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9223) Code cleanup for DatanodeDescriptor and HeartbeatManager

2015-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958173#comment-14958173
 ] 

Hudson commented on HDFS-9223:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #545 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/545/])
HDFS-9223. Code cleanup for DatanodeDescriptor and HeartbeatManager. (jing9: 
rev be7a0add8b6561d3c566237cc0370b06e7f32bb4)
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeDescriptor.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStats.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CacheManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ReplicaUnderConstruction.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlocksMap.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockUnderConstructionFeature.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Code cleanup for DatanodeDescriptor and HeartbeatManager
> 
>
> Key: HDFS-9223
> URL: https://issues.apache.org/jira/browse/HDFS-9223
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9223.000.patch, HDFS-9223.001.patch, 
> HDFS-9223.002.patch, HDFS-9223.003.patch
>
>
> Some code cleanup for {{DatanodeDescriptor}} and {{HeartbeatManager}}. The 
> changes include:
> # Change {{DatanodeDescriptor#isAlive}} and {{DatanodeDescriptor#needKeyUpdate}} 
> from public to private
> # Use EnumMap for {{HeartbeatManager#storageTypeStatesMap}}
> # Move the {{isInStartupSafeMode}} out of the namesystem lock in 
> {{heartbeatCheck}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8766) Implement a libhdfs(3) compatible API

2015-10-14 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958164#comment-14958164
 ] 

Haohui Mai commented on HDFS-8766:
--

As far as I can see there are multiple issues with the current patch -- I don't 
think it's ready yet.

I'll give more detailed comments once it's rebased on top of HDFS-9207. 
Assigning it back to James.

> Implement a libhdfs(3) compatible API
> -
>
> Key: HDFS-8766
> URL: https://issues.apache.org/jira/browse/HDFS-8766
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Haohui Mai
> Attachments: HDFS-8766.HDFS-8707.000.patch, 
> HDFS-8766.HDFS-8707.001.patch, HDFS-8766.HDFS-8707.002.patch, 
> HDFS-8766.HDFS-8707.003.patch, HDFS-8766.HDFS-8707.004.patch, 
> HDFS-8766.HDFS-8707.005.patch
>
>
> Add a synchronous API that is compatible with the hdfs.h header used in 
> libhdfs and libhdfs3.  This will make it possible for projects using 
> libhdfs/libhdfs3 to relink against libhdfspp with minimal changes.
> This also provides a pure C interface that can be linked against projects 
> that aren't built in C++11 mode for various reasons but use the same 
> compiler.  It also allows many other programming languages to access 
> libhdfspp through builtin FFI interfaces.
> The libhdfs API is very similar to the posix file API which makes it easier 
> for programs built using posix filesystem calls to be modified to access HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8766) Implement a libhdfs(3) compatible API

2015-10-14 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-8766:
-
Assignee: James Clampffer  (was: Haohui Mai)

> Implement a libhdfs(3) compatible API
> -
>
> Key: HDFS-8766
> URL: https://issues.apache.org/jira/browse/HDFS-8766
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-8766.HDFS-8707.000.patch, 
> HDFS-8766.HDFS-8707.001.patch, HDFS-8766.HDFS-8707.002.patch, 
> HDFS-8766.HDFS-8707.003.patch, HDFS-8766.HDFS-8707.004.patch, 
> HDFS-8766.HDFS-8707.005.patch
>
>
> Add a synchronous API that is compatible with the hdfs.h header used in 
> libhdfs and libhdfs3.  This will make it possible for projects using 
> libhdfs/libhdfs3 to relink against libhdfspp with minimal changes.
> This also provides a pure C interface that can be linked against projects 
> that aren't built in C++11 mode for various reasons but use the same 
> compiler.  It also allows many other programming languages to access 
> libhdfspp through builtin FFI interfaces.
> The libhdfs API is very similar to the posix file API which makes it easier 
> for programs built using posix filesystem calls to be modified to access HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9246) TestGlobPaths#pTestCurlyBracket is failing

2015-10-14 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958159#comment-14958159
 ] 

Mingliang Liu commented on HDFS-9246:
-

The test failure can be reproduced locally on Linux and Mac.

> TestGlobPaths#pTestCurlyBracket is failing
> --
>
> Key: HDFS-9246
> URL: https://issues.apache.org/jira/browse/HDFS-9246
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>
> com.google.re2j.PatternSyntaxException: error parsing regexp: Unclosed group at pos 10: `myuser}{bc`
>   at org.apache.hadoop.fs.GlobPattern.error(GlobPattern.java:168)
>   at org.apache.hadoop.fs.GlobPattern.set(GlobPattern.java:154)
>   at org.apache.hadoop.fs.GlobPattern.<init>(GlobPattern.java:42)
>   at org.apache.hadoop.fs.GlobFilter.init(GlobFilter.java:67)
>   at org.apache.hadoop.fs.GlobFilter.<init>(GlobFilter.java:50)
>   at org.apache.hadoop.fs.Globber.doGlob(Globber.java:209)
>   at org.apache.hadoop.fs.Globber.glob(Globber.java:148)
>   at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1664)
>   at org.apache.hadoop.fs.TestGlobPaths.prepareTesting(TestGlobPaths.java:758)
>   at org.apache.hadoop.fs.TestGlobPaths.pTestCurlyBracket(TestGlobPaths.java:724)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4015) Safemode should count and report orphaned blocks

2015-10-14 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958157#comment-14958157
 ] 

Mingliang Liu commented on HDFS-4015:
-

One quick question:
Consider that the name node is in the extension period of startup safe mode, 
and the operator then sets safe mode manually. When the operator makes the 
name node leave safe mode manually, the {{-force}} option is not checked, even 
if there are orphaned blocks. Is this possible? If true, is it expected?

Another minor comment is that the following code could be factored out and 
re-used:
{code}
+  LOG.error("Refusing to leave safe mode without a force flag. " +
+      "Exiting safe mode will cause a deletion of " + blockManager
+      .getBytesInFuture() + " byte(s). Please use " +
+      "-forceExit flag to exit safe mode forcefully and data loss is " +
+      "acceptable.");
{code}
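
For example (a hypothetical sketch, not the actual patch; the class, method, 
and parameter names are illustrative), the message could move into a single 
helper that every exit path calls:
{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

class SafeModeMessages {
  private static final Log LOG = LogFactory.getLog(SafeModeMessages.class);

  // Both manual-exit code paths would call this, so the wording and the
  // byte count stay consistent in one place.
  static void logForceExitRequired(long bytesInFuture) {
    LOG.error("Refusing to leave safe mode without a force flag. "
        + "Exiting safe mode will cause a deletion of " + bytesInFuture
        + " byte(s). Please use -forceExit flag to exit safe mode forcefully "
        + "and data loss is acceptable.");
  }
}
{code}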

> Safemode should count and report orphaned blocks
> 
>
> Key: HDFS-4015
> URL: https://issues.apache.org/jira/browse/HDFS-4015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Anu Engineer
> Attachments: HDFS-4015.001.patch, HDFS-4015.002.patch, 
> HDFS-4015.003.patch, HDFS-4015.004.patch, HDFS-4015.005.patch
>
>
> The safemode status currently reports the number of unique reported blocks 
> compared to the total number of blocks referenced by the namespace. However, 
> it does not report the inverse: blocks which are reported by datanodes but 
> not referenced by the namespace.
> In the case that an admin accidentally starts up from an old image, this can 
> be confusing: safemode and fsck will show "corrupt files", which are the 
> files which actually have been deleted but got resurrected by restarting from 
> the old image. This will convince them that they can safely force leave 
> safemode and remove these files -- after all, they know that those files 
> should really have been deleted. However, they're not aware that leaving 
> safemode will also unrecoverably delete a bunch of other block files which 
> have been orphaned due to the namespace rollback.
> I'd like to consider reporting something like: "90 of expected 100 
> blocks have been reported. Additionally, 1 blocks have been reported 
> which do not correspond to any file in the namespace. Forcing exit of 
> safemode will unrecoverably remove those data blocks"
> Whether this statistic is also used for some kind of "inverse safe mode" is 
> the logical next step, but just reporting it as a warning seems easy enough 
> to accomplish and worth doing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6101) TestReplaceDatanodeOnFailure fails occasionally

2015-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958141#comment-14958141
 ] 

Hadoop QA commented on HDFS-6101:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  10m 43s | Pre-patch trunk has 1 extant Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to include 1 new or modified test files. |
| {color:green}+1{color} | javac |  10m 36s | There were no new javac warning messages. |
| {color:red}-1{color} | release audit |   0m 21s | The applied patch generated 1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 58s | There were no new checkstyle issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that end in whitespace. |
| {color:green}+1{color} | install |   1m 57s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 43s | The patch built with eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 19s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   1m 21s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  63m 30s | Tests failed in hadoop-hdfs. |
| | |  94m 32s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.fs.TestGlobPaths |
|   | hadoop.hdfs.server.namenode.TestFileContextAcl |
|   | hadoop.hdfs.TestReplaceDatanodeOnFailure |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
| Timed out tests | org.apache.hadoop.hdfs.server.balancer.TestBalancer |
|   | org.apache.hadoop.hdfs.server.blockmanagement.TestPendingReplication |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12766672/HDFS-6101.003.patch |
| Optional Tests | javac unit findbugs checkstyle |
| git revision | trunk / be7a0ad |
| Pre-patch Findbugs warnings | https://builds.apache.org/job/PreCommit-HDFS-Build/12993/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html |
| Release Audit | https://builds.apache.org/job/PreCommit-HDFS-Build/12993/artifact/patchprocess/patchReleaseAuditProblems.txt |
| hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/12993/artifact/patchprocess/testrun_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/12993/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/12993/console |


This message was automatically generated.

> TestReplaceDatanodeOnFailure fails occasionally
> ---
>
> Key: HDFS-6101
> URL: https://issues.apache.org/jira/browse/HDFS-6101
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-6101.001.patch, HDFS-6101.002.patch, 
> HDFS-6101.003.patch, TestReplaceDatanodeOnFailure.log
>
>
> Exception details in a comment below.
> The failure repros on both OS X and Linux if I run the test ~10 times in a 
> loop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9210) Fix some misuse of %n in VolumeScanner#printStats

2015-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958133#comment-14958133
 ] 

Hadoop QA commented on HDFS-9210:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  18m 30s | Pre-patch trunk has 1 extant Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any @author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| {color:green}+1{color} | javac |   8m  5s | There were no new javac warning messages. |
| {color:green}+1{color} | javadoc |  10m 34s | There were no new javadoc warning messages. |
| {color:red}-1{color} | release audit |   0m 17s | The applied patch generated 1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 25s | There were no new checkstyle issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that end in whitespace. |
| {color:green}+1{color} | install |   1m 28s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 30s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 24s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  53m 14s | Tests failed in hadoop-hdfs. |
| | | 100m  3s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.fs.TestGlobPaths |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.server.blockmanagement.TestNodeCount |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12766652/HDFS-9210.01.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / be7a0ad |
| Pre-patch Findbugs warnings | https://builds.apache.org/job/PreCommit-HDFS-Build/12992/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html |
| Release Audit | https://builds.apache.org/job/PreCommit-HDFS-Build/12992/artifact/patchprocess/patchReleaseAuditProblems.txt |
| hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/12992/artifact/patchprocess/testrun_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/12992/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/12992/console |


This message was automatically generated.

> Fix some misuse of %n in VolumeScanner#printStats
> -
>
> Key: HDFS-9210
> URL: https://issues.apache.org/jira/browse/HDFS-9210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9210.00.patch, HDFS-9210.01.patch
>
>
> Found 2 extra "%n" in the VolumeScanner report, and some lines are not well 
> formatted, as shown below. This JIRA is opened to fix the format issue.
> {code}
> Block scanner information for volume DS-93fb2503-de00-4f98-a8bc-c2bc13b8f0f7 with base path /hadoop/hdfs/data%nBytes verified in last hour   : 136882014
> Blocks scanned in current period  : 5
> Blocks scanned since restart  : 5
> Block pool scans since restart: 0
> Block scan errors since restart   : 0
> Hours until next block pool scan  : 476.000
> Last block scanned: BP-1792969149-192.168.70.101-1444150984999:blk_1073742088_1274
> More blocks to scan in period : false
> %n
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9188) Make block corruption related tests FsDataset-agnostic.

2015-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958125#comment-14958125
 ] 

Hadoop QA commented on HDFS-9188:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |   8m  3s | Pre-patch trunk has 1 extant Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to include 14 new or modified test files. |
| {color:green}+1{color} | javac |   8m  4s | There were no new javac warning messages. |
| {color:red}-1{color} | release audit |   0m 16s | The applied patch generated 1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 25s | There were no new checkstyle issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that end in whitespace. |
| {color:green}+1{color} | install |   1m 30s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 27s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   1m  2s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  53m  0s | Tests failed in hadoop-hdfs. |
| | |  76m 26s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestRecoverStripedFile |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.fs.TestGlobPaths |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/1275/HDFS-9188.006.patch |
| Optional Tests | javac unit findbugs checkstyle |
| git revision | trunk / be7a0ad |
| Pre-patch Findbugs warnings | https://builds.apache.org/job/PreCommit-HDFS-Build/12995/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html |
| Release Audit | https://builds.apache.org/job/PreCommit-HDFS-Build/12995/artifact/patchprocess/patchReleaseAuditProblems.txt |
| hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/12995/artifact/patchprocess/testrun_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/12995/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/12995/console |


This message was automatically generated.

> Make block corruption related tests FsDataset-agnostic. 
> 
>
> Key: HDFS-9188
> URL: https://issues.apache.org/jira/browse/HDFS-9188
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS, test
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-9188.000.patch, HDFS-9188.001.patch, 
> HDFS-9188.002.patch, HDFS-9188.003.patch, HDFS-9188.004.patch, 
> HDFS-9188.005.patch, HDFS-9188.006.patch
>
>
> Currently, HDFS does block corruption tests by directly accessing the files 
> stored on the storage directories, which assumes {{FsDatasetImpl}} is the 
> dataset implementation. However, with work like OZone (HDFS-7240) and 
> HDFS-8679, there will be different FsDataset implementations. 
> So we need a general way to run whitebox tests like corrupting blocks and 
> CRC files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9246) TestGlobPaths#pTestCurlyBracket is failing

2015-10-14 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HDFS-9246:
--

 Summary: TestGlobPaths#pTestCurlyBracket is failing
 Key: HDFS-9246
 URL: https://issues.apache.org/jira/browse/HDFS-9246
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


com.google.re2j.PatternSyntaxException: error parsing regexp: Unclosed group at pos 10: `myuser}{bc`
at org.apache.hadoop.fs.GlobPattern.error(GlobPattern.java:168)
at org.apache.hadoop.fs.GlobPattern.set(GlobPattern.java:154)
at org.apache.hadoop.fs.GlobPattern.<init>(GlobPattern.java:42)
at org.apache.hadoop.fs.GlobFilter.init(GlobFilter.java:67)
at org.apache.hadoop.fs.GlobFilter.<init>(GlobFilter.java:50)
at org.apache.hadoop.fs.Globber.doGlob(Globber.java:209)
at org.apache.hadoop.fs.Globber.glob(Globber.java:148)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1664)
at org.apache.hadoop.fs.TestGlobPaths.prepareTesting(TestGlobPaths.java:758)
at org.apache.hadoop.fs.TestGlobPaths.pTestCurlyBracket(TestGlobPaths.java:724)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9184) Logging HDFS operation's caller context into audit logs

2015-10-14 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9184:

Attachment: HDFS-9184.006.patch

The v6 patch fixes checkstyle and findbugs warnings.

> Logging HDFS operation's caller context into audit logs
> ---
>
> Key: HDFS-9184
> URL: https://issues.apache.org/jira/browse/HDFS-9184
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9184.000.patch, HDFS-9184.001.patch, 
> HDFS-9184.002.patch, HDFS-9184.003.patch, HDFS-9184.004.patch, 
> HDFS-9184.005.patch, HDFS-9184.006.patch
>
>
> For a given HDFS operation (e.g. delete file), it's very helpful to track 
> which upper level job issues it. The upper level callers may be specific 
> Oozie tasks, MR jobs, and Hive queries. One scenario is that the namenode 
> (NN) is abused/spammed; the operator may want to know immediately which MR 
> job should be blamed so that she can kill it. To this end, the caller context 
> contains at least the application-dependent "tracking id".
> There are several existing techniques that may be related to this problem.
> 1. Currently the HDFS audit log tracks the user of the operation, which is 
> obviously not enough. It's common that the same user issues multiple jobs at 
> the same time. Even for a single top level task, tracking back to a specific 
> caller in a chain of operations of the whole workflow (e.g. Oozie -> Hive -> 
> Yarn) is hard, if not impossible.
> 2. HDFS integrated {{htrace}} support for providing tracing information 
> across multiple layers. The spans are created in many places, interconnected 
> like a tree structure, which relies on offline analysis across the RPC 
> boundary. For this use case, {{htrace}} has to be enabled at a 100% sampling 
> rate, which introduces significant overhead. Moreover, passing additional 
> information (via annotations) other than the span id from the root of the 
> tree to a leaf is significant additional work.
> 3. In [HDFS-4680 | https://issues.apache.org/jira/browse/HDFS-4680], there 
> is some related discussion on this topic. The final patch implemented the 
> tracking id as a part of the delegation token. This protects the tracking 
> information from being changed or impersonated. However, kerberos 
> authenticated connections or insecure connections don't have tokens. 
> [HADOOP-8779] proposes to use tokens in all the scenarios, but that might 
> mean changes to several upstream projects and is a major change in their 
> security implementation.
> We propose another approach to address this problem. We also treat the HDFS 
> audit log as a good place for after-the-fact root cause analysis. We propose 
> to put the caller id (e.g. the Hive query id) in threadlocals. Specifically, 
> on the client side the threadlocal object is passed to the NN as an optional 
> part of the RPC header, while on the server side the NN retrieves it from 
> the header and puts it into the {{Handler}}'s threadlocals. Finally, in 
> {{FSNamesystem}}, the HDFS audit logger will record the caller context for 
> each operation. In this way, the existing code is not affected.
> It is still challenging to keep a "lying" client from abusing the caller 
> context. Our proposal is to add a {{signature}} field to the caller context. 
> The client chooses to provide its signature along with the caller id. The 
> operator may need to validate the signature at the time of offline analysis. 
> The NN is not responsible for validating the signature online.
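
For illustration only, the threadlocal part of the proposal above could look 
roughly like this (all names are hypothetical, not the attached patch): the 
client stores a per-thread context, the RPC layer would copy it into an 
optional header field, and the server handler restores it before the audit 
logger runs.
{code}
public final class CallerContext {
  private static final ThreadLocal<CallerContext> CURRENT =
      new ThreadLocal<CallerContext>();

  private final String context;    // e.g. a Hive query id or an MR job id
  private final byte[] signature;  // optional; validated offline, not by NN

  public CallerContext(String context, byte[] signature) {
    this.context = context;
    this.signature = signature;
  }

  public String getContext() { return context; }
  public byte[] getSignature() { return signature; }

  // Set by the client before issuing RPCs, and by the server handler after
  // reading the header; read by the audit logger on the server side.
  public static void setCurrent(CallerContext ctx) { CURRENT.set(ctx); }
  public static CallerContext getCurrent() { return CURRENT.get(); }
}
{code}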



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9210) Fix some misuse of %n in VolumeScanner#printStats

2015-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958104#comment-14958104
 ] 

Hudson commented on HDFS-9210:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1268 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1268/])
Revert "HDFS-9210. Fix some misuse of %n in VolumeScanner#printStats. (xyao: 
rev a8070259f8384021bd6196e7343f1cc23de89b1c)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java


> Fix some misuse of %n in VolumeScanner#printStats
> -
>
> Key: HDFS-9210
> URL: https://issues.apache.org/jira/browse/HDFS-9210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9210.00.patch, HDFS-9210.01.patch
>
>
> Found 2 extra "%n" in the VolumeScanner report, and some lines are not well 
> formatted, as shown below. This JIRA is opened to fix the format issue.
> {code}
> Block scanner information for volume DS-93fb2503-de00-4f98-a8bc-c2bc13b8f0f7 with base path /hadoop/hdfs/data%nBytes verified in last hour   : 136882014
> Blocks scanned in current period  : 5
> Blocks scanned since restart  : 5
> Block pool scans since restart: 0
> Block scan errors since restart   : 0
> Hours until next block pool scan  : 476.000
> Last block scanned: BP-1792969149-192.168.70.101-1444150984999:blk_1073742088_1274
> More blocks to scan in period : false
> %n
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9223) Code cleanup for DatanodeDescriptor and HeartbeatManager

2015-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958105#comment-14958105
 ] 

Hudson commented on HDFS-9223:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1268 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1268/])
HDFS-9223. Code cleanup for DatanodeDescriptor and HeartbeatManager. (jing9: 
rev be7a0add8b6561d3c566237cc0370b06e7f32bb4)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeDescriptor.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlocksMap.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CacheManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockUnderConstructionFeature.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStats.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ReplicaUnderConstruction.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java


> Code cleanup for DatanodeDescriptor and HeartbeatManager
> 
>
> Key: HDFS-9223
> URL: https://issues.apache.org/jira/browse/HDFS-9223
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9223.000.patch, HDFS-9223.001.patch, 
> HDFS-9223.002.patch, HDFS-9223.003.patch
>
>
> Some code cleanup for {{DatanodeDescriptor}} and {{HeartbeatManager}}. The 
> changes include:
> # Change {{DatanodeDescriptor#isAlive}} and {{DatanodeDescriptor#needKeyUpdate}} 
> from public to private
> # Use EnumMap for {{HeartbeatManager#storageTypeStatesMap}}
> # Move the {{isInStartupSafeMode}} out of the namesystem lock in 
> {{heartbeatCheck}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9223) Code cleanup for DatanodeDescriptor and HeartbeatManager

2015-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958086#comment-14958086
 ] 

Hudson commented on HDFS-9223:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8638 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8638/])
HDFS-9223. Code cleanup for DatanodeDescriptor and HeartbeatManager. (jing9: 
rev be7a0add8b6561d3c566237cc0370b06e7f32bb4)
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockUnderConstructionFeature.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlocksMap.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ReplicaUnderConstruction.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CacheManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeDescriptor.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStats.java


> Code cleanup for DatanodeDescriptor and HeartbeatManager
> 
>
> Key: HDFS-9223
> URL: https://issues.apache.org/jira/browse/HDFS-9223
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9223.000.patch, HDFS-9223.001.patch, 
> HDFS-9223.002.patch, HDFS-9223.003.patch
>
>
> Some code cleanup for {{DatanodeDescriptor}} and {{HeartbeatManager}}. The 
> changes include:
> # Change {{DatanodeDescriptor#isAlive}} and {{DatanodeDescriptor#needKeyUpdate}} 
> from public to private
> # Use EnumMap for {{HeartbeatManager#storageTypeStatesMap}}
> # Move the {{isInStartupSafeMode}} out of the namesystem lock in 
> {{heartbeatCheck}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4015) Safemode should count and report orphaned blocks

2015-10-14 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958072#comment-14958072
 ] 

Arpit Agarwal commented on HDFS-4015:
-

Thanks [~anu]. +1 pending Jenkins. Will hold off committing until tomorrow in 
case [~liuml07] wants to take another look.

bq. The new patch fixes that and also updates how RollBack is detected based on 
off-line comments.
FTR, we felt detecting rollback from the startup option was safer than 
overloading the meaning of {{shouldPostponeBlocksFromFuture}}.

> Safemode should count and report orphaned blocks
> 
>
> Key: HDFS-4015
> URL: https://issues.apache.org/jira/browse/HDFS-4015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Anu Engineer
> Attachments: HDFS-4015.001.patch, HDFS-4015.002.patch, 
> HDFS-4015.003.patch, HDFS-4015.004.patch, HDFS-4015.005.patch
>
>
> The safemode status currently reports the number of unique reported blocks 
> compared to the total number of blocks referenced by the namespace. However, 
> it does not report the inverse: blocks which are reported by datanodes but 
> not referenced by the namespace.
> In the case that an admin accidentally starts up from an old image, this can 
> be confusing: safemode and fsck will show "corrupt files", which are the 
> files which actually have been deleted but got resurrected by restarting from 
> the old image. This will convince them that they can safely force leave 
> safemode and remove these files -- after all, they know that those files 
> should really have been deleted. However, they're not aware that leaving 
> safemode will also unrecoverably delete a bunch of other block files which 
> have been orphaned due to the namespace rollback.
> I'd like to consider reporting something like: "90 of expected 100 
> blocks have been reported. Additionally, 1 blocks have been reported 
> which do not correspond to any file in the namespace. Forcing exit of 
> safemode will unrecoverably remove those data blocks"
> Whether this statistic is also used for some kind of "inverse safe mode" is 
> the logical next step, but just reporting it as a warning seems easy enough 
> to accomplish and worth doing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9245) Fix findbugs warnings in hdfs-nfs/WriteCtx

2015-10-14 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9245:

Target Version/s: 2.8.0

> Fix findbugs warnings in hdfs-nfs/WriteCtx
> --
>
> Key: HDFS-9245
> URL: https://issues.apache.org/jira/browse/HDFS-9245
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>
> There are findbugs warnings as follows, brought by [HDFS-9092].
> It seems fine to ignore them by writing a filter rule in the 
> {{findbugsExcludeFile.xml}} file. 
> {code:xml}
> <BugInstance instanceHash="592511935f7cb9e5f97ef4c99a6c46c2" instanceOccurrenceNum="0" priority="2" abbrev="IS" type="IS2_INCONSISTENT_SYNC" cweid="366" instanceOccurrenceMax="0">
> <ShortMessage>Inconsistent synchronization</ShortMessage>
> <LongMessage>Inconsistent synchronization of org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.offset; locked 75% of time</LongMessage>
> <SourceLine classname="org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx" start="40" sourcepath="org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java" sourcefile="WriteCtx.java" end="314">
> <Message>At WriteCtx.java:[lines 40-314]</Message>
> </SourceLine>
> <Message>In class org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx</Message>
> </BugInstance>
> {code}
> and
> {code:xml}
> <BugInstance instanceHash="4f3daa339eb819220f26c998369b02fe" instanceOccurrenceNum="0" priority="2" abbrev="IS" type="IS2_INCONSISTENT_SYNC" cweid="366" instanceOccurrenceMax="0">
> <ShortMessage>Inconsistent synchronization</ShortMessage>
> <LongMessage>Inconsistent synchronization of org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.originalCount; locked 50% of time</LongMessage>
> <SourceLine classname="org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx" start="40" sourcepath="org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java" sourcefile="WriteCtx.java" end="314">
> <Message>At WriteCtx.java:[lines 40-314]</Message>
> </SourceLine>
> <Message>In class org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx</Message>
> <Field classname="org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx" name="originalCount" primary="true" signature="I">
> <SourceLine classname="org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx" sourcepath="org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java" sourcefile="WriteCtx.java">
> <Message>In WriteCtx.java</Message>
> </SourceLine>
> <Message>Field org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.originalCount</Message>
> </Field>
> </BugInstance>
> {code}
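
For illustration, a filter rule of the kind being suggested might look like 
the following sketch, which uses the standard FindBugs filter syntax (whether 
to match the whole class or each field individually is a judgment call):
{code:xml}
<!-- Sketch of a possible exclude rule, not the committed filter. -->
<Match>
  <Class name="org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx"/>
  <Bug pattern="IS2_INCONSISTENT_SYNC"/>
</Match>
{code}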



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9245) Fix findbugs warnings in hdfs-nfs/WriteCtx

2015-10-14 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9245:

Description: 
There are findbugs warnings as follows, brought by [HDFS-9092].

It seems fine to ignore them by writing a filter rule in the 
{{findbugsExcludeFile.xml}} file. 

{code:xml}
<BugInstance instanceHash="592511935f7cb9e5f97ef4c99a6c46c2" instanceOccurrenceNum="0" priority="2" abbrev="IS" type="IS2_INCONSISTENT_SYNC" cweid="366" instanceOccurrenceMax="0">
<ShortMessage>Inconsistent synchronization</ShortMessage>
<LongMessage>Inconsistent synchronization of org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.offset; locked 75% of time</LongMessage>
<SourceLine classname="org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx" start="40" sourcepath="org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java" sourcefile="WriteCtx.java" end="314">
<Message>At WriteCtx.java:[lines 40-314]</Message>
</SourceLine>
<Message>In class org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx</Message>
</BugInstance>
{code}

and

{code:xml}
<BugInstance instanceHash="4f3daa339eb819220f26c998369b02fe" instanceOccurrenceNum="0" priority="2" abbrev="IS" type="IS2_INCONSISTENT_SYNC" cweid="366" instanceOccurrenceMax="0">
<ShortMessage>Inconsistent synchronization</ShortMessage>
<LongMessage>Inconsistent synchronization of org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.originalCount; locked 50% of time</LongMessage>
<SourceLine classname="org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx" start="40" sourcepath="org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java" sourcefile="WriteCtx.java" end="314">
<Message>At WriteCtx.java:[lines 40-314]</Message>
</SourceLine>
<Message>In class org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx</Message>
<Field classname="org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx" name="originalCount" primary="true" signature="I">
<SourceLine classname="org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx" sourcepath="org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java" sourcefile="WriteCtx.java">
<Message>In WriteCtx.java</Message>
</SourceLine>
<Message>Field org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.originalCount</Message>
</Field>
</BugInstance>
{code}

  was:
There are findbugs warnings as follows, which were brought by [HDFS-9092].

It seems fine to ignore them by writing a filter rule in the 
{{findbugsExcludeFile.xml}} file. 

{code:xml}
<BugInstance instanceHash="592511935f7cb9e5f97ef4c99a6c46c2" instanceOccurrenceNum="0" priority="2" abbrev="IS" type="IS2_INCONSISTENT_SYNC" cweid="366" instanceOccurrenceMax="0">
<ShortMessage>Inconsistent synchronization</ShortMessage>
<LongMessage>Inconsistent synchronization of org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.offset; locked 75% of time</LongMessage>
<SourceLine classname="org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx" start="40" sourcepath="org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java" sourcefile="WriteCtx.java" end="314">
<Message>At WriteCtx.java:[lines 40-314]</Message>
</SourceLine>
<Message>In class org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx</Message>
</BugInstance>
{code}

and

{code:xml}
<BugInstance instanceHash="4f3daa339eb819220f26c998369b02fe" instanceOccurrenceNum="0" priority="2" abbrev="IS" type="IS2_INCONSISTENT_SYNC" cweid="366" instanceOccurrenceMax="0">
<ShortMessage>Inconsistent synchronization</ShortMessage>
<LongMessage>Inconsistent synchronization of org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.originalCount; locked 50% of time</LongMessage>
<SourceLine classname="org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx" start="40" sourcepath="org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java" sourcefile="WriteCtx.java" end="314">
<Message>At WriteCtx.java:[lines 40-314]</Message>
</SourceLine>
<Message>In class org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx</Message>
<Field classname="org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx" name="originalCount" primary="true" signature="I">
<SourceLine classname="org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx" sourcepath="org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java" sourcefile="WriteCtx.java">
<Message>In WriteCtx.java</Message>
</SourceLine>
<Message>Field org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.originalCount</Message>
</Field>
</BugInstance>
{code}


> Fix findbugs warnings in hdfs-nfs/WriteCtx
> --
>
> Key: HDFS-9245
> URL: https://issues.apache.org/jira/browse/HDFS-9245
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>
> There are findbugs warnings as follows, brought by [HDFS-9092].
> It seems fine to ignore them by writing a filter rule in the 
> {{findbugsExcludeFile.xml}} file. 
> {code:xml}
> <BugInstance instanceHash="592511935f7cb9e5f97ef4c99a6c46c2" instanceOccurrenceNum="0"
>     priority="2" abbrev="IS" type="IS2_INCONSISTENT_SYNC" cweid="366"
>     instanceOccurrenceMax="0">
>   <ShortMessage>Inconsistent synchronization</ShortMessage>
>   <LongMessage>Inconsistent synchronization of
>     org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.offset; locked 75% of time</LongMessage>
>   <SourceLine classname="org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx"
>       sourcepath="org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java"
>       sourcefile="WriteCtx.java" end="314">
>     <Message>At WriteCtx.java:[lines 40-314]</Message>
>   </SourceLine>
>   <Message>In class org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx</Message>
> </BugInstance>
> {code}
> and
> {code:xml}
> <BugInstance instanceHash="4f3daa339eb819220f26c998369b02fe" instanceOccurrenceNum="0"
>     priority="2" abbrev="IS" type="IS2_INCONSISTENT_SYNC" cweid="366"
>     instanceOccurrenceMax="0">
>   <ShortMessage>Inconsistent synchronization</ShortMessage>
>   <LongMessage>Inconsistent synchronization of
>     org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.originalCount; locked 50% of time</LongMessage>
>   <SourceLine classname="org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx"
>       sourcepath="org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java"
>       sourcefile="WriteCtx.java" end="314">
>     <Message>At WriteCtx.java:[lines 40-314]</Message>
>   </SourceLine>
>   <Message>In class org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx</Message>
>   <Field classname="org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx"
>       name="originalCount" primary="true" signature="I">
>     <SourceLine classname="org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx"
>         sourcepath="org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java"
>         sourcefile="WriteCtx.java">
>       <Message>In WriteCtx.java</Message>
>     </SourceLine>
>     <Message>Field org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.originalCount</Message>
>   </Field>
> </BugInstance>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9092) Nfs silently drops overlapping write requests and causes data copying to fail

2015-10-14 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958062#comment-14958062
 ] 

Mingliang Liu commented on HDFS-9092:
-

There are new findbugs warnings in the last patch. Please comment in 
[HDFS-9245] on whether the findbugs warnings can be ignored.

> Nfs silently drops overlapping write requests and causes data copying to fail
> -
>
> Key: HDFS-9092
> URL: https://issues.apache.org/jira/browse/HDFS-9092
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.1
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Fix For: 2.8.0
>
> Attachments: HDFS-9092.001.patch, HDFS-9092.002.patch
>
>
> When NOT using the 'sync' option, the NFS writes may issue the following warning:
> org.apache.hadoop.hdfs.nfs.nfs3.OpenFileCtx: Got an overlapping write 
> (1248751616, 1249677312), nextOffset=1248752400. Silently drop it now
> and the size of data copied via NFS will stay at 1248752400.
> What happens is:
> 1. The write requests from the client are sent asynchronously. 
> 2. The NFS gateway has a handler that handles each incoming request by creating 
> an internal write request structure and putting it into a cache;
> 3. In parallel, a separate thread in the NFS gateway takes requests out of the 
> cache and writes the data to HDFS.
> The current offset is how much data has been written by the write thread in 
> step 3. The detection of an overlapping write request happens in step 2, but it 
> only checks the write request against the current offset, trimming the request 
> if necessary. Because the write requests are sent asynchronously, if two 
> requests are beyond the current offset and they overlap, the overlap is not 
> detected and both are put into the cache. This causes the symptom reported in 
> this case at step 3.
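
A fix needs to check an incoming request against the queued requests as well, not 
only against the current offset. A minimal sketch of such a range check; the method 
and map here are hypothetical illustrations, not the actual OpenFileCtx API:

{code}
// true iff [offset, offset + count) intersects a write still in the cache;
// pendingWrites maps a request's start offset to its length
static boolean overlapsPending(long offset, int count,
    SortedMap<Long, Integer> pendingWrites) {
  long end = offset + count;
  for (Map.Entry<Long, Integer> e : pendingWrites.entrySet()) {
    long pStart = e.getKey();
    long pEnd = pStart + e.getValue();
    if (offset < pEnd && pStart < end) {
      return true;  // the two byte ranges intersect
    }
  }
  return false;
}
{code}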



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9245) Fix findbugs warnings in hdfs-nfs/WriteCtx

2015-10-14 Thread Mingliang Liu (JIRA)
Mingliang Liu created HDFS-9245:
---

 Summary: Fix findbugs warnings in hdfs-nfs/WriteCtx
 Key: HDFS-9245
 URL: https://issues.apache.org/jira/browse/HDFS-9245
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Mingliang Liu
Assignee: Mingliang Liu


There are findbugs warnings as follows, which were brought by [HDFS-9092].

It seems fine to ignore them by writing a filter rule in the 
{{findbugsExcludeFile.xml}} file. 

{code:xml}
<BugInstance instanceHash="592511935f7cb9e5f97ef4c99a6c46c2" instanceOccurrenceNum="0"
    priority="2" abbrev="IS" type="IS2_INCONSISTENT_SYNC" cweid="366"
    instanceOccurrenceMax="0">
  <ShortMessage>Inconsistent synchronization</ShortMessage>
  <LongMessage>Inconsistent synchronization of
    org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.offset; locked 75% of time</LongMessage>
  <SourceLine classname="org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx"
      sourcepath="org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java"
      sourcefile="WriteCtx.java" end="314">
    <Message>At WriteCtx.java:[lines 40-314]</Message>
  </SourceLine>
  <Message>In class org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx</Message>
</BugInstance>
{code}

and

{code:xml}
<BugInstance instanceHash="4f3daa339eb819220f26c998369b02fe" instanceOccurrenceNum="0"
    priority="2" abbrev="IS" type="IS2_INCONSISTENT_SYNC" cweid="366"
    instanceOccurrenceMax="0">
  <ShortMessage>Inconsistent synchronization</ShortMessage>
  <LongMessage>Inconsistent synchronization of
    org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.originalCount; locked 50% of time</LongMessage>
  <SourceLine classname="org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx"
      sourcepath="org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java"
      sourcefile="WriteCtx.java" end="314">
    <Message>At WriteCtx.java:[lines 40-314]</Message>
  </SourceLine>
  <Message>In class org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx</Message>
  <Field classname="org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx"
      name="originalCount" primary="true" signature="I">
    <SourceLine classname="org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx"
        sourcepath="org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java"
        sourcefile="WriteCtx.java">
      <Message>In WriteCtx.java</Message>
    </SourceLine>
    <Message>Field org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.originalCount</Message>
  </Field>
</BugInstance>
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9173) Erasure Coding: Lease recovery for striped file

2015-10-14 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958058#comment-14958058
 ] 

Jing Zhao commented on HDFS-9173:
-

Thanks for updating the patch, Walter. Some minor comments on the latest patch:
# The field number of {{ecPolicy}} should be 4 (see the corrected sketch after this 
list).
{code}
   optional BlockProto truncateBlock = 3;  // New block for recovery (truncate)
+
+  optional ErasureCodingPolicyProto ecPolicy = 5;
{code}
# BlockRecord does not need to be public
# We can have more unit tests to cover different combinations of replica states 
(Finalized, Rbw, etc.) and lengths. Some tests with randomness can also help. 
This can be done in a separate jira.
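
For reference, the corrected declaration for comment 1 would read:

{code}
optional BlockProto truncateBlock = 3;  // New block for recovery (truncate)

// use the next unused field number instead of skipping 4
optional ErasureCodingPolicyProto ecPolicy = 4;
{code}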

> Erasure Coding: Lease recovery for striped file
> ---
>
> Key: HDFS-9173
> URL: https://issues.apache.org/jira/browse/HDFS-9173
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: Walter Su
> Attachments: HDFS-9173.00.wip.patch, HDFS-9173.01.patch, 
> HDFS-9173.02.step125.patch, HDFS-9173.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9238) Update TestFileCreation#testLeaseExpireHardLimit() to avoid using DataNodeTestUtils#getFile()

2015-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958039#comment-14958039
 ] 

Hudson commented on HDFS-9238:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2480 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2480/])
HDFS-9238. Update TestFileCreation.testLeaseExpireHardLimit() to avoid (lei: 
rev ba3c19787849a9cb9f805e2b6ef0f8485aa68f06)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Update TestFileCreation#testLeaseExpireHardLimit() to avoid using 
> DataNodeTestUtils#getFile()
> -
>
> Key: HDFS-9238
> URL: https://issues.apache.org/jira/browse/HDFS-9238
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS, test
>Affects Versions: 2.7.1
>Reporter: Tony Wu
>Assignee: Tony Wu
>Priority: Trivial
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9238.001.patch
>
>
> TestFileCreation#testLeaseExpireHardLimit uses DataNodeTestUtils#getFile() to 
> open, read and verify blocks written on the DN. It’s better to use 
> getBlockInputStream() which does exactly the same thing but hides the detail 
> of getting the block file on disk.
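
For illustration, the dataset-level read looks roughly like this; a sketch with 
simplified test wiring, not the committed patch:

{code}
ExtendedBlock block = DFSTestUtil.getFirstBlock(fs, path);
DataNode dn = cluster.getDataNodes().get(0);
// stream the block through the dataset interface instead of opening the
// file returned by DataNodeTestUtils.getFile()
try (InputStream in = dn.getFSDataset().getBlockInputStream(block, 0)) {
  // read and verify the block contents here
}
{code}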



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9210) Fix some misuse of %n in VolumeScanner#printStats

2015-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958040#comment-14958040
 ] 

Hudson commented on HDFS-9210:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2480 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2480/])
HDFS-9210. Fix some misuse of %n in VolumeScanner#printStats. (xyao: rev 
3d5085595286c0231f66543d1509247ad4bb5739)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Fix some misuse of %n in VolumeScanner#printStats
> -
>
> Key: HDFS-9210
> URL: https://issues.apache.org/jira/browse/HDFS-9210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9210.00.patch, HDFS-9210.01.patch
>
>
> Found 2 extra "%n" in the VolumeScanner report, and some lines below are not 
> well formatted. This JIRA is opened to fix the format issue.
> {code}
> Block scanner information for volume DS-93fb2503-de00-4f98-a8bc-c2bc13b8f0f7 
> with base path /hadoop/hdfs/data%nBytes verified in last hour   : 
> 136882014
> Blocks scanned in current period  :   
>   5
> Blocks scanned since restart  :   
>   5
> Block pool scans since restart:   
>   0
> Block scan errors since restart   :   
>   0
> Hours until next block pool scan  :   
> 476.000
> Last block scanned: 
> BP-1792969149-192.168.70.101-1444150984999:blk_1073742088_1274
> More blocks to scan in period :   
>   false
> %n
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6589) TestDistributedFileSystem.testAllWithNoXmlDefaults failed intermittently

2015-10-14 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958009#comment-14958009
 ] 

Wei-Chiu Chuang commented on HDFS-6589:
---

Hi [~yzhangal], thanks for reporting this bug.
It looks like a race condition issue. I am assigning this JIRA to myself and am 
trying to reproduce the bug.

> TestDistributedFileSystem.testAllWithNoXmlDefaults failed intermittently
> 
>
> Key: HDFS-6589
> URL: https://issues.apache.org/jira/browse/HDFS-6589
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.5.0
>Reporter: Yongjun Zhang
>Assignee: Wei-Chiu Chuang
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/7207 is clean
> https://builds.apache.org/job/PreCommit-HDFS-Build/7208 has the following 
> failure. The code is essentially the same.
> Running the same test locally doesn't reproduce it; it appears to be a flaky test.
> {code}
> Stacktrace
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertFalse(Assert.java:64)
>   at org.junit.Assert.assertFalse(Assert.java:74)
>   at 
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSClient(TestDistributedFileSystem.java:263)
>   at 
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testAllWithNoXmlDefaults(TestDistributedFileSystem.java:651)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9231) fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot

2015-10-14 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958010#comment-14958010
 ] 

Xiao Chen commented on HDFS-9231:
-

The pre-patch Findbugs warning is unrelated. (see HDFS-9242)
The checkstyle issues are not introduced by this patch.
The test failures are unrelated, and passed locally.

> fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot
> ---
>
> Key: HDFS-9231
> URL: https://issues.apache.org/jira/browse/HDFS-9231
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9231.001.patch, HDFS-9231.002.patch, 
> HDFS-9231.003.patch
>
>
> For snapshot files, fsck shows corrupt blocks with the original file dir 
> instead of the snapshot dir.
> This can be confusing since even when the original file is deleted, a new 
> fsck run will still show that file as corrupted although what's actually 
> corrupted is the snapshot. 
> This is true even when given the -includeSnapshots option.
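
For example, even a run like the following (using the existing fsck options) reports 
the deleted original path rather than the {{.snapshot}} path, per the description 
above:

{code}
hdfs fsck / -includeSnapshots -list-corruptfileblocks
{code}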



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-6589) TestDistributedFileSystem.testAllWithNoXmlDefaults failed intermittently

2015-10-14 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HDFS-6589:
-

Assignee: Wei-Chiu Chuang

> TestDistributedFileSystem.testAllWithNoXmlDefaults failed intermittently
> 
>
> Key: HDFS-6589
> URL: https://issues.apache.org/jira/browse/HDFS-6589
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.5.0
>Reporter: Yongjun Zhang
>Assignee: Wei-Chiu Chuang
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/7207 is clean
> https://builds.apache.org/job/PreCommit-HDFS-Build/7208 has the following 
> failure. The code is essentially the same.
> Running the same test locally doesn't reproduce it; it appears to be a flaky test.
> {code}
> Stacktrace
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertFalse(Assert.java:64)
>   at org.junit.Assert.assertFalse(Assert.java:74)
>   at 
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSClient(TestDistributedFileSystem.java:263)
>   at 
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testAllWithNoXmlDefaults(TestDistributedFileSystem.java:651)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-4015) Safemode should count and report orphaned blocks

2015-10-14 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-4015:
---
Attachment: HDFS-4015.005.patch

Hi [~arpitagarwal], thanks for the review. Good catch on the (newer client + older 
namenode) case. The new patch fixes that and also updates how RollBack is detected, 
based on offline comments.

> Safemode should count and report orphaned blocks
> 
>
> Key: HDFS-4015
> URL: https://issues.apache.org/jira/browse/HDFS-4015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Anu Engineer
> Attachments: HDFS-4015.001.patch, HDFS-4015.002.patch, 
> HDFS-4015.003.patch, HDFS-4015.004.patch, HDFS-4015.005.patch
>
>
> The safemode status currently reports the number of unique reported blocks 
> compared to the total number of blocks referenced by the namespace. However, 
> it does not report the inverse: blocks which are reported by datanodes but 
> not referenced by the namespace.
> In the case that an admin accidentally starts up from an old image, this can 
> be confusing: safemode and fsck will show "corrupt files", which are the 
> files which actually have been deleted but got resurrected by restarting from 
> the old image. This will convince them that they can safely force leave 
> safemode and remove these files -- after all, they know that those files 
> should really have been deleted. However, they're not aware that leaving 
> safemode will also unrecoverably delete a bunch of other block files which 
> have been orphaned due to the namespace rollback.
> I'd like to consider reporting something like: "90 of expected 100 
> blocks have been reported. Additionally, 1 blocks have been reported 
> which do not correspond to any file in the namespace. Forcing exit of 
> safemode will unrecoverably remove those data blocks"
> Whether this statistic is also used for some kind of "inverse safe mode" is 
> the logical next step, but just reporting it as a warning seems easy enough 
> to accomplish and worth doing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9238) Update TestFileCreation#testLeaseExpireHardLimit() to avoid using DataNodeTestUtils#getFile()

2015-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958003#comment-14958003
 ] 

Hudson commented on HDFS-9238:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #531 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/531/])
HDFS-9238. Update TestFileCreation.testLeaseExpireHardLimit() to avoid (lei: 
rev ba3c19787849a9cb9f805e2b6ef0f8485aa68f06)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Update TestFileCreation#testLeaseExpireHardLimit() to avoid using 
> DataNodeTestUtils#getFile()
> -
>
> Key: HDFS-9238
> URL: https://issues.apache.org/jira/browse/HDFS-9238
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS, test
>Affects Versions: 2.7.1
>Reporter: Tony Wu
>Assignee: Tony Wu
>Priority: Trivial
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9238.001.patch
>
>
> TestFileCreation#testLeaseExpireHardLimit uses DataNodeTestUtils#getFile() to 
> open, read and verify blocks written on the DN. It’s better to use 
> getBlockInputStream() which does exactly the same thing but hides the detail 
> of getting the block file on disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9223) Code cleanup for DatanodeDescriptor and HeartbeatManager

2015-10-14 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-9223:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Thanks again for the review, Nicholas! I've updated the patch based on your 
comments and committed it to trunk and branch-2.8.

> Code cleanup for DatanodeDescriptor and HeartbeatManager
> 
>
> Key: HDFS-9223
> URL: https://issues.apache.org/jira/browse/HDFS-9223
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9223.000.patch, HDFS-9223.001.patch, 
> HDFS-9223.002.patch, HDFS-9223.003.patch
>
>
> Some code cleanup for {{DatanodeDescriptor}} and {{HeartbeatManager}}. The 
> changes include:
> # Change {{DataDescriptor#isAlive}} and {{DatanodeDescriptor#needKeyUpdate}} 
> from public to private
> # Use EnumMap for {{HeartbeatManager#storageTypeStatesMap}}
> # Move the {{isInStartupSafeMode}} out of the namesystem lock in 
> {{heartbeatCheck}}
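
For item 2 in the quoted list, an {{EnumMap}} keyed by the storage type backs the 
map with a plain array indexed by the enum ordinal; a minimal sketch (value type and 
wiring simplified):

{code}
// EnumMap avoids hashing: lookups index an array by StorageType.ordinal()
Map<StorageType, StorageTypeStats> storageTypeStatesMap =
    new EnumMap<>(StorageType.class);
{code}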



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5175) Provide clients a way to set IP header bits on connections

2015-10-14 Thread Maysam Yabandeh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958002#comment-14958002
 ] 

Maysam Yabandeh commented on HDFS-5175:
---

Any update on this patch? Our network infrastructure team has shown interest in 
labeling hadoop data traffic, and this jira could be used for that purpose. I guess 
other companies have similar applications that could benefit from this patch.

> Provide clients a way to set IP header bits on connections
> --
>
> Key: HDFS-5175
> URL: https://issues.apache.org/jira/browse/HDFS-5175
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.5-alpha
>Reporter: Lohit Vijayarenu
>
> It would be very helpful if we had ability for clients to set IP headers when 
> they make socket connections for data transfers. We were looking into setting 
> up QoS using DSCP bit and saw that there is no easy way to let clients pass 
> down a specific value when clients make connection to DataNode.
> As a quick fix we did something similar to io.file.buffer.size where client 
> could pass down DSCP integer value and when DFSClient opens a stream, it 
> could set the value on socket using setTrafficClass
> Opening this JIRA to get more inputs from others who have had experience and 
> might have already thought about this. 
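
For context, the quick fix described amounts to something like the following on the 
client socket; {{dfs.client.dscp}} is a hypothetical key used for illustration, not 
an existing setting:

{code}
int dscp = conf.getInt("dfs.client.dscp", 0);  // hypothetical config key
Socket sock = new Socket();
// DSCP occupies the upper 6 bits of the IP TOS byte
sock.setTrafficClass(dscp << 2);               // standard java.net.Socket API
{code}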



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9210) Fix some misuse of %n in VolumeScanner#printStats

2015-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14958004#comment-14958004
 ] 

Hudson commented on HDFS-9210:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #531 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/531/])
HDFS-9210. Fix some misuse of %n in VolumeScanner#printStats. (xyao: rev 
3d5085595286c0231f66543d1509247ad4bb5739)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java
Revert "HDFS-9210. Fix some misuse of %n in VolumeScanner#printStats. (xyao: 
rev a8070259f8384021bd6196e7343f1cc23de89b1c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Fix some misuse of %n in VolumeScanner#printStats
> -
>
> Key: HDFS-9210
> URL: https://issues.apache.org/jira/browse/HDFS-9210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9210.00.patch, HDFS-9210.01.patch
>
>
> Found 2 extra "%n" in the VolumeScanner report, and some lines below are not 
> well formatted. This JIRA is opened to fix the format issue.
> {code}
> Block scanner information for volume DS-93fb2503-de00-4f98-a8bc-c2bc13b8f0f7 
> with base path /hadoop/hdfs/data%nBytes verified in last hour   : 
> 136882014
> Blocks scanned in current period  :   
>   5
> Blocks scanned since restart  :   
>   5
> Block pool scans since restart:   
>   0
> Block scan errors since restart   :   
>   0
> Hours until next block pool scan  :   
> 476.000
> Last block scanned: 
> BP-1792969149-192.168.70.101-1444150984999:blk_1073742088_1274
> More blocks to scan in period :   
>   false
> %n
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9242) Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache

2015-10-14 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14957997#comment-14957997
 ] 

Mingliang Liu commented on HDFS-9242:
-

I think the warning is a false positive. We can write a filter rule in the 
{{dev-support/findbugsExcludeFile.xml}} file. 
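
A rule along these lines would do it; a sketch of a possible entry, not the 
committed file:

{code:xml}
<Match>
  <Class name="org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider" />
  <Bug pattern="DC_DOUBLECHECK" />
</Match>
{code}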

> Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache 
> ---
>
> Key: HDFS-9242
> URL: https://issues.apache.org/jira/browse/HDFS-9242
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Brahma Reddy Battula
>
> This was introduced by HDFS-8855 and the pre-patch warning can be found at 
> https://builds.apache.org/job/PreCommit-HDFS-Build/12975/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html#DC_DOUBLECHECK
> {code}
> Code  Warning
> DC  Possible doublecheck on 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache
>  in new 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
>  Configuration)
> Bug type DC_DOUBLECHECK (click for details) 
> In class 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider
> In method new 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
>  Configuration)
> On field 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache
> At DataNodeUGIProvider.java:[lines 49-51]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9238) Update TestFileCreation#testLeaseExpireHardLimit() to avoid using DataNodeTestUtils#getFile()

2015-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14957986#comment-14957986
 ] 

Hudson commented on HDFS-9238:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2434 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2434/])
HDFS-9238. Update TestFileCreation.testLeaseExpireHardLimit() to avoid (lei: 
rev ba3c19787849a9cb9f805e2b6ef0f8485aa68f06)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Update TestFileCreation#testLeaseExpireHardLimit() to avoid using 
> DataNodeTestUtils#getFile()
> -
>
> Key: HDFS-9238
> URL: https://issues.apache.org/jira/browse/HDFS-9238
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS, test
>Affects Versions: 2.7.1
>Reporter: Tony Wu
>Assignee: Tony Wu
>Priority: Trivial
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9238.001.patch
>
>
> TestFileCreation#testLeaseExpireHardLimit uses DataNodeTestUtils#getFile() to 
> open, read and verify blocks written on the DN. It’s better to use 
> getBlockInputStream() which does exactly the same thing but hides the detail 
> of getting the block file on disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9210) Fix some misuse of %n in VolumeScanner#printStats

2015-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14957979#comment-14957979
 ] 

Hudson commented on HDFS-9210:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #1267 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1267/])
HDFS-9210. Fix some misuse of %n in VolumeScanner#printStats. (xyao: rev 
3d5085595286c0231f66543d1509247ad4bb5739)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Fix some misuse of %n in VolumeScanner#printStats
> -
>
> Key: HDFS-9210
> URL: https://issues.apache.org/jira/browse/HDFS-9210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9210.00.patch, HDFS-9210.01.patch
>
>
> Found 2 extra "%n" in the VolumeScanner report, and some lines below are not 
> well formatted. This JIRA is opened to fix the format issue.
> {code}
> Block scanner information for volume DS-93fb2503-de00-4f98-a8bc-c2bc13b8f0f7 
> with base path /hadoop/hdfs/data%nBytes verified in last hour   : 
> 136882014
> Blocks scanned in current period  :   
>   5
> Blocks scanned since restart  :   
>   5
> Block pool scans since restart:   
>   0
> Block scan errors since restart   :   
>   0
> Hours until next block pool scan  :   
> 476.000
> Last block scanned: 
> BP-1792969149-192.168.70.101-1444150984999:blk_1073742088_1274
> More blocks to scan in period :   
>   false
> %n
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9238) Update TestFileCreation#testLeaseExpireHardLimit() to avoid using DataNodeTestUtils#getFile()

2015-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14957978#comment-14957978
 ] 

Hudson commented on HDFS-9238:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #1267 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1267/])
HDFS-9238. Update TestFileCreation.testLeaseExpireHardLimit() to avoid (lei: 
rev ba3c19787849a9cb9f805e2b6ef0f8485aa68f06)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Update TestFileCreation#testLeaseExpireHardLimit() to avoid using 
> DataNodeTestUtils#getFile()
> -
>
> Key: HDFS-9238
> URL: https://issues.apache.org/jira/browse/HDFS-9238
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS, test
>Affects Versions: 2.7.1
>Reporter: Tony Wu
>Assignee: Tony Wu
>Priority: Trivial
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9238.001.patch
>
>
> TestFileCreation#testLeaseExpireHardLimit uses DataNodeTestUtils#getFile() to 
> open, read and verify blocks written on the DN. It’s better to use 
> getBlockInputStream() which does exactly the same thing but hides the detail 
> of getting the block file on disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9220) Reading small file (< 512 bytes) that is open for append fails due to incorrect checksum

2015-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14957975#comment-14957975
 ] 

Hadoop QA commented on HDFS-9220:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  22m  9s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   8m 57s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 26s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 25s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 32s | The applied patch generated  1 
new checkstyle issues (total was 60, now 60). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 30s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 31s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 12s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  53m 52s | Tests failed in hadoop-hdfs. |
| | | 106m 11s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
|   | hadoop.hdfs.web.TestWebHDFSXAttr |
|   | hadoop.hdfs.web.TestWebHDFS |
|   | hadoop.hdfs.web.resources.TestParam |
|   | hadoop.hdfs.TestMiniDFSCluster |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.fs.TestGlobPaths |
|   | hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer |
| Timed out tests | org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage |
|   | org.apache.hadoop.hdfs.web.TestWebHDFSForHA |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766603/HDFS-9220.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 3d50855 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12989/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12989/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12989/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12989/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12989/console |


This message was automatically generated.

> Reading small file (< 512 bytes) that is open for append fails due to 
> incorrect checksum
> 
>
> Key: HDFS-9220
> URL: https://issues.apache.org/jira/browse/HDFS-9220
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Bogdan Raducanu
>Assignee: Jing Zhao
>Priority: Blocker
> Attachments: HDFS-9220.000.patch, HDFS-9220.001.patch, 
> HDFS-9220.002.patch, test2.java
>
>
> Exception:
> 2015-10-09 14:59:40 WARN  DFSClient:1150 - fetchBlockByteRange(). Got a 
> checksum exception for /tmp/file0.05355529331575182 at 
> BP-353681639-10.10.10.10-1437493596883:blk_1075692769_9244882:0 from 
> DatanodeInfoWithStorage[10.10.10.10]:5001
> All 3 replicas cause this exception and the read fails entirely with:
> BlockMissingException: Could not obtain block: 
> BP-353681639-10.10.10.10-1437493596883:blk_1075692769_9244882 
> file=/tmp/file0.05355529331575182
> Code to reproduce is attached.
> Does not happen in 2.7.0.
> Data is read correctly if checksum verification is disabled.
> More generally, the failure happens when reading from the last block of a 
> file and the last block has <= 512 bytes.
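
The attached test2.java is the authoritative reproduction; a minimal sketch of the 
scenario described above, with illustrative path and sizes ({{fs}} is a 
DistributedFileSystem handle):

{code}
Path p = new Path("/tmp/small");
try (FSDataOutputStream out = fs.create(p)) {
  out.write(new byte[100]);                  // last (only) block has < 512 bytes
}
FSDataOutputStream appender = fs.append(p);  // file is now open for append
byte[] buf = new byte[100];
try (FSDataInputStream in = fs.open(p)) {
  in.readFully(0, buf);                      // fails with a checksum error on 2.7.1
}
appender.close();
{code}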



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9231) fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot

2015-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14957961#comment-14957961
 ] 

Hadoop QA commented on HDFS-9231:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  18m 11s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   8m  6s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 27s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 26s | The applied patch generated  3 
new checkstyle issues (total was 370, now 371). |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 29s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 30s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 25s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  50m 13s | Tests failed in hadoop-hdfs. |
| | |  96m 48s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | 
hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
|   | hadoop.hdfs.TestDistributedFileSystem |
|   | hadoop.hdfs.server.namenode.TestCacheDirectives |
|   | hadoop.fs.TestGlobPaths |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766629/HDFS-9231.003.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 3d50855 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12990/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12990/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12990/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12990/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12990/console |


This message was automatically generated.

> fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot
> ---
>
> Key: HDFS-9231
> URL: https://issues.apache.org/jira/browse/HDFS-9231
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9231.001.patch, HDFS-9231.002.patch, 
> HDFS-9231.003.patch
>
>
> For snapshot files, fsck shows corrupt blocks with the original file dir 
> instead of the snapshot dir.
> This can be confusing since even when the original file is deleted, a new 
> fsck run will still show that file as corrupted although what's actually 
> corrupted is the snapshot. 
> This is true even when given the -includeSnapshots option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9223) Code cleanup for DatanodeDescriptor and HeartbeatManager

2015-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14957955#comment-14957955
 ] 

Hadoop QA commented on HDFS-9223:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  19m 30s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 4 new or modified test files. |
| {color:green}+1{color} | javac |   8m  4s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 29s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 19s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 24s | The applied patch generated  4 
new checkstyle issues (total was 328, now 318). |
| {color:red}-1{color} | whitespace |   0m  3s | The patch has 2  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 29s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 37s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 31s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 14s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  56m 24s | Tests failed in hadoop-hdfs. |
| | | 104m  9s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestRecoverStripedFile |
|   | hadoop.fs.TestGlobPaths |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766595/HDFS-9223.003.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / ba3c197 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12988/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12988/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12988/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12988/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12988/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12988/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12988/console |


This message was automatically generated.

> Code cleanup for DatanodeDescriptor and HeartbeatManager
> 
>
> Key: HDFS-9223
> URL: https://issues.apache.org/jira/browse/HDFS-9223
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Attachments: HDFS-9223.000.patch, HDFS-9223.001.patch, 
> HDFS-9223.002.patch, HDFS-9223.003.patch
>
>
> Some code cleanup for {{DatanodeDescriptor}} and {{HeartbeatManager}}. The 
> changes include:
> # Change {{DataDescriptor#isAlive}} and {{DatanodeDescriptor#needKeyUpdate}} 
> from public to private
> # Use EnumMap for {{HeartbeatManager#storageTypeStatesMap}}
> # Move the {{isInStartupSafeMode}} out of the namesystem lock in 
> {{heartbeatCheck}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9129) Move the safemode block count into BlockManager

2015-10-14 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9129:

Attachment: HDFS-9129.003.patch

In the v3 patch, the {{Namesystem}} no longer transitions from the *STARTUP* to the 
*OFF* state itself. Instead, it asks the {{BlockManager}} every time its 
{{isInStartupSafeMode}} is called. This way, we are able to simplify the 
{{Namesystem}}'s state machine by delegating all *STARTUP* safe mode logic, 
including leaving safe mode, to {{BlockManager}} and {{BlockManagerSafeMode}}.

The v3 patch is for Jenkins. Please hold off on reviewing it.
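
In other words, the {{Namesystem}} side reduces to a pass-through; a sketch with 
approximate method names, not the exact patch:

{code}
@Override
public boolean isInStartupSafeMode() {
  // Namesystem no longer tracks the STARTUP -> OFF transition itself;
  // BlockManager (backed by BlockManagerSafeMode) answers on every call.
  return blockManager.isInStartupSafeMode();
}
{code}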

> Move the safemode block count into BlockManager
> ---
>
> Key: HDFS-9129
> URL: https://issues.apache.org/jira/browse/HDFS-9129
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Mingliang Liu
> Attachments: HDFS-9129.000.patch, HDFS-9129.001.patch, 
> HDFS-9129.002.patch, HDFS-9129.003.patch
>
>
> The {{SafeMode}} needs to track whether there are enough blocks so that the 
> NN can get out of the safemode. These fields can moved to the 
> {{BlockManager}} class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6101) TestReplaceDatanodeOnFailure fails occasionally

2015-10-14 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-6101:
--
Attachment: HDFS-6101.003.patch

Thanks [~arpitagarwal] for the review. Here's the code with updated code style. 
Kind of surprised the tool did not pick up these whitespace issues.

> TestReplaceDatanodeOnFailure fails occasionally
> ---
>
> Key: HDFS-6101
> URL: https://issues.apache.org/jira/browse/HDFS-6101
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-6101.001.patch, HDFS-6101.002.patch, 
> HDFS-6101.003.patch, TestReplaceDatanodeOnFailure.log
>
>
> Exception details in a comment below.
> The failure repros on both OS X and Linux if I run the test ~10 times in a 
> loop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9238) Update TestFileCreation#testLeaseExpireHardLimit() to avoid using DataNodeTestUtils#getFile()

2015-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14957920#comment-14957920
 ] 

Hudson commented on HDFS-9238:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #544 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/544/])
HDFS-9238. Update TestFileCreation.testLeaseExpireHardLimit() to avoid (lei: 
rev ba3c19787849a9cb9f805e2b6ef0f8485aa68f06)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Update TestFileCreation#testLeaseExpireHardLimit() to avoid using 
> DataNodeTestUtils#getFile()
> -
>
> Key: HDFS-9238
> URL: https://issues.apache.org/jira/browse/HDFS-9238
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS, test
>Affects Versions: 2.7.1
>Reporter: Tony Wu
>Assignee: Tony Wu
>Priority: Trivial
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9238.001.patch
>
>
> TestFileCreation#testLeaseExpireHardLimit uses DataNodeTestUtils#getFile() to 
> open, read and verify blocks written on the DN. It’s better to use 
> getBlockInputStream() which does exactly the same thing but hides the detail 
> of getting the block file on disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9210) Fix some misuse of %n in VolumeScanner#printStats

2015-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14957921#comment-14957921
 ] 

Hudson commented on HDFS-9210:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #544 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/544/])
HDFS-9210. Fix some misuse of %n in VolumeScanner#printStats. (xyao: rev 
3d5085595286c0231f66543d1509247ad4bb5739)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java
Revert "HDFS-9210. Fix some misuse of %n in VolumeScanner#printStats. (xyao: 
rev a8070259f8384021bd6196e7343f1cc23de89b1c)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java


> Fix some misuse of %n in VolumeScanner#printStats
> -
>
> Key: HDFS-9210
> URL: https://issues.apache.org/jira/browse/HDFS-9210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9210.00.patch, HDFS-9210.01.patch
>
>
> Found 2 extra "%n" in the VolumeScanner report, and some lines below are not 
> well formatted. This JIRA is opened to fix the format issue.
> {code}
> Block scanner information for volume DS-93fb2503-de00-4f98-a8bc-c2bc13b8f0f7 
> with base path /hadoop/hdfs/data%nBytes verified in last hour   : 
> 136882014
> Blocks scanned in current period  :   
>   5
> Blocks scanned since restart  :   
>   5
> Block pool scans since restart:   
>   0
> Block scan errors since restart   :   
>   0
> Hours until next block pool scan  :   
> 476.000
> Last block scanned: 
> BP-1792969149-192.168.70.101-1444150984999:blk_1073742088_1274
> More blocks to scan in period :   
>   false
> %n
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9188) Make block corruption related tests FsDataset-agnostic.

2015-10-14 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-9188:

Attachment: HDFS-9188.006.patch

Thanks a lot for the reviews, [~cmccabe]. 

bq. You need to set UTF8 as the encoding.
Fixed.

The test failures are not relevant. 


> Make block corruption related tests FsDataset-agnostic. 
> 
>
> Key: HDFS-9188
> URL: https://issues.apache.org/jira/browse/HDFS-9188
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS, test
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-9188.000.patch, HDFS-9188.001.patch, 
> HDFS-9188.002.patch, HDFS-9188.003.patch, HDFS-9188.004.patch, 
> HDFS-9188.005.patch, HDFS-9188.006.patch
>
>
> Currently, HDFS does block corruption tests by directly accessing the files 
> stored on the storage directories, which assumes {{FsDatasetImpl}} is the 
> dataset implementation. However, with works like OZone (HDFS-7240) and 
> HDFS-8679, there will be different FsDataset implementations. 
> So we need a general way to run whitebox tests like corrupting blocks and crc 
> files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9241) HDFS clients can't construct HdfsConfiguration instances

2015-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14957912#comment-14957912
 ] 

Hadoop QA commented on HDFS-9241:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  20m 13s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |  10m 15s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  13m 41s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 29s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   2m  0s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 45s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | client tests |   0m 17s | Tests passed in 
hadoop-client. |
| | |  47m 43s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766630/HDFS-9241.000.patch |
| Optional Tests | javadoc javac unit |
| git revision | trunk / 3d50855 |
| hadoop-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12991/artifact/patchprocess/testrun_hadoop-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12991/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12991/console |


This message was automatically generated.

> HDFS clients can't construct HdfsConfiguration instances
> 
>
> Key: HDFS-9241
> URL: https://issues.apache.org/jira/browse/HDFS-9241
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
> Attachments: HDFS-9241.000.patch
>
>
> the changes for the hdfs client classpath make instantiating 
> {{HdfsConfiguration}} from the client impossible; it only lives server side. 
> This breaks any app which creates one.
> I know people will look at the {{@Private}} tag and say "don't do that then", 
> but it's worth considering precisely why I, at least, do this: it's the only 
> way to guarantee that the hdfs-default and hdfs-site resources get on the 
> classpath, including all the security settings. It's precisely the use case 
> which {{HdfsConfigurationLoader.init();}} offers internally to the hdfs code.
> What am I meant to do now? 
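
For comparison, the effect being relied on can be approximated with a plain 
{{Configuration}}; a sketch of a client-side workaround, not an endorsed replacement:

{code}
Configuration conf = new Configuration();
// force the hdfs resources onto the resource list, which is what
// new HdfsConfiguration() used to guarantee
conf.addResource("hdfs-default.xml");
conf.addResource("hdfs-site.xml");
FileSystem fs = FileSystem.get(conf);
{code}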



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9210) Fix some misuse of %n in VolumeScanner#printStats

2015-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14957872#comment-14957872
 ] 

Hudson commented on HDFS-9210:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8637 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8637/])
HDFS-9210. Fix some misuse of %n in VolumeScanner#printStats. (xyao: rev 
3d5085595286c0231f66543d1509247ad4bb5739)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
Revert "HDFS-9210. Fix some misuse of %n in VolumeScanner#printStats. (xyao: 
rev a8070259f8384021bd6196e7343f1cc23de89b1c)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java


> Fix some misuse of %n in VolumeScanner#printStats
> -
>
> Key: HDFS-9210
> URL: https://issues.apache.org/jira/browse/HDFS-9210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9210.00.patch, HDFS-9210.01.patch
>
>
> Found 2 extra "%n" in the VolumeScanner report, and some lines below are not 
> well formatted. This JIRA is opened to fix the format issue.
> {code}
> Block scanner information for volume DS-93fb2503-de00-4f98-a8bc-c2bc13b8f0f7 
> with base path /hadoop/hdfs/data%nBytes verified in last hour   : 
> 136882014
> Blocks scanned in current period  :   
>   5
> Blocks scanned since restart  :   
>   5
> Block pool scans since restart:   
>   0
> Block scan errors since restart   :   
>   0
> Hours until next block pool scan  :   
> 476.000
> Last block scanned: 
> BP-1792969149-192.168.70.101-1444150984999:blk_1073742088_1274
> More blocks to scan in period :   
>   false
> %n
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9210) Fix some misuse of %n in VolumeScanner#printStats

2015-10-14 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-9210:
-
Status: Patch Available  (was: Reopened)

> Fix some misuse of %n in VolumeScanner#printStats
> -
>
> Key: HDFS-9210
> URL: https://issues.apache.org/jira/browse/HDFS-9210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9210.00.patch, HDFS-9210.01.patch
>
>
> Found 2 extra "%n" in the VolumeScanner report, and some lines below are not 
> well formatted. This JIRA is opened to fix the format issue.
> {code}
> Block scanner information for volume DS-93fb2503-de00-4f98-a8bc-c2bc13b8f0f7 
> with base path /hadoop/hdfs/data%nBytes verified in last hour   : 
> 136882014
> Blocks scanned in current period  :   
>   5
> Blocks scanned since restart  :   
>   5
> Block pool scans since restart:   
>   0
> Block scan errors since restart   :   
>   0
> Hours until next block pool scan  :   
> 476.000
> Last block scanned: 
> BP-1792969149-192.168.70.101-1444150984999:blk_1073742088_1274
> More blocks to scan in period :   
>   false
> %n
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9210) Fix some misuse of %n in VolumeScanner#printStats

2015-10-14 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-9210:
-
Attachment: HDFS-9210.01.patch

> Fix some misuse of %n in VolumeScanner#printStats
> -
>
> Key: HDFS-9210
> URL: https://issues.apache.org/jira/browse/HDFS-9210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9210.00.patch, HDFS-9210.01.patch
>
>
> Found 2 extra "%n" in the VolumeScanner report, and some lines below are not 
> well formatted. This JIRA is opened to fix the format issue.
> {code}
> Block scanner information for volume DS-93fb2503-de00-4f98-a8bc-c2bc13b8f0f7 
> with base path /hadoop/hdfs/data%nBytes verified in last hour   : 
> 136882014
> Blocks scanned in current period  :   
>   5
> Blocks scanned since restart  :   
>   5
> Block pool scans since restart:   
>   0
> Block scan errors since restart   :   
>   0
> Hours until next block pool scan  :   
> 476.000
> Last block scanned: 
> BP-1792969149-192.168.70.101-1444150984999:blk_1073742088_1274
> More blocks to scan in period :   
>   false
> %n
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HDFS-9210) Fix some misuse of %n in VolumeScanner#printStats

2015-10-14 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reopened HDFS-9210:
--

> Fix some misuse of %n in VolumeScanner#printStats
> -
>
> Key: HDFS-9210
> URL: https://issues.apache.org/jira/browse/HDFS-9210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9210.00.patch
>
>
> Found two extra "%n" sequences in the VolumeScanner report, and some lines 
> are not well formatted, as shown below. This JIRA is opened to fix the format 
> issue.
> {code}
> Block scanner information for volume DS-93fb2503-de00-4f98-a8bc-c2bc13b8f0f7 
> with base path /hadoop/hdfs/data%nBytes verified in last hour   : 
> 136882014
> Blocks scanned in current period  :   
>   5
> Blocks scanned since restart  :   
>   5
> Block pool scans since restart:   
>   0
> Block scan errors since restart   :   
>   0
> Hours until next block pool scan  :   
> 476.000
> Last block scanned: 
> BP-1792969149-192.168.70.101-1444150984999:blk_1073742088_1274
> More blocks to scan in period :   
>   false
> %n
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9210) Fix some misuse of %n in VolumeScanner#printStats

2015-10-14 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14957859#comment-14957859
 ] 

Xiaoyu Yao commented on HDFS-9210:
--

Thanks [~templedf], I will revert the commit and fix it.


> Fix some misuse of %n in VolumeScanner#printStats
> -
>
> Key: HDFS-9210
> URL: https://issues.apache.org/jira/browse/HDFS-9210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9210.00.patch
>
>
> Found two extra "%n" sequences in the VolumeScanner report, and some lines 
> are not well formatted, as shown below. This JIRA is opened to fix the format 
> issue.
> {code}
> Block scanner information for volume DS-93fb2503-de00-4f98-a8bc-c2bc13b8f0f7 
> with base path /hadoop/hdfs/data%nBytes verified in last hour   : 
> 136882014
> Blocks scanned in current period  :   
>   5
> Blocks scanned since restart  :   
>   5
> Block pool scans since restart:   
>   0
> Block scan errors since restart   :   
>   0
> Hours until next block pool scan  :   
> 476.000
> Last block scanned: 
> BP-1792969149-192.168.70.101-1444150984999:blk_1073742088_1274
> More blocks to scan in period :   
>   false
> %n
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9210) Fix some misuse of %n in VolumeScanner#printStats

2015-10-14 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14957805#comment-14957805
 ] 

Daniel Templeton commented on HDFS-9210:


I'm confused.  Looking at the patch, this:

{code}
p.append(String.format("Block scanner information for volume %s with base" +
" path %s%n" + volume.getStorageID(), volume.getBasePath()));
{code}

doesn't work.  It should be:

{code}
p.append(String.format("Block scanner information for volume %s with base" +
" path %s%n", volume.getStorageID(), volume.getBasePath()));
{code}

How is the patch not causing a java.util.MissingFormatArgumentException to be 
thrown?
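
A standalone sketch (hypothetical values, not the actual VolumeScanner code) 
suggests the buggy form should indeed throw once that code path runs: the 
storage ID is concatenated into the format string itself, so the second 
{{%s}} has no matching argument.

{code}
public class FormatBugDemo {
  public static void main(String[] args) {
    String storageId = "DS-93fb2503-de00-4f98-a8bc-c2bc13b8f0f7";
    String basePath = "/hadoop/hdfs/data";
    try {
      // Buggy form: two %s specifiers but only one argument (basePath).
      String.format("Block scanner information for volume %s with base" +
          " path %s%n" + storageId, basePath);
    } catch (java.util.MissingFormatArgumentException e) {
      System.out.println("buggy form throws: " + e);
    }
    // Corrected form: both %s specifiers receive an argument.
    System.out.print(String.format("Block scanner information for volume %s" +
        " with base path %s%n", storageId, basePath));
  }
}
{code}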

> Fix some misuse of %n in VolumeScanner#printStats
> -
>
> Key: HDFS-9210
> URL: https://issues.apache.org/jira/browse/HDFS-9210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9210.00.patch
>
>
> Found two extra "%n" sequences in the VolumeScanner report, and some lines 
> are not well formatted, as shown below. This JIRA is opened to fix the format 
> issue.
> {code}
> Block scanner information for volume DS-93fb2503-de00-4f98-a8bc-c2bc13b8f0f7 
> with base path /hadoop/hdfs/data%nBytes verified in last hour   : 
> 136882014
> Blocks scanned in current period  :   
>   5
> Blocks scanned since restart  :   
>   5
> Block pool scans since restart:   
>   0
> Block scan errors since restart   :   
>   0
> Hours until next block pool scan  :   
> 476.000
> Last block scanned: 
> BP-1792969149-192.168.70.101-1444150984999:blk_1073742088_1274
> More blocks to scan in period :   
>   false
> %n
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9244) Support nested encryption zones

2015-10-14 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-9244:


 Summary: Support nested encryption zones
 Key: HDFS-9244
 URL: https://issues.apache.org/jira/browse/HDFS-9244
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Xiaoyu Yao


This JIRA is opened to track adding support for nested encryption zones, based 
on [~andrew.wang]'s [comment 
|https://issues.apache.org/jira/browse/HDFS-8747?focusedCommentId=14654141&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14654141]
 for certain use cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9188) Make block corruption related tests FsDataset-agnostic.

2015-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14957798#comment-14957798
 ] 

Hadoop QA commented on HDFS-9188:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |   7m 53s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 15 new or modified test files. |
| {color:green}+1{color} | javac |   8m  5s | There were no new javac warning 
messages. |
| {color:green}+1{color} | release audit |   0m 21s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 30s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 26s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 26s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   1m  4s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  49m 17s | Tests failed in hadoop-hdfs. |
| | |  72m 39s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.fs.TestGlobPaths |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766622/HDFS-9188.005.patch |
| Optional Tests | javac unit findbugs checkstyle |
| git revision | trunk / dfa7848 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12987/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12987/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12987/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12987/console |


This message was automatically generated.

> Make block corruption related tests FsDataset-agnostic. 
> 
>
> Key: HDFS-9188
> URL: https://issues.apache.org/jira/browse/HDFS-9188
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS, test
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-9188.000.patch, HDFS-9188.001.patch, 
> HDFS-9188.002.patch, HDFS-9188.003.patch, HDFS-9188.004.patch, 
> HDFS-9188.005.patch
>
>
> Currently, HDFS performs block corruption tests by directly accessing the 
> files stored in the storage directories, which assumes {{FsDatasetImpl}} is 
> the dataset implementation. However, with work like OZone (HDFS-7240) and 
> HDFS-8679, there will be different FsDataset implementations. 
> So we need a general way to run whitebox tests like corrupting blocks and CRC 
> files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8831) Trash Support for files in HDFS encryption zone

2015-10-14 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8831:
-
Summary: Trash Support for files in HDFS encryption zone  (was: Support 
"Soft Delete" for files under HDFS encryption zone)

> Trash Support for files in HDFS encryption zone
> ---
>
> Key: HDFS-8831
> URL: https://issues.apache.org/jira/browse/HDFS-8831
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: encryption
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>
> Currently, "Soft Delete" is only supported if the whole encryption zone is 
> deleted. If you delete files whinin the zone with trash feature enabled, you 
> will get error similar to the following 
> {code}
> rm: Failed to move to trash: hdfs://HW11217.local:9000/z1_1/startnn.sh: 
> /z1_1/startnn.sh can't be moved from an encryption zone.
> {code}
> With HDFS-8830, we can support "Soft Delete" by adding the .Trash folder of 
> the file being deleted to the same encryption zone. 
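
For illustration, a rough sketch of the failing operation (not the attached 
patches; assumes {{/z1_1}} is an encryption zone on the default filesystem):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.Trash;

public class TrashInEzDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // Without .Trash support inside the zone, this rename into
    // /user/<user>/.Trash crosses the zone boundary and is rejected.
    Trash.moveToAppropriateTrash(fs, new Path("/z1_1/startnn.sh"), conf);
  }
}
{code}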



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9210) Fix some misuse of %n in VolumeScanner#printStats

2015-10-14 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-9210:
-
Labels:   (was: reviewed)

> Fix some misuse of %n in VolumeScanner#printStats
> -
>
> Key: HDFS-9210
> URL: https://issues.apache.org/jira/browse/HDFS-9210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9210.00.patch
>
>
> Found two extra "%n" sequences in the VolumeScanner report, and some lines 
> are not well formatted, as shown below. This JIRA is opened to fix the format 
> issue.
> {code}
> Block scanner information for volume DS-93fb2503-de00-4f98-a8bc-c2bc13b8f0f7 
> with base path /hadoop/hdfs/data%nBytes verified in last hour   : 
> 136882014
> Blocks scanned in current period  :   
>   5
> Blocks scanned since restart  :   
>   5
> Block pool scans since restart:   
>   0
> Block scan errors since restart   :   
>   0
> Hours until next block pool scan  :   
> 476.000
> Last block scanned: 
> BP-1792969149-192.168.70.101-1444150984999:blk_1073742088_1274
> More blocks to scan in period :   
>   false
> %n
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9210) Fix some misuse of %n in VolumeScanner#printStats

2015-10-14 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-9210:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

The test failures are unrelated. Thanks [~andrew.wang] for the review! 
Committed to trunk and branch-2 based on his +1. 


> Fix some misuse of %n in VolumeScanner#printStats
> -
>
> Key: HDFS-9210
> URL: https://issues.apache.org/jira/browse/HDFS-9210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
>  Labels: reviewed
> Fix For: 2.8.0
>
> Attachments: HDFS-9210.00.patch
>
>
> Found two extra "%n" sequences in the VolumeScanner report, and some lines 
> are not well formatted, as shown below. This JIRA is opened to fix the format 
> issue.
> {code}
> Block scanner information for volume DS-93fb2503-de00-4f98-a8bc-c2bc13b8f0f7 
> with base path /hadoop/hdfs/data%nBytes verified in last hour   : 
> 136882014
> Blocks scanned in current period  :   
>   5
> Blocks scanned since restart  :   
>   5
> Block pool scans since restart:   
>   0
> Block scan errors since restart   :   
>   0
> Hours until next block pool scan  :   
> 476.000
> Last block scanned: 
> BP-1792969149-192.168.70.101-1444150984999:blk_1073742088_1274
> More blocks to scan in period :   
>   false
> %n
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9210) Fix some misuse of %n in VolumeScanner#printStats

2015-10-14 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-9210:
-
Labels: reviewed  (was: )

> Fix some misuse of %n in VolumeScanner#printStats
> -
>
> Key: HDFS-9210
> URL: https://issues.apache.org/jira/browse/HDFS-9210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
>  Labels: reviewed
> Attachments: HDFS-9210.00.patch
>
>
> Found two extra "%n" sequences in the VolumeScanner report, and some lines 
> are not well formatted, as shown below. This JIRA is opened to fix the format 
> issue.
> {code}
> Block scanner information for volume DS-93fb2503-de00-4f98-a8bc-c2bc13b8f0f7 
> with base path /hadoop/hdfs/data%nBytes verified in last hour   : 
> 136882014
> Blocks scanned in current period  :   
>   5
> Blocks scanned since restart  :   
>   5
> Block pool scans since restart:   
>   0
> Block scan errors since restart   :   
>   0
> Hours until next block pool scan  :   
> 476.000
> Last block scanned: 
> BP-1792969149-192.168.70.101-1444150984999:blk_1073742088_1274
> More blocks to scan in period :   
>   false
> %n
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6101) TestReplaceDatanodeOnFailure fails occasionally

2015-10-14 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14957759#comment-14957759
 ] 

Arpit Agarwal commented on HDFS-6101:
-

Thanks [~jojochuang]. I can't repro the failure anymore but your fix looks 
valid.

Can you fix the coding style issues? E.g., add a space before {{\{}} in 
{{synchronized (this)\{}}, and remove the extra spaces in {{synchronized ( writer )}}.
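
Concretely, something like this (a sketch of the requested style only):

{code}
class StyleExample {
  private final Object writer = new Object();

  void example() {
    synchronized (this) {    // space before '{', not "synchronized (this){"
      // ...
    }
    synchronized (writer) {  // no padding spaces, not "synchronized ( writer )"
      // ...
    }
  }
}
{code}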

> TestReplaceDatanodeOnFailure fails occasionally
> ---
>
> Key: HDFS-6101
> URL: https://issues.apache.org/jira/browse/HDFS-6101
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-6101.001.patch, HDFS-6101.002.patch, 
> TestReplaceDatanodeOnFailure.log
>
>
> Exception details in a comment below.
> The failure repros on both OS X and Linux if I run the test ~10 times in a 
> loop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9188) Make block corruption related tests FsDataset-agnostic.

2015-10-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14957754#comment-14957754
 ] 

Colin Patrick McCabe commented on HDFS-9188:


Thanks, [~eddyxu].

{code}
for(MaterializedReplica replica : replicas) {
  replica.corruptData(sb.toString().getBytes());
}
{code}
You need to set UTF-8 as the encoding.

+1 once that's resolved and pending jenkins
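
I.e., something along these lines ({{replicas}} and {{sb}} as in the quoted 
patch context; a sketch, not the final patch):

{code}
import java.nio.charset.StandardCharsets;

// Pass an explicit charset instead of relying on the platform default.
for (MaterializedReplica replica : replicas) {
  replica.corruptData(sb.toString().getBytes(StandardCharsets.UTF_8));
}
{code}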

> Make block corruption related tests FsDataset-agnostic. 
> 
>
> Key: HDFS-9188
> URL: https://issues.apache.org/jira/browse/HDFS-9188
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS, test
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-9188.000.patch, HDFS-9188.001.patch, 
> HDFS-9188.002.patch, HDFS-9188.003.patch, HDFS-9188.004.patch, 
> HDFS-9188.005.patch
>
>
> Currently, HDFS performs block corruption tests by directly accessing the 
> files stored in the storage directories, which assumes {{FsDatasetImpl}} is 
> the dataset implementation. However, with work like OZone (HDFS-7240) and 
> HDFS-8679, there will be different FsDataset implementations. 
> So we need a general way to run whitebox tests like corrupting blocks and CRC 
> files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9238) Update TestFileCreation#testLeaseExpireHardLimit() to avoid using DataNodeTestUtils#getFile()

2015-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14957735#comment-14957735
 ] 

Hudson commented on HDFS-9238:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8636 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8636/])
HDFS-9238. Update TestFileCreation.testLeaseExpireHardLimit() to avoid (lei: 
rev ba3c19787849a9cb9f805e2b6ef0f8485aa68f06)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java


> Update TestFileCreation#testLeaseExpireHardLimit() to avoid using 
> DataNodeTestUtils#getFile()
> -
>
> Key: HDFS-9238
> URL: https://issues.apache.org/jira/browse/HDFS-9238
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS, test
>Affects Versions: 2.7.1
>Reporter: Tony Wu
>Assignee: Tony Wu
>Priority: Trivial
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9238.001.patch
>
>
> TestFileCreation#testLeaseExpireHardLimit uses DataNodeTestUtils#getFile() to 
> open, read and verify blocks written on the DN. It’s better to use 
> getBlockInputStream() which does exactly the same thing but hides the detail 
> of getting the block file on disk.
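
For illustration, a hedged sketch of the preference described above ({{dn}} 
and {{block}} as test-side handles; method shapes approximate, not an exact 
API listing):

{code}
import java.io.InputStream;

// Before: resolve the on-disk block file directly (FsDatasetImpl-specific).
// File f = DataNodeTestUtils.getFile(dn, bpid, block.getBlockId());

// After: read through the dataset abstraction, which works for any FsDataset.
InputStream in = dn.getFSDataset().getBlockInputStream(block, 0);
{code}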



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9226) MiniDFSCluster leaks dependency Mockito via DataNodeTestUtils

2015-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14957733#comment-14957733
 ] 

Hadoop QA commented on HDFS-9226:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  21m 22s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |  10m 36s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  13m 25s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 32s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 51s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 44s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | minicluster tests |   0m 19s | Tests passed in 
hadoop-minicluster. |
| | |  48m 53s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766619/HDFS-9226.005.patch |
| Optional Tests | javadoc javac unit |
| git revision | trunk / dfa7848 |
| hadoop-minicluster test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12986/artifact/patchprocess/testrun_hadoop-minicluster.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12986/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12986/console |


This message was automatically generated.

> MiniDFSCluster leaks dependency Mockito via DataNodeTestUtils
> -
>
> Key: HDFS-9226
> URL: https://issues.apache.org/jira/browse/HDFS-9226
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS, test
>Reporter: Josh Elser
>Assignee: Josh Elser
> Attachments: HDFS-9226.001.patch, HDFS-9226.002.patch, 
> HDFS-9226.003.patch, HDFS-9226.004.patch, HDFS-9226.005.patch
>
>
> Noticed a test failure when attempting to run Accumulo unit tests against 
> 2.8.0-SNAPSHOT:
> {noformat}
> java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2421)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2323)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2367)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1529)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:479)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
>   at 
> org.apache.accumulo.start.test.AccumuloDFSBase.miniDfsClusterSetup(AccumuloDFSBase.java:67)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProvide

[jira] [Commented] (HDFS-9241) HDFS clients can't construct HdfsConfiguration instances

2015-10-14 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14957711#comment-14957711
 ] 

Mingliang Liu commented on HDFS-9241:
-

Thanks for reporting this [~steve_l]. I'll update the patch accordingly.

> HDFS clients can't construct HdfsConfiguration instances
> 
>
> Key: HDFS-9241
> URL: https://issues.apache.org/jira/browse/HDFS-9241
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
> Attachments: HDFS-9241.000.patch
>
>
> The changes for the hdfs client classpath make instantiating 
> {{HdfsConfiguration}} from the client impossible; it now lives only on the 
> server side. 
> This breaks any app which creates one.
> I know people will look at the {{@Private}} tag and say "don't do that then", 
> but it's worth considering precisely why I, at least, do this: it's the only 
> way to guarantee that the hdfs-default and hdfs-site resources get on the 
> classpath, including all the security settings. It's precisely the use case 
> which {{HdfsConfigurationLoader.init();}} offers internally to the hdfs code.
> What am I meant to do now? 
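
For context, a rough sketch of the pattern the description alludes to, using 
only the public {{Configuration}} API; this mimics, rather than replaces, what 
{{HdfsConfiguration}}'s static initializer guarantees:

{code}
import org.apache.hadoop.conf.Configuration;

public class ClientConf {
  public static Configuration create() {
    Configuration conf = new Configuration();
    // Force the HDFS resources onto the Configuration, the way
    // HdfsConfiguration's static initializer would (assumes both
    // files are on the classpath).
    conf.addResource("hdfs-default.xml");
    conf.addResource("hdfs-site.xml");
    return conf;
  }
}
{code}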



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9241) HDFS clients can't construct HdfsConfiguration instances

2015-10-14 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14957705#comment-14957705
 ] 

Steve Loughran commented on HDFS-9241:
--

Let's see what happens in the discussion before voting on this.

> HDFS clients can't construct HdfsConfiguration instances
> 
>
> Key: HDFS-9241
> URL: https://issues.apache.org/jira/browse/HDFS-9241
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
> Attachments: HDFS-9241.000.patch
>
>
> The changes for the hdfs client classpath make instantiating 
> {{HdfsConfiguration}} from the client impossible; it now lives only on the 
> server side. 
> This breaks any app which creates one.
> I know people will look at the {{@Private}} tag and say "don't do that then", 
> but it's worth considering precisely why I, at least, do this: it's the only 
> way to guarantee that the hdfs-default and hdfs-site resources get on the 
> classpath, including all the security settings. It's precisely the use case 
> which {{HdfsConfigurationLoader.init();}} offers internally to the hdfs code.
> What am I meant to do now? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8947) NameNode, DataNode and NFS gateway to support JvmPauseMonitor as a service

2015-10-14 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-8947:
-
Status: Open  (was: Patch Available)

> NameNode, DataNode and NFS gateway to support JvmPauseMonitor as a service
> --
>
> Key: HDFS-8947
> URL: https://issues.apache.org/jira/browse/HDFS-8947
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, namenode, nfs
>Affects Versions: 2.8.0
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Minor
> Attachments: 0001-HDFS-8947.patch, HADOOP-12321-003.patch, 
> HADOOP-12321-005-aggregated.patch
>
>
> As JvmPauseMonitor has been made an AbstractService, corresponding method 
> changes are needed in all places which use the monitor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8947) NameNode, DataNode and NFS gateway to support JvmPauseMonitor as a service

2015-10-14 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-8947:
-
Status: Patch Available  (was: Open)

> NameNode, DataNode and NFS gateway to support JvmPauseMonitor as a service
> --
>
> Key: HDFS-8947
> URL: https://issues.apache.org/jira/browse/HDFS-8947
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, namenode, nfs
>Affects Versions: 2.8.0
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Minor
> Attachments: 0001-HDFS-8947.patch, HADOOP-12321-003.patch, 
> HADOOP-12321-005-aggregated.patch
>
>
> As JvmPauseMonitor has been made an AbstractService, corresponding method 
> changes are needed in all places which use the monitor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9220) Reading small file (< 512 bytes) that is open for append fails due to incorrect checksum

2015-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14957682#comment-14957682
 ] 

Hadoop QA commented on HDFS-9220:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  24m 57s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   9m 16s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 50s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 26s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 52s | The applied patch generated  1 
new checkstyle issues (total was 60, now 60). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 40s | mvn install still works. |
| {color:red}-1{color} | eclipse:eclipse |   0m 35s | The patch failed to build 
with eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 50s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 32s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  53m 15s | Tests failed in hadoop-hdfs. |
| | | 110m 17s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
|   | hadoop.hdfs.TestCrcCorruption |
|   | hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | hadoop.fs.TestGlobPaths |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766603/HDFS-9220.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 56dc777 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12984/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12984/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12984/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12984/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12984/console |


This message was automatically generated.

> Reading small file (< 512 bytes) that is open for append fails due to 
> incorrect checksum
> 
>
> Key: HDFS-9220
> URL: https://issues.apache.org/jira/browse/HDFS-9220
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Bogdan Raducanu
>Assignee: Jing Zhao
>Priority: Blocker
> Attachments: HDFS-9220.000.patch, HDFS-9220.001.patch, 
> HDFS-9220.002.patch, test2.java
>
>
> Exception:
> 2015-10-09 14:59:40 WARN  DFSClient:1150 - fetchBlockByteRange(). Got a 
> checksum exception for /tmp/file0.05355529331575182 at 
> BP-353681639-10.10.10.10-1437493596883:blk_1075692769_9244882:0 from 
> DatanodeInfoWithStorage[10.10.10.10]:5001
> All 3 replicas cause this exception and the read fails entirely with:
> BlockMissingException: Could not obtain block: 
> BP-353681639-10.10.10.10-1437493596883:blk_1075692769_9244882 
> file=/tmp/file0.05355529331575182
> Code to reproduce is attached.
> Does not happen in 2.7.0.
> Data is read correctly if checksum verification is disabled.
> More generally, the failure happens when reading from the last block of a 
> file and the last block has <= 512 bytes.
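
A rough sketch of the reported scenario (not the attached test2.java; assumes 
a running cluster reachable via the default {{FileSystem}}):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SmallOpenFileRead {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path p = new Path("/tmp/file0");
    FSDataOutputStream out = fs.create(p);
    out.write(new byte[100]);  // fewer than 512 bytes: one partial chunk
    out.hsync();               // flush so readers can see the data
    // Keep `out` open: the last block is still under construction.
    byte[] buf = new byte[100];
    try (FSDataInputStream in = fs.open(p)) {
      in.readFully(0, buf);    // reported to fail with a checksum exception
    }
    out.close();
  }
}
{code}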



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9241) HDFS clients can't construct HdfsConfiguration instances

2015-10-14 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9241:

Attachment: HDFS-9241.000.patch

As discussed on common-dev@, we may fix this by making {{hadoop-client}} 
depend on {{hadoop-hdfs}} instead of {{hadoop-hdfs-client}}. Downstream users 
are free to exclude the server code by depending on {{hadoop-hdfs-client}} for 
a thin dependency.

> HDFS clients can't construct HdfsConfiguration instances
> 
>
> Key: HDFS-9241
> URL: https://issues.apache.org/jira/browse/HDFS-9241
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
> Attachments: HDFS-9241.000.patch
>
>
> The changes for the hdfs client classpath make instantiating 
> {{HdfsConfiguration}} from the client impossible; it now lives only on the 
> server side. 
> This breaks any app which creates one.
> I know people will look at the {{@Private}} tag and say "don't do that then", 
> but it's worth considering precisely why I, at least, do this: it's the only 
> way to guarantee that the hdfs-default and hdfs-site resources get on the 
> classpath, including all the security settings. It's precisely the use case 
> which {{HdfsConfigurationLoader.init();}} offers internally to the hdfs code.
> What am I meant to do now? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9241) HDFS clients can't construct HdfsConfiguration instances

2015-10-14 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9241:

Affects Version/s: (was: 3.0.0)
   Status: Patch Available  (was: Open)

> HDFS clients can't construct HdfsConfiguration instances
> 
>
> Key: HDFS-9241
> URL: https://issues.apache.org/jira/browse/HDFS-9241
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
> Attachments: HDFS-9241.000.patch
>
>
> The changes for the hdfs client classpath make instantiating 
> {{HdfsConfiguration}} from the client impossible; it now lives only on the 
> server side. 
> This breaks any app which creates one.
> I know people will look at the {{@Private}} tag and say "don't do that then", 
> but it's worth considering precisely why I, at least, do this: it's the only 
> way to guarantee that the hdfs-default and hdfs-site resources get on the 
> classpath, including all the security settings. It's precisely the use case 
> which {{HdfsConfigurationLoader.init();}} offers internally to the hdfs code.
> What am I meant to do now? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9231) fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot

2015-10-14 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9231:

Attachment: HDFS-9231.003.patch

> fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot
> ---
>
> Key: HDFS-9231
> URL: https://issues.apache.org/jira/browse/HDFS-9231
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9231.001.patch, HDFS-9231.002.patch, 
> HDFS-9231.003.patch
>
>
> For snapshot files, fsck shows corrupt blocks with the original file dir 
> instead of the snapshot dir.
> This can be confusing since even when the original file is deleted, a new 
> fsck run will still show that file as corrupted although what's actually 
> corrupted is the snapshot. 
> This is true even when given the -includeSnapshots option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9231) fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot

2015-10-14 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9231:

Status: Patch Available  (was: Open)

> fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot
> ---
>
> Key: HDFS-9231
> URL: https://issues.apache.org/jira/browse/HDFS-9231
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9231.001.patch, HDFS-9231.002.patch, 
> HDFS-9231.003.patch
>
>
> For snapshot files, fsck shows corrupt blocks with the original file dir 
> instead of the snapshot dir.
> This can be confusing since even when the original file is deleted, a new 
> fsck run will still show that file as corrupted although what's actually 
> corrupted is the snapshot. 
> This is true even when given the -includeSnapshots option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8647) Abstract BlockManager's rack policy into BlockPlacementPolicy

2015-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14957678#comment-14957678
 ] 

Hadoop QA commented on HDFS-8647:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  22m 14s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 5 new or modified test files. |
| {color:green}+1{color} | javac |   8m  4s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 26s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m  4s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |   0m  5s | The patch has 6  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 26s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests |   6m 42s | Tests failed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests |  74m 39s | Tests failed in hadoop-hdfs. |
| | | 131m 18s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.net.TestDNS |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.fs.TestSymlinkHdfsFileContext |
|   | hadoop.hdfs.TestInjectionForSimulatedStorage |
|   | hadoop.cli.TestHDFSCLI |
|   | hadoop.hdfs.server.datanode.TestBlockReplacement |
|   | hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer |
|   | hadoop.hdfs.TestLargeBlock |
|   | hadoop.fs.TestGlobPaths |
|   | hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup |
| Timed out tests | org.apache.hadoop.hdfs.TestReplication |
|   | org.apache.hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer |
|   | org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | org.apache.hadoop.hdfs.server.balancer.TestBalancer |
|   | org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766598/HDFS-8647-006.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 0d77e85 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12983/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12983/artifact/patchprocess/whitespace.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12983/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12983/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12983/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12983/console |


This message was automatically generated.

> Abstract BlockManager's rack policy into BlockPlacementPolicy
> -
>
> Key: HDFS-8647
> URL: https://issues.apache.org/jira/browse/HDFS-8647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-8647-001.patch, HDFS-8647-002.patch, 
> HDFS-8647-003.patch, HDFS-8647-004.patch, HDFS-8647-004.patch, 
> HDFS-8647-005.patch, HDFS-8647-006.patch
>
>
> Sometimes we want to have the namenode use an alternative block placement 
> policy, such as upgrade domains in HDFS-7541.
> BlockManager has built-in assumptions about rack policy in functions such as 
> useDelHint and blockHasEnoughRacks. That means that when we have a new block 
> placement policy, we need to modify BlockManager to account for it. Ideally 
> BlockManager should ask the BlockPlacementPolicy object instead. That will 
> allow us to provide new BlockPlacementPolicy implementations without changing 
> BlockManager.
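
A hypothetical sketch of the direction described above; names and signatures 
are illustrative only, not the actual patch API:

{code}
// BlockManager would delegate policy questions instead of hard-coding
// rack semantics. Signatures are placeholders, not the HDFS-8647 API.
abstract class BlockPlacementPolicy {
  /** Does the block's current replica set satisfy this policy?
   *  Generalizes BlockManager's hard-coded blockHasEnoughRacks(). */
  abstract boolean isPlacementPolicySatisfied(Object block);

  /** Should the given deletion hint be honored for this block?
   *  Generalizes BlockManager's hard-coded useDelHint(). */
  abstract boolean useDelHint(Object delHintNode, Object block);
}
{code}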



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9231) fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot

2015-10-14 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9231:

Attachment: HDFS-9231.003.patch

Patch 003 addresses the one related checkstyle warning.
The other warnings/errors are unrelated.

> fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot
> ---
>
> Key: HDFS-9231
> URL: https://issues.apache.org/jira/browse/HDFS-9231
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9231.001.patch, HDFS-9231.002.patch
>
>
> For snapshot files, fsck shows corrupt blocks with the original file dir 
> instead of the snapshot dir.
> This can be confusing since even when the original file is deleted, a new 
> fsck run will still show that file as corrupted although what's actually 
> corrupted is the snapshot. 
> This is true even when given the -includeSnapshots option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

