[jira] [Commented] (HDFS-8357) Consolidate parameters of INode.CleanSubtree() into a parameter object.

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14537047#comment-14537047
 ] 

Hudson commented on HDFS-8357:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7788 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7788/])
HDFS-8357. Consolidate parameters of INode.CleanSubtree() into a parameter 
object. Contributed by Li Lu. (wheat9: rev 
4536399d47f6c061e149e2504600804a0f1e093d)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeSymlink.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectoryWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/AbstractINodeDiff.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiff.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/AbstractINodeDiffList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestFileWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirDeleteOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeMap.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectorySnapshottableFeature.java


> Consolidate parameters of INode.CleanSubtree() into a parameter object.
> 
>
> Key: HDFS-8357
> URL: https://issues.apache.org/jira/browse/HDFS-8357
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Li Lu
> Fix For: 2.8.0
>
> Attachments: HDFS-8357-trunk.001.patch, HDFS-8357-trunk.002.patch
>
>
> {{INode.CleanSubtree()}} takes multiple parameters including 
> BlockStoragePolicySuite, removedBlocks and removedINodes. These parameters 
> are passed down multiple layers of the call chain.
> This jira proposes to refactor them into a parameter object so that it is 
> easier to make changes like HDFS-6757.
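
For illustration, a minimal sketch of the parameter-object shape the
description asks for (the class and field names here are hypothetical, not
taken from the patch; the types are existing HDFS classes):

{code}
// Before: every layer of the recursive cleanup must thread these through.
//   cleanSubtree(Snapshot snapshot, BlockStoragePolicySuite bsps,
//                BlocksMapUpdateInfo removedBlocks, List<INode> removedINodes)

// After: a single context object carries them down the call chain, so adding
// a field (e.g. for HDFS-6757) touches one class instead of every signature.
class ReclaimContext {
  final BlockStoragePolicySuite bsps;
  final BlocksMapUpdateInfo removedBlocks;
  final List<INode> removedINodes;

  ReclaimContext(BlockStoragePolicySuite bsps,
      BlocksMapUpdateInfo removedBlocks, List<INode> removedINodes) {
    this.bsps = bsps;
    this.removedBlocks = removedBlocks;
    this.removedINodes = removedINodes;
  }
}
//   cleanSubtree(ReclaimContext reclaimContext, Snapshot snapshot)
{code}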



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8357) Consolidate parameters of INode.CleanSubtree() into a parameter object.

2015-05-09 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-8357:
-
   Resolution: Fixed
Fix Version/s: 2.8.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~gtCarrera9] for the 
contribution.

> Consolidate parameters of INode.CleanSubtree() into a parameter object.
> 
>
> Key: HDFS-8357
> URL: https://issues.apache.org/jira/browse/HDFS-8357
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Li Lu
> Fix For: 2.8.0
>
> Attachments: HDFS-8357-trunk.001.patch, HDFS-8357-trunk.002.patch
>
>
> {{INode.CleanSubtree()}} takes multiple parameters including 
> BlockStoragePolicySuite, removedBlocks and removedINodes. These parameters 
> are passed down multiple layers of the call chain.
> This jira proposes to refactor them into a parameter object so that it is 
> easier to make changes like HDFS-6757.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8357) Consolidate parameters of INode.CleanSubtree() into a parameter object.

2015-05-09 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14537044#comment-14537044
 ] 

Haohui Mai commented on HDFS-8357:
--

The test failures are unrelated. +1. I'll commit shortly.

> Consolidate parameters of INode.CleanSubtree() into a parameter object.
> 
>
> Key: HDFS-8357
> URL: https://issues.apache.org/jira/browse/HDFS-8357
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Li Lu
> Attachments: HDFS-8357-trunk.001.patch, HDFS-8357-trunk.002.patch
>
>
> {{INode.CleanSubtree()}} takes multiple parameters including 
> BlockStoragePolicySuite, removedBlocks and removedINodes. These parameters 
> are passed down multiple layers of the call chain.
> This jira proposes to refactor them into a parameter object so that it is 
> easier to make changes like HDFS-6757.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8357) Consolidate parameters of INode.CleanSubtree() into a parameter object.

2015-05-09 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536997#comment-14536997
 ] 

Li Lu commented on HDFS-8357:
-

hadoop.hdfs.TestLeaseRecovery2 passed locally. The other two UTs are still 
failing on trunk as well. 

> Consolidate parameters of INode.CleanSubtree() into a parameter object.
> 
>
> Key: HDFS-8357
> URL: https://issues.apache.org/jira/browse/HDFS-8357
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Li Lu
> Attachments: HDFS-8357-trunk.001.patch, HDFS-8357-trunk.002.patch
>
>
> {{INode.CleanSubtree()}} takes multiple parameters including 
> BlockStoragePolicySuite, removedBlocks and removedINodes. These parameters 
> are passed down multiple layers of the call chain.
> This jira proposes to refactor them into a parameter object so that it is 
> easier to make changes like HDFS-6757.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8335) FSNamesystem/FSDirStatAndListingOp getFileInfo and getListingInt construct FSPermissionChecker regardless of isPermissionEnabled()

2015-05-09 Thread Gabor Liptak (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536939#comment-14536939
 ] 

Gabor Liptak commented on HDFS-8335:


https://builds.apache.org/job/PreCommit-HDFS-Build/10893/consoleText

says the compile failed, but doesn't seem to have the actual compilation error 
listed ...

> FSNamesystem/FSDirStatAndListingOp getFileInfo and getListingInt construct 
> FSPermissionChecker regardless of isPermissionEnabled()
> --
>
> Key: HDFS-8335
> URL: https://issues.apache.org/jira/browse/HDFS-8335
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.5.0, 2.6.0, 2.7.0, 2.8.0
>Reporter: David Bryson
> Attachments: HDFS-8335.patch
>
>
> FSNamesystem (2.5.x) / FSDirStatAndListingOp (current trunk) getFileInfo and 
> getListingInt methods call getPermissionChecker() to construct an 
> FSPermissionChecker regardless of isPermissionEnabled(). When permission 
> checking is disabled, this leads to an unnecessary performance hit 
> constructing a UserGroupInformation object that is never used.
> For example, in a stack dump taken while driving concurrent requests, they 
> all end up blocking.
> Here's the thread holding the lock:
> "IPC Server handler 9 on 9000" daemon prio=10 tid=0x7f78d8b9e800 
> nid=0x142f3 runnable [0x7f78c2ddc000]
>java.lang.Thread.State: RUNNABLE
> at java.io.FileInputStream.readBytes(Native Method)
> at java.io.FileInputStream.read(FileInputStream.java:272)
> at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
> at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
> - locked <0x0007d9b105c0> (a java.lang.UNIXProcess$ProcessPipeInputStream)
> at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
> at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325)
> at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
> - locked <0x0007d9b1a888> (a java.io.InputStreamReader)
> at java.io.InputStreamReader.read(InputStreamReader.java:184)
> at java.io.BufferedReader.fill(BufferedReader.java:154)
> at java.io.BufferedReader.read1(BufferedReader.java:205)
> at java.io.BufferedReader.read(BufferedReader.java:279)
> - locked <0x0007d9b1a888> (a java.io.InputStreamReader)
> at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.parseExecResult(Shell.java:715)
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:524)
> at org.apache.hadoop.util.Shell.run(Shell.java:455)
> at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702)
> at org.apache.hadoop.util.Shell.execCommand(Shell.java:791)
> at org.apache.hadoop.util.Shell.execCommand(Shell.java:774)
> at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:84)
> at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:52)
> at 
> org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
> at org.apache.hadoop.security.Groups.getGroups(Groups.java:139)
> at 
> org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1474)
> - locked <0x0007a6df75f8> (a 
> org.apache.hadoop.security.UserGroupInformation)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.&lt;init&gt;(FSPermissionChecker.java:82)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getPermissionChecker(FSNamesystem.java:3534)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4489)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4478)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:898)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:602)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
> Here is (one of the many) threads waiting on the lock:
> "IPC Server handler 2 on 9000" daem

[jira] [Commented] (HDFS-8311) DataStreamer.transfer() should timeout the socket InputStream.

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536711#comment-14536711
 ] 

Hudson commented on HDFS-8311:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2138 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2138/])
HDFS-8311. DataStreamer.transfer() should timeout the socket InputStream. 
(Esteban Gutierrez via Yongjun Zhang) (yzhang: rev 
730f9930a48259f34e48404aee51e8d641cc3d36)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java


> DataStreamer.transfer() should timeout the socket InputStream.
> --
>
> Key: HDFS-8311
> URL: https://issues.apache.org/jira/browse/HDFS-8311
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Esteban Gutierrez
>Assignee: Esteban Gutierrez
> Fix For: 2.8.0
>
> Attachments: 
> 0001-HDFS-8311-DataStreamer.transfer-should-timeout-the-s.patch, 
> HDFS-8311.001.patch
>
>
> While validating some HA failure modes we found that HDFS clients can take a 
> long time to recover, or sometimes don't recover at all, since we don't set up 
> the socket timeout in the InputStream:
> {code}
> private void transfer () { ...
> ...
>  OutputStream unbufOut = NetUtils.getOutputStream(sock, writeTimeout);
>  InputStream unbufIn = NetUtils.getInputStream(sock);
> ...
> }
> {code}
> The InputStream should have its own timeout in the same way as the 
> OutputStream.
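
NetUtils also has a timeout-taking overload for input streams, so the change
the description calls for is symmetric with the write side; a minimal sketch
(readTimeout is an illustrative variable name):

{code}
OutputStream unbufOut = NetUtils.getOutputStream(sock, writeTimeout);
// Give the read side its own socket timeout, mirroring the write side.
InputStream unbufIn = NetUtils.getInputStream(sock, readTimeout);
{code}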



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8346) libwebhdfs build fails during link due to unresolved external symbols.

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536708#comment-14536708
 ] 

Hudson commented on HDFS-8346:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2138 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2138/])
HDFS-8346. libwebhdfs build fails during link due to unresolved external 
symbols. Contributed by Chris Nauroth. (wheat9: rev 
f4ebbc6afc1297dced54bd2bd671e587c4ceb2fc)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_http_client.c
* hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/CMakeLists.txt
* hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_http_client.h


> libwebhdfs build fails during link due to unresolved external symbols.
> --
>
> Key: HDFS-8346
> URL: https://issues.apache.org/jira/browse/HDFS-8346
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.6.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 2.8.0
>
> Attachments: HDFS-8346.001.patch
>
>
> The libwebhdfs build is currently broken due to various unresolved external 
> symbols during link.  Multiple patches have introduced a few different forms 
> of this breakage.  See comments for full details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8274) NFS configuration nfs.dump.dir not working

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536706#comment-14536706
 ] 

Hudson commented on HDFS-8274:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2138 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2138/])
HDFS-8274. NFS configuration nfs.dump.dir not working (Contributed by Ajith S) 
(arp: rev cd6b26cce7457d08346b9d90a5f2f333ba4202d8)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/conf/NfsConfigKeys.java


> NFS configuration nfs.dump.dir not working
> --
>
> Key: HDFS-8274
> URL: https://issues.apache.org/jira/browse/HDFS-8274
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.0
>Reporter: Ajith S
>Assignee: Ajith S
>  Labels: BB2015-05-RFC
> Fix For: 2.8.0
>
> Attachments: HDFS-8274.patch
>
>
> As per the document 
> http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html
> we can configure
> {quote} 
> nfs.dump.dir
> {quote}
> as the NFS file dump directory, but using this configuration in *hdfs-site.xml* 
> doesn't work: when the NFS gateway is started, the default location is used, 
> i.e. /tmp/.hdfs-nfs.
> The reason is that the key expected in *NfsConfigKeys.java* is
> {code}
> public static final String DFS_NFS_FILE_DUMP_DIR_KEY = "nfs.file.dump.dir";
> {code}
> We can change it to *nfs.dump.dir* instead.
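
In other words, the one-line change the description suggests (sketch of the
constant in NfsConfigKeys.java):

{code}
public static final String DFS_NFS_FILE_DUMP_DIR_KEY = "nfs.dump.dir";
{code}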



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8326) Documentation about when checkpoints are run is out of date

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536713#comment-14536713
 ] 

Hudson commented on HDFS-8326:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2138 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2138/])
HDFS-8326. Documentation about when checkpoints are run is out of date. (Misty 
Stanley-Jones via xyao) (xyao: rev d0e75e60fb16ffd6c95648a06ff3958722f71e4d)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md


> Documentation about when checkpoints are run is out of date
> ---
>
> Key: HDFS-8326
> URL: https://issues.apache.org/jira/browse/HDFS-8326
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.3.0
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Fix For: 2.8.0
>
> Attachments: HDFS-8326.001.patch, HDFS-8326.002.patch, 
> HDFS-8326.003.patch, HDFS-8326.004.patch, HDFS-8326.patch
>
>
> Apparently checkpointing by time interval and by transaction count are both 
> supported in at least HDFS 2.3, but the documentation does not reflect this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8340) Fix NFS documentation of nfs.wtmax

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536692#comment-14536692
 ] 

Hudson commented on HDFS-8340:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2138 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2138/])
HDFS-8340. Fix NFS documentation of nfs.wtmax. (Contributed by Ajith S) (arp: 
rev a2d40bced9f793c8c4193f1447425ca7f3f8f357)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsNfsGateway.md
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Fix NFS documentation of nfs.wtmax
> --
>
> Key: HDFS-8340
> URL: https://issues.apache.org/jira/browse/HDFS-8340
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, nfs
>Affects Versions: 2.7.0
>Reporter: Ajith S
>Assignee: Ajith S
>Priority: Minor
>  Labels: BB2015-05-RFC
> Fix For: 2.8.0
>
> Attachments: HDFS-8340.patch
>
>
> According to documentation
> http://hadoop.apache.org/docs/r2.7.0/hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html
> bq. For larger data transfer size, one needs to update “nfs.rtmax” and 
> “nfs.rtmax” in hdfs-site.xml.
> nfs.rtmax is mentioned twice; instead it should be “nfs.rtmax” and 
> “nfs.wtmax”.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8284) Update documentation about how to use HTrace with HDFS

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536690#comment-14536690
 ] 

Hudson commented on HDFS-8284:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2138 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2138/])
HDFS-8284. Update documentation about how to use HTrace with HDFS (Masatake 
Iwasaki via Colin P. McCabe) (cmccabe: rev 
8f7c2364d7254a1d987b095ba442bf20727796f8)
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* hadoop-common-project/hadoop-common/src/site/markdown/Tracing.md
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Update documentation about how to use HTrace with HDFS
> --
>
> Key: HDFS-8284
> URL: https://issues.apache.org/jira/browse/HDFS-8284
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0
>
> Attachments: HDFS-8284.001.patch, HDFS-8284.002.patch, 
> HDFS-8284.003.patch
>
>
> Tracing originating in DFSClient uses configuration keys prefixed with 
> "dfs.client.htrace" after HDFS-8213. Server-side tracing uses conf keys 
> prefixed with "dfs.htrace".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7433) Optimize performance of DatanodeManager's node map

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536688#comment-14536688
 ] 

Hudson commented on HDFS-7433:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2138 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2138/])
HDFS-7433. Optimize performance of DatanodeManager's node map. Contributed by 
Daryn Sharp. (kihwal: rev 7a7960be41c32f20ffec9fea811878b113da62db)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Optimize performance of DatanodeManager's node map
> --
>
> Key: HDFS-7433
> URL: https://issues.apache.org/jira/browse/HDFS-7433
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-7433.patch, HDFS-7433.patch, HDFS-7433.patch, 
> HDFS-7433.patch
>
>
> The datanode map is currently a {{TreeMap}}.  For many thousands of 
> datanodes, tree lookups are ~10X more expensive than in a {{HashMap}}.  
> Insertions and removals are up to 100X more expensive.
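
A toy benchmark illustrates the gap (illustrative only, not from the JIRA;
exact ratios depend on the JVM, heap, and key distribution):

{code}
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class MapLookupBench {
  public static void main(String[] args) {
    final int n = 100_000;
    Map<String, Integer> tree = new TreeMap<String, Integer>();
    Map<String, Integer> hash = new HashMap<String, Integer>();
    for (int i = 0; i < n; i++) {
      tree.put("datanode-uuid-" + i, i);
      hash.put("datanode-uuid-" + i, i);
    }
    for (Map<String, Integer> m : Arrays.asList(tree, hash)) {
      long start = System.nanoTime();
      long sum = 0;
      for (int i = 0; i < n; i++) {
        // O(log n) per lookup for TreeMap vs ~O(1) for HashMap.
        sum += m.get("datanode-uuid-" + i);
      }
      System.out.printf("%s: %.1f ms (checksum %d)%n",
          m.getClass().getSimpleName(), (System.nanoTime() - start) / 1e6, sum);
    }
  }
}
{code}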



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8113) Add check for null BlockCollection pointers in BlockInfoContiguous structures

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536722#comment-14536722
 ] 

Hudson commented on HDFS-8113:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2138 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2138/])
HDFS-8113. Add check for null BlockCollection pointers in BlockInfoContiguous 
structures (Chengbing Liu via Colin P. McCabe) (cmccabe: rev 
f523e963e4d88e4e459352387c6efeab59e7a809)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Add check for null BlockCollection pointers in BlockInfoContiguous structures
> -
>
> Key: HDFS-8113
> URL: https://issues.apache.org/jira/browse/HDFS-8113
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Chengbing Liu
>Assignee: Chengbing Liu
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-8113.02.patch, HDFS-8113.patch
>
>
> The following copy constructor can throw a NullPointerException if {{bc}} is 
> null.
> {code}
> protected BlockInfoContiguous(BlockInfoContiguous from) {
>   this(from, from.bc.getBlockReplication());
>   this.bc = from.bc;
> }
> {code}
> We have observed that some DataNodes keep failing to complete block reports 
> to the NameNode. The stack trace is as follows. Though we are not using the 
> latest version, the problem still exists.
> {quote}
> 2015-03-08 19:28:13,442 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> RemoteException in offerService
> org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> java.lang.NullPointerException
> at org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.&lt;init&gt;(BlockInfo.java:80)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockToMarkCorrupt.&lt;init&gt;(BlockManager.java:1696)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.checkReplicaCorrupt(BlockManager.java:2185)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReportedBlock(BlockManager.java:2047)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.reportDiff(BlockManager.java:1950)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:1823)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:1750)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.blockReport(NameNodeRpcServer.java:1069)
> at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.blockReport(DatanodeProtocolServerSideTranslatorPB.java:152)
> at 
> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:26382)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1623)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
> {quote}
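
A hedged sketch of the kind of guard the title asks for, using Guava's
Preconditions (simplified; the committed patch may differ):

{code}
protected BlockInfoContiguous(BlockInfoContiguous from) {
  // Fail fast with a descriptive message instead of an opaque NPE when the
  // source block has no BlockCollection attached.
  this(from, Preconditions.checkNotNull(from.bc,
      "BlockCollection is null for " + from).getBlockReplication());
  this.bc = from.bc;
}
{code}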



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8327) Simplify quota calculations for snapshots and truncate

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536710#comment-14536710
 ] 

Hudson commented on HDFS-8327:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2138 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2138/])
HDFS-8327. Compute storage type quotas in 
INodeFile.computeQuotaDeltaForTruncate(). Contributed by Haohui Mai. (wheat9: 
rev 02a4a22b9c0e22c2e7dd6ec85edd5c5a167fe19f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestTruncateQuotaUpdate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestFileWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCommitBlockSynchronization.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java


> Simplify quota calculations for snapshots and truncate
> --
>
> Key: HDFS-8327
> URL: https://issues.apache.org/jira/browse/HDFS-8327
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-8327.000.patch, HDFS-8327.001.patch, 
> HDFS-8327.002.patch, HDFS-8327.003.patch, HDFS-8327.004.patch
>
>
> To simplify the code, {{INodeFile.computeQuotaDeltaForTruncate()}} can compute 
> the storage type quotas as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7559) Create unit test to automatically compare HDFS related classes and hdfs-default.xml

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536714#comment-14536714
 ] 

Hudson commented on HDFS-7559:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2138 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2138/])
HDFS-7559. Create unit test to automatically compare HDFS related classes and 
hdfs-default.xml. (Ray Chiang via asuresh) (Arun Suresh: rev 
3cefc02af73faa12a6edce904b98ba543167bec5)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestHdfsConfigFields.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Create unit test to automatically compare HDFS related classes and 
> hdfs-default.xml
> ---
>
> Key: HDFS-7559
> URL: https://issues.apache.org/jira/browse/HDFS-7559
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Minor
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HDFS-7559.001.patch, HDFS-7559.002.patch, 
> HDFS-7559.003.patch, HDFS-7559.004.patch
>
>
> Create a unit test that will automatically compare the fields in the various 
> HDFS related classes and hdfs-default.xml. It should throw an error if a 
> property is missing in either the class or the file.
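
A minimal sketch of the comparison idea (self-contained and illustrative; the
committed test is organized differently):

{code}
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.HashSet;
import java.util.Set;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class ConfigFieldsCheck {
  public static void main(String[] args) throws Exception {
    // Property names declared in hdfs-default.xml (found on the classpath).
    Set<String> xmlProps = new HashSet<String>();
    Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
        .parse(ConfigFieldsCheck.class.getResourceAsStream("/hdfs-default.xml"));
    NodeList names = doc.getElementsByTagName("name");
    for (int i = 0; i < names.getLength(); i++) {
      xmlProps.add(names.item(i).getTextContent().trim());
    }
    // Public static String constants ending in _KEY from the config class.
    Set<String> classProps = new HashSet<String>();
    for (Field f : Class.forName("org.apache.hadoop.hdfs.DFSConfigKeys").getFields()) {
      if (Modifier.isStatic(f.getModifiers()) && f.getType() == String.class
          && f.getName().endsWith("_KEY")) {
        classProps.add((String) f.get(null));
      }
    }
    // Anything present on one side only is a candidate gap.
    Set<String> onlyInClass = new HashSet<String>(classProps);
    onlyInClass.removeAll(xmlProps);
    Set<String> onlyInXml = new HashSet<String>(xmlProps);
    onlyInXml.removeAll(classProps);
    System.out.println("In class but not in hdfs-default.xml: " + onlyInClass);
    System.out.println("In hdfs-default.xml but not in class: " + onlyInXml);
  }
}
{code}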



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6757) Simplify lease manager with INodeID

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536683#comment-14536683
 ] 

Hudson commented on HDFS-6757:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2138 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2138/])
HDFS-6757. Simplify lease manager with INodeID. Contributed by Haohui Mai. 
(wheat9: rev 00fe1ed3a4b3ee35fe24be257ec36445d2f44d63)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirDeleteOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeSymlink.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeMap.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestLeaseManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectorySnapshottableFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectoryWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestGetBlockLocations.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestINodeFileUnderConstructionWithSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/AbstractINodeDiffList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDiskspaceQuotaUpdate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLease.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NameNodeAdapter.java
Add missing entry in CHANGES.txt for HDFS-6757. (wheat9: rev 
3becc3af8382caed2c3bf941f8fed6daf6e7bc26)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Simplify lease manager with INodeID
> ---
>
> Key: HDFS-6757
> URL: https://issues.apache.org/jira/browse/HDFS-6757
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.8.0
>
> Attachments: HDFS-6757.000.patch, HDFS-6757.001.patch, 
> HDFS-6757.002.patch, HDFS-6757.003.patch, HDFS-6757.004.patch, 
> HDFS-6757.005.patch, HDFS-6757.006.patch, HDFS-6757.007.patch, 
> HDFS-6757.008.patch, HDFS-6757.009.patch, HDFS-6757.010.patch, 
> HDFS-6757.011.patch, HDFS-6757.012.patch, HDFS-6757.013.patch, 
> HDFS-6757.014.patch, HDFS-6757.015.patch, HDFS-6757.016.patch, 
> HDFS-6757.017.patch
>
>
> Currently the lease manager records leases based on paths instead of inode 
> IDs. Therefore, the lease manager needs to carefully keep track of the paths 
> of active leases during renames and deletes. This can be a non-trivial task.
> This jira proposes to simplify the logic by tracking leases using inode IDs 
> instead of paths.
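
The gist of the proposal as a sketch (field names and type parameters are
illustrative):

{code}
// Before: keyed by path, so every rename or delete under an open file must
// rewrite lease keys to keep the map consistent.
SortedMap<String, Lease> leasesByPath;

// After: keyed by the file's inode id, which is immutable for the file's
// lifetime, so renames and deletes need no lease bookkeeping at all.
Map<Long, Lease> leasesById;
{code}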



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8245) Standby namenode doesn't process DELETED_BLOCK if the add block request is in edit log.

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536687#comment-14536687
 ] 

Hudson commented on HDFS-8245:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2138 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2138/])
HDFS-8245. Standby namenode doesn't process DELETED_BLOCK if the addblock 
request is in edit log. Contributed by Rushabh S Shah. (kihwal: rev 
2d4ae3d18bc530fa9f81ee616db8af3395705fb9)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencing.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockReplacement.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Standby namenode doesn't process DELETED_BLOCK if the add block request is in 
> edit log.
> ---
>
> Key: HDFS-8245
> URL: https://issues.apache.org/jira/browse/HDFS-8245
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>  Labels: BB2015-05-TBR
> Fix For: 2.7.1
>
> Attachments: HDFS-8245-1.patch, HDFS-8245.patch
>
>
> The following series of events happened on Standby namenode :
> 2015-04-09 07:47:21,735 \[Edit log tailer] INFO ha.EditLogTailer: Triggering 
> log roll on remote NameNode Active Namenode (ANN)
> 2015-04-09 07:58:01,858 \[Edit log tailer] INFO ha.EditLogTailer: Triggering 
> log roll on remote NameNode ANN
> The following series of events happened on Active Namenode:,
> 2015-04-09 07:47:21,747 \[IPC Server handler 99 on 8020] INFO 
> namenode.FSNamesystem: Roll Edit Log from Standby NN (SNN)
> 2015-04-09 07:58:01,868 \[IPC Server handler 18 on 8020] INFO 
> namenode.FSNamesystem: Roll Edit Log from SNN
> The following series of events happened on datanode ( {color:red} datanodeA 
> {color}):
> 2015-04-09 07:52:15,817 \[DataXceiver for client 
> DFSClient_attempt_1428022041757_102831_r_000107_0_1139131345_1 at /:51078 
> \[Receiving block 
> BP-595383232--1360869396230:blk_1570321882_1102029183867]] INFO 
> datanode.DataNode: Receiving 
> BP-595383232--1360869396230:blk_1570321882_1102029183867 src: 
> /client:51078 dest: /{color:red}datanodeA:1004{color}
> 2015-04-09 07:52:15,969 \[PacketResponder: 
> BP-595383232--1360869396230:blk_1570321882_1102029183867, 
> type=HAS_DOWNSTREAM_IN_PIPELINE] INFO DataNode.clienttrace: src: 
> /client:51078, dest: /{color:red}datanodeA:1004{color}, bytes: 20, op: 
> HDFS_WRITE, cliID: 
> DFSClient_attempt_1428022041757_102831_r_000107_0_1139131345_1, offset: 0, 
> srvID: 356a8a98-826f-446d-8f4c-ce288c1f0a75, blockid: 
> BP-595383232--1360869396230:blk_1570321882_1102029183867, duration: 
> 148948385
> 2015-04-09 07:52:15,969 \[PacketResponder: 
> BP-595383232--1360869396230:blk_1570321882_1102029183867, 
> type=HAS_DOWNSTREAM_IN_PIPELINE] INFO datanode.DataNode: PacketResponder: 
> BP-595383232--1360869396230:blk_1570321882_1102029183867, 
> type=HAS_DOWNSTREAM_IN_PIPELINE terminating
> 2015-04-09 07:52:25,970 \[DataXceiver for client /{color:red} 
> {color}:52827 \[Copying block 
> BP-595383232--1360869396230:blk_1570321882_1102029183867]] INFO 
> datanode.DataNode: Copied 
> BP-595383232--1360869396230:blk_1570321882_1102029183867 to 
> <{color:red}datanodeB{color}>:52827
> 2015-04-09 07:52:28,187 \[DataNode:   heartbeating to ANN:8020] INFO 
> impl.FsDatasetAsyncDiskService: Scheduling blk_1570321882_1102029183867 file 
> /blk_1570321882 for deletion
> 2015-04-09 07:52:28,188 \[Async disk worker #1482 for volume ] INFO 
> impl.FsDatasetAsyncDiskService: Deleted BP-595383232--1360869396230 
> blk_1570321882_1102029183867 file /blk_1570321882
> Then we failed over for an upgrade, and then the standby became active.
> When we ran an ls command on this file, we got the following exception:
> 15/04/09 22:07:39 WARN hdfs.BlockReaderFactory: I/O error constructing remote 
> block reader.
> java.io.IOException: Got error for OP_READ_BLOCK, self=/client:32947, 
> remote={color:red}datanodeA:1004{color}, for file , for pool 
> BP-595383232--1360869396230 block 1570321882_1102029183867
> at 
> org.apache.hadoop.hdfs.RemoteBlockReader2.checkSuccess(RemoteBlockReader2.java:445)
> at 
> org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:410)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:815)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:693)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory

[jira] [Commented] (HDFS-8097) TestFileTruncate is failing intermittently

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536682#comment-14536682
 ] 

Hudson commented on HDFS-8097:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2138 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2138/])
HDFS-8097. TestFileTruncate is failing intermittently. (Contributed by Rakesh 
R) (arp: rev 59995cec4ad9efcef7d4641375ca3eb40e2429ef)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> TestFileTruncate is failing intermittently
> --
>
> Key: HDFS-8097
> URL: https://issues.apache.org/jira/browse/HDFS-8097
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Rakesh R
>Assignee: Rakesh R
>  Labels: BB2015-05-RFC
> Fix For: 2.8.0
>
> Attachments: HDFS-8097-001.patch, HDFS-8097-002.patch, 
> HDFS-8097-003.patch
>
>
> {code}
> java.lang.AssertionError: Bad disk space usage expected:<45> but was:<12>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncate4Symlink(TestFileTruncate.java:1158)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
>   at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5640) Add snapshot methods to FileContext.

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536679#comment-14536679
 ] 

Hudson commented on HDFS-5640:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2138 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2138/])
HDFS-5640. Add snapshot methods to FileContext. Contributed by Rakesh R. 
(cnauroth: rev 26f61d41df9e90a5053d9265f535cc492196f2a5)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestFileContextSnapshot.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java


> Add snapshot methods to FileContext.
> 
>
> Key: HDFS-5640
> URL: https://issues.apache.org/jira/browse/HDFS-5640
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, snapshots
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Chris Nauroth
>Assignee: Rakesh R
> Fix For: 2.8.0
>
> Attachments: HDFS-5640-001.patch, HDFS-5640-002.patch, 
> HDFS-5640-003.patch, HDFS-5640-004.patch, HDFS-5640-005.patch, 
> HDFS-5640-007.patch, HDFS-5640-007.patch
>
>
> Currently, methods related to HDFS snapshots are defined on {{FileSystem}}.  
> For feature parity, these methods need to be added to {{FileContext}}.  This 
> would also require updating {{AbstractFileSystem}} and the {{Hdfs}} subclass.
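
For a sense of the shape, a hedged sketch of one such FileContext method
delegating to the underlying AbstractFileSystem (simplified; the real
FileContext plumbing also resolves symlinks):

{code}
public Path createSnapshot(final Path path, final String snapshotName)
    throws IOException {
  // Resolve the path against the working directory, then delegate to the
  // AbstractFileSystem (which Hdfs implements on top of DFSClient).
  final Path absPath = fixRelativePart(path);
  return defaultFS.createSnapshot(absPath, snapshotName);
}
{code}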



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8362) Java Compilation Error in TestHdfsConfigFields.java and TestMapreduceConfigFields.java

2015-05-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536678#comment-14536678
 ] 

Hadoop QA commented on HDFS-8362:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   5m 11s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   7m 28s | There were no new javac warning 
messages. |
| {color:green}+1{color} | release audit |   0m 20s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 31s | The applied patch generated  
75 new checkstyle issues (total was 1679, now 1681). |
| {color:green}+1{color} | whitespace |   0m  7s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 31s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 55s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native |   1m 19s | Pre-build of native portion |
| {color:green}+1{color} | mapreduce tests |   9m 16s | Tests passed in 
hadoop-mapreduce-client-app. |
| {color:red}-1{color} | hdfs tests | 163m 55s | Tests failed in hadoop-hdfs. |
| | | 196m 27s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.tracing.TestTraceAdmin |
|   | hadoop.hdfs.TestDatanodeDeath |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12731708/HDFS-8362-1.patch |
| Optional Tests | javac unit findbugs checkstyle |
| git revision | trunk / 02a4a22 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/10902/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-mapreduce-client-app test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10902/artifact/patchprocess/testrun_hadoop-mapreduce-client-app.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10902/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10902/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10902/console |


This message was automatically generated.

> Java Compilation Error in TestHdfsConfigFields.java and 
> TestMapreduceConfigFields.java
> --
>
> Key: HDFS-8362
> URL: https://issues.apache.org/jira/browse/HDFS-8362
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Arshad Mohammad
>Assignee: Arshad Mohammad
> Fix For: 2.8.0
>
> Attachments: HDFS-8362-1.patch
>
>
> In TestHdfsConfigFields.java the failure is because of a wrong package name.
> In TestMapreduceConfigFields.java the failure is because of:
> i) a wrong package name
> ii) missing imports



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8327) Simplify quota calculations for snapshots and truncate

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536663#comment-14536663
 ] 

Hudson commented on HDFS-8327:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #190 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/190/])
HDFS-8327. Compute storage type quotas in 
INodeFile.computeQuotaDeltaForTruncate(). Contributed by Haohui Mai. (wheat9: 
rev 02a4a22b9c0e22c2e7dd6ec85edd5c5a167fe19f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCommitBlockSynchronization.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestTruncateQuotaUpdate.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestFileWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java


> Simplify quota calculations for snapshots and truncate
> --
>
> Key: HDFS-8327
> URL: https://issues.apache.org/jira/browse/HDFS-8327
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-8327.000.patch, HDFS-8327.001.patch, 
> HDFS-8327.002.patch, HDFS-8327.003.patch, HDFS-8327.004.patch
>
>
> To simplify the code, {{INodeFile.computeQuotaDeltaForTruncate()}} can compute 
> the storage type quotas as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7433) Optimize performance of DatanodeManager's node map

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536641#comment-14536641
 ] 

Hudson commented on HDFS-7433:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #190 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/190/])
HDFS-7433. Optimize performance of DatanodeManager's node map. Contributed by 
Daryn Sharp. (kihwal: rev 7a7960be41c32f20ffec9fea811878b113da62db)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Optimize performance of DatanodeManager's node map
> --
>
> Key: HDFS-7433
> URL: https://issues.apache.org/jira/browse/HDFS-7433
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-7433.patch, HDFS-7433.patch, HDFS-7433.patch, 
> HDFS-7433.patch
>
>
> The datanode map is currently a {{TreeMap}}.  For many thousands of 
> datanodes, tree lookups are ~10X more expensive than in a {{HashMap}}.  
> Insertions and removals are up to 100X more expensive.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8274) NFS configuration nfs.dump.dir not working

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536659#comment-14536659
 ] 

Hudson commented on HDFS-8274:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #190 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/190/])
HDFS-8274. NFS configuration nfs.dump.dir not working (Contributed by Ajith S) 
(arp: rev cd6b26cce7457d08346b9d90a5f2f333ba4202d8)
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/conf/NfsConfigKeys.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> NFS configuration nfs.dump.dir not working
> --
>
> Key: HDFS-8274
> URL: https://issues.apache.org/jira/browse/HDFS-8274
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.0
>Reporter: Ajith S
>Assignee: Ajith S
>  Labels: BB2015-05-RFC
> Fix For: 2.8.0
>
> Attachments: HDFS-8274.patch
>
>
> As per the document 
> http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html
> we can configure
> {quote} 
> nfs.dump.dir
> {quote}
> as the NFS file dump directory, but using this configuration in *hdfs-site.xml* 
> doesn't work: when the NFS gateway is started, the default location is used, 
> i.e. /tmp/.hdfs-nfs.
> The reason is that the key expected in *NfsConfigKeys.java* is
> {code}
> public static final String DFS_NFS_FILE_DUMP_DIR_KEY = "nfs.file.dump.dir";
> {code}
> We can change it to *nfs.dump.dir* instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7559) Create unit test to automatically compare HDFS related classes and hdfs-default.xml

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536667#comment-14536667
 ] 

Hudson commented on HDFS-7559:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #190 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/190/])
HDFS-7559. Create unit test to automatically compare HDFS related classes and 
hdfs-default.xml. (Ray Chiang via asuresh) (Arun Suresh: rev 
3cefc02af73faa12a6edce904b98ba543167bec5)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestHdfsConfigFields.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Create unit test to automatically compare HDFS related classes and 
> hdfs-default.xml
> ---
>
> Key: HDFS-7559
> URL: https://issues.apache.org/jira/browse/HDFS-7559
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Minor
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HDFS-7559.001.patch, HDFS-7559.002.patch, 
> HDFS-7559.003.patch, HDFS-7559.004.patch
>
>
> Create a unit test that will automatically compare the fields in the various 
> HDFS related classes and hdfs-default.xml. It should throw an error if a 
> property is missing in either the class or the file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8346) libwebhdfs build fails during link due to unresolved external symbols.

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536661#comment-14536661
 ] 

Hudson commented on HDFS-8346:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #190 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/190/])
HDFS-8346. libwebhdfs build fails during link due to unresolved external 
symbols. Contributed by Chris Nauroth. (wheat9: rev 
f4ebbc6afc1297dced54bd2bd671e587c4ceb2fc)
* hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_http_client.h
* hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_http_client.c
* hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/CMakeLists.txt
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> libwebhdfs build fails during link due to unresolved external symbols.
> --
>
> Key: HDFS-8346
> URL: https://issues.apache.org/jira/browse/HDFS-8346
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.6.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 2.8.0
>
> Attachments: HDFS-8346.001.patch
>
>
> The libwebhdfs build is currently broken due to various unresolved external 
> symbols during link.  Multiple patches have introduced a few different forms 
> of this breakage.  See comments for full details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8097) TestFileTruncate is failing intermittently

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536635#comment-14536635
 ] 

Hudson commented on HDFS-8097:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #190 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/190/])
HDFS-8097. TestFileTruncate is failing intermittently. (Contributed by Rakesh 
R) (arp: rev 59995cec4ad9efcef7d4641375ca3eb40e2429ef)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> TestFileTruncate is failing intermittently
> --
>
> Key: HDFS-8097
> URL: https://issues.apache.org/jira/browse/HDFS-8097
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Rakesh R
>Assignee: Rakesh R
>  Labels: BB2015-05-RFC
> Fix For: 2.8.0
>
> Attachments: HDFS-8097-001.patch, HDFS-8097-002.patch, 
> HDFS-8097-003.patch
>
>
> {code}
> java.lang.AssertionError: Bad disk space usage expected:<45> but was:<12>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncate4Symlink(TestFileTruncate.java:1158)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
>   at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8340) Fix NFS documentation of nfs.wtmax

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536645#comment-14536645
 ] 

Hudson commented on HDFS-8340:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #190 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/190/])
HDFS-8340. Fix NFS documentation of nfs.wtmax. (Contributed by Ajith S) (arp: 
rev a2d40bced9f793c8c4193f1447425ca7f3f8f357)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsNfsGateway.md
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Fix NFS documentation of nfs.wtmax
> --
>
> Key: HDFS-8340
> URL: https://issues.apache.org/jira/browse/HDFS-8340
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, nfs
>Affects Versions: 2.7.0
>Reporter: Ajith S
>Assignee: Ajith S
>Priority: Minor
>  Labels: BB2015-05-RFC
> Fix For: 2.8.0
>
> Attachments: HDFS-8340.patch
>
>
> According to documentation
> http://hadoop.apache.org/docs/r2.7.0/hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html
> bq. For larger data transfer size, one needs to update “nfs.rtmax” and 
> “nfs.rtmax” in hdfs-site.xml.
> nfs.rtmax is mentioned twice; instead it should be “nfs.rtmax” and 
> “nfs.wtmax”.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8284) Update documentation about how to use HTrace with HDFS

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536643#comment-14536643
 ] 

Hudson commented on HDFS-8284:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #190 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/190/])
HDFS-8284. Update documentation about how to use HTrace with HDFS (Masatake 
Iwasaki via Colin P. McCabe) (cmccabe: rev 
8f7c2364d7254a1d987b095ba442bf20727796f8)
* hadoop-common-project/hadoop-common/src/site/markdown/Tracing.md
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Update documentation about how to use HTrace with HDFS
> --
>
> Key: HDFS-8284
> URL: https://issues.apache.org/jira/browse/HDFS-8284
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0
>
> Attachments: HDFS-8284.001.patch, HDFS-8284.002.patch, 
> HDFS-8284.003.patch
>
>
> Tracing originating in the DFSClient uses configuration keys prefixed with 
> "dfs.client.htrace" after HDFS-8213. Server-side tracing uses conf keys 
> prefixed with "dfs.htrace".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8113) Add check for null BlockCollection pointers in BlockInfoContiguous structures

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536675#comment-14536675
 ] 

Hudson commented on HDFS-8113:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #190 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/190/])
HDFS-8113. Add check for null BlockCollection pointers in BlockInfoContiguous 
structures (Chengbing Liu via Colin P. McCabe) (cmccabe: rev 
f523e963e4d88e4e459352387c6efeab59e7a809)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockInfo.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Add check for null BlockCollection pointers in BlockInfoContiguous structures
> -
>
> Key: HDFS-8113
> URL: https://issues.apache.org/jira/browse/HDFS-8113
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Chengbing Liu
>Assignee: Chengbing Liu
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-8113.02.patch, HDFS-8113.patch
>
>
> The following copy constructor can throw NullPointerException if {{bc}} is 
> null.
> {code}
>   protected BlockInfoContiguous(BlockInfoContiguous from) {
> this(from, from.bc.getBlockReplication());
> this.bc = from.bc;
>   }
> {code}
> We have observed that some DataNodes keep failing to send block reports to 
> the NameNode. The stack trace is as follows. Though we are not using the 
> latest version, the problem still exists.
> {quote}
> 2015-03-08 19:28:13,442 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> RemoteException in offerService
> org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> java.lang.NullPointerException
> at org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.(BlockInfo.java:80)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockToMarkCorrupt.(BlockManager.java:1696)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.checkReplicaCorrupt(BlockManager.java:2185)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReportedBlock(BlockManager.java:2047)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.reportDiff(BlockManager.java:1950)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:1823)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:1750)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.blockReport(NameNodeRpcServer.java:1069)
> at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.blockReport(DatanodeProtocolServerSideTranslatorPB.java:152)
> at 
> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:26382)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1623)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
> {quote}
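A minimal null-safe variant of the copy constructor, as a sketch of the check the title asks for; the fallback value and the style of guard are assumptions, and the committed patch may handle the null case differently:

{code}
// Sketch: avoid dereferencing from.bc when the source block is no longer
// attached to a BlockCollection (e.g. its file was deleted concurrently).
protected BlockInfoContiguous(BlockInfoContiguous from) {
  this(from, from.bc == null
      ? (short) 0                    // assumed fallback when detached
      : from.bc.getBlockReplication());
  this.bc = from.bc;
}
{code}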



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8311) DataStreamer.transfer() should timeout the socket InputStream.

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536664#comment-14536664
 ] 

Hudson commented on HDFS-8311:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #190 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/190/])
HDFS-8311. DataStreamer.transfer() should timeout the socket InputStream. 
(Esteban Gutierrez via Yongjun Zhang) (yzhang: rev 
730f9930a48259f34e48404aee51e8d641cc3d36)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java


> DataStreamer.transfer() should timeout the socket InputStream.
> --
>
> Key: HDFS-8311
> URL: https://issues.apache.org/jira/browse/HDFS-8311
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Esteban Gutierrez
>Assignee: Esteban Gutierrez
> Fix For: 2.8.0
>
> Attachments: 
> 0001-HDFS-8311-DataStreamer.transfer-should-timeout-the-s.patch, 
> HDFS-8311.001.patch
>
>
> While validating some HA failure modes we found that HDFS clients can take a 
> long time to recover, or sometimes don't recover at all, since we don't set 
> up the socket timeout on the InputStream:
> {code}
> private void transfer () { ...
> ...
>  OutputStream unbufOut = NetUtils.getOutputStream(sock, writeTimeout);
>  InputStream unbufIn = NetUtils.getInputStream(sock);
> ...
> }
> {code}
> The InputStream should have its own timeout in the same way as the 
> OutputStream.
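NetUtils also has a timeout-taking getInputStream overload, so the symmetric fix is a one-line change; the readTimeout variable below is an assumption about where the value would come from:

{code}
// Sketch: give the input side the same protection as the output side.
OutputStream unbufOut = NetUtils.getOutputStream(sock, writeTimeout);
InputStream unbufIn = NetUtils.getInputStream(sock, readTimeout); // was getInputStream(sock)
{code}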



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8326) Documentation about when checkpoints are run is out of date

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1453#comment-1453
 ] 

Hudson commented on HDFS-8326:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #190 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/190/])
HDFS-8326. Documentation about when checkpoints are run is out of date. (Misty 
Stanley-Jones via xyao) (xyao: rev d0e75e60fb16ffd6c95648a06ff3958722f71e4d)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md


> Documentation about when checkpoints are run is out of date
> ---
>
> Key: HDFS-8326
> URL: https://issues.apache.org/jira/browse/HDFS-8326
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.3.0
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Fix For: 2.8.0
>
> Attachments: HDFS-8326.001.patch, HDFS-8326.002.patch, 
> HDFS-8326.003.patch, HDFS-8326.004.patch, HDFS-8326.patch
>
>
> Apparently checkpointing by interval and by transaction count are both 
> supported in at least HDFS 2.3, but the documentation does not reflect this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5640) Add snapshot methods to FileContext.

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536632#comment-14536632
 ] 

Hudson commented on HDFS-5640:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #190 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/190/])
HDFS-5640. Add snapshot methods to FileContext. Contributed by Rakesh R. 
(cnauroth: rev 26f61d41df9e90a5053d9265f535cc492196f2a5)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestFileContextSnapshot.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFs.java


> Add snapshot methods to FileContext.
> 
>
> Key: HDFS-5640
> URL: https://issues.apache.org/jira/browse/HDFS-5640
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, snapshots
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Chris Nauroth
>Assignee: Rakesh R
> Fix For: 2.8.0
>
> Attachments: HDFS-5640-001.patch, HDFS-5640-002.patch, 
> HDFS-5640-003.patch, HDFS-5640-004.patch, HDFS-5640-005.patch, 
> HDFS-5640-007.patch, HDFS-5640-007.patch
>
>
> Currently, methods related to HDFS snapshots are defined on {{FileSystem}}.  
> For feature parity, these methods need to be added to {{FileContext}}.  This 
> would also require updating {{AbstractFileSystem}} and the {{Hdfs}} subclass.
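Once parity exists, client code can stay on the FileContext API throughout. A usage sketch (the snapshot method names mirror the existing FileSystem ones; the path is illustrative and the directory must already be snapshottable):

{code}
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;

public class SnapshotParityExample {
  public static void main(String[] args) throws Exception {
    FileContext fc = FileContext.getFileContext(); // picks up default config
    Path dir = new Path("/data/dir");              // illustrative path
    Path snapshotRoot = fc.createSnapshot(dir, "s1");
    fc.renameSnapshot(dir, "s1", "s2");
    fc.deleteSnapshot(dir, "s2");
    System.out.println("snapshot root was " + snapshotRoot);
  }
}
{code}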



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6757) Simplify lease manager with INodeID

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536636#comment-14536636
 ] 

Hudson commented on HDFS-6757:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #190 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/190/])
HDFS-6757. Simplify lease manager with INodeID. Contributed by Haohui Mai. 
(wheat9: rev 00fe1ed3a4b3ee35fe24be257ec36445d2f44d63)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeSymlink.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NameNodeAdapter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestGetBlockLocations.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectoryWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/AbstractINodeDiffList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestINodeFileUnderConstructionWithSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDiskspaceQuotaUpdate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirDeleteOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectorySnapshottableFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLease.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestLeaseManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeMap.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java
Add missing entry in CHANGES.txt for HDFS-6757. (wheat9: rev 
3becc3af8382caed2c3bf941f8fed6daf6e7bc26)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Simplify lease manager with INodeID
> ---
>
> Key: HDFS-6757
> URL: https://issues.apache.org/jira/browse/HDFS-6757
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.8.0
>
> Attachments: HDFS-6757.000.patch, HDFS-6757.001.patch, 
> HDFS-6757.002.patch, HDFS-6757.003.patch, HDFS-6757.004.patch, 
> HDFS-6757.005.patch, HDFS-6757.006.patch, HDFS-6757.007.patch, 
> HDFS-6757.008.patch, HDFS-6757.009.patch, HDFS-6757.010.patch, 
> HDFS-6757.011.patch, HDFS-6757.012.patch, HDFS-6757.013.patch, 
> HDFS-6757.014.patch, HDFS-6757.015.patch, HDFS-6757.016.patch, 
> HDFS-6757.017.patch
>
>
> Currently the lease manager records leases based on path instead of inode 
> ids. Therefore, the lease manager needs to carefully keep track of the path 
> of active leases during renames and deletes. This can be a non-trivial task.
> This jira proposes to simplify the logic by tracking leases using inode IDs 
> instead of paths.
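A toy model of the data-structure change (names are illustrative, not the LeaseManager internals):

{code}
import java.util.HashMap;
import java.util.Map;

// Sketch: key leases by the immutable inode id instead of the mutable path.
// Renames then require no lease bookkeeping at all, which is the point.
class LeaseTable<L> {
  private final Map<Long, L> leasesById = new HashMap<Long, L>();

  synchronized void add(long inodeId, L lease) { leasesById.put(inodeId, lease); }
  synchronized L get(long inodeId)             { return leasesById.get(inodeId); }
  synchronized L remove(long inodeId)          { return leasesById.remove(inodeId); }
  // Note there is deliberately no rename(oldPath, newPath) method here.
}
{code}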



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8245) Standby namenode doesn't process DELETED_BLOCK if the add block request is in edit log.

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536640#comment-14536640
 ] 

Hudson commented on HDFS-8245:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #190 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/190/])
HDFS-8245. Standby namenode doesn't process DELETED_BLOCK if the addblock 
request is in edit log. Contributed by Rushabh S Shah. (kihwal: rev 
2d4ae3d18bc530fa9f81ee616db8af3395705fb9)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockReplacement.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencing.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java


> Standby namenode doesn't process DELETED_BLOCK if the add block request is in 
> edit log.
> ---
>
> Key: HDFS-8245
> URL: https://issues.apache.org/jira/browse/HDFS-8245
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>  Labels: BB2015-05-TBR
> Fix For: 2.7.1
>
> Attachments: HDFS-8245-1.patch, HDFS-8245.patch
>
>
> The following series of events happened on the Standby namenode:
> 2015-04-09 07:47:21,735 \[Edit log tailer] INFO ha.EditLogTailer: Triggering 
> log roll on remote NameNode Active Namenode (ANN)
> 2015-04-09 07:58:01,858 \[Edit log tailer] INFO ha.EditLogTailer: Triggering 
> log roll on remote NameNode ANN
> The following series of events happened on the Active Namenode:
> 2015-04-09 07:47:21,747 \[IPC Server handler 99 on 8020] INFO 
> namenode.FSNamesystem: Roll Edit Log from Standby NN (SNN)
> 2015-04-09 07:58:01,868 \[IPC Server handler 18 on 8020] INFO 
> namenode.FSNamesystem: Roll Edit Log from SNN
> The following series of events happened on datanode ( {color:red} datanodeA 
> {color}):
> 2015-04-09 07:52:15,817 \[DataXceiver for client 
> DFSClient_attempt_1428022041757_102831_r_000107_0_1139131345_1 at /:51078 
> \[Receiving block 
> BP-595383232--1360869396230:blk_1570321882_1102029183867]] INFO 
> datanode.DataNode: Receiving 
> BP-595383232--1360869396230:blk_1570321882_1102029183867 src: 
> /client:51078 dest: /{color:red}datanodeA:1004{color}
> 2015-04-09 07:52:15,969 \[PacketResponder: 
> BP-595383232--1360869396230:blk_1570321882_1102029183867, 
> type=HAS_DOWNSTREAM_IN_PIPELINE] INFO DataNode.clienttrace: src: 
> /client:51078, dest: /{color:red}datanodeA:1004{color}, bytes: 20, op: 
> HDFS_WRITE, cliID: 
> DFSClient_attempt_1428022041757_102831_r_000107_0_1139131345_1, offset: 0, 
> srvID: 356a8a98-826f-446d-8f4c-ce288c1f0a75, blockid: 
> BP-595383232--1360869396230:blk_1570321882_1102029183867, duration: 
> 148948385
> 2015-04-09 07:52:15,969 \[PacketResponder: 
> BP-595383232--1360869396230:blk_1570321882_1102029183867, 
> type=HAS_DOWNSTREAM_IN_PIPELINE] INFO datanode.DataNode: PacketResponder: 
> BP-595383232--1360869396230:blk_1570321882_1102029183867, 
> type=HAS_DOWNSTREAM_IN_PIPELINE terminating
> 2015-04-09 07:52:25,970 \[DataXceiver for client /{color:red} 
> {color}:52827 \[Copying block 
> BP-595383232--1360869396230:blk_1570321882_1102029183867]] INFO 
> datanode.DataNode: Copied 
> BP-595383232--1360869396230:blk_1570321882_1102029183867 to 
> <{color:red}datanodeB{color}>:52827
> 2015-04-09 07:52:28,187 \[DataNode:   heartbeating to ANN:8020] INFO 
> impl.FsDatasetAsyncDiskService: Scheduling blk_1570321882_1102029183867 file 
> /blk_1570321882 for deletion
> 2015-04-09 07:52:28,188 \[Async disk worker #1482 for volume ] INFO 
> impl.FsDatasetAsyncDiskService: Deleted BP-595383232--1360869396230 
> blk_1570321882_1102029183867 file /blk_1570321882
> Then we failed over for the upgrade, and the standby became active.
> When we ran the ls command on this file, we got the following exception:
> 15/04/09 22:07:39 WARN hdfs.BlockReaderFactory: I/O error constructing remote 
> block reader.
> java.io.IOException: Got error for OP_READ_BLOCK, self=/client:32947, 
> remote={color:red}datanodeA:1004{color}, for file , for pool 
> BP-595383232--1360869396230 block 1570321882_1102029183867
> at 
> org.apache.hadoop.hdfs.RemoteBlockReader2.checkSuccess(RemoteBlockReader2.java:445)
> at 
> org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:410)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:815)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:693)
> at 
> org.apache.hadoop.hdfs.BlockRea

[jira] [Commented] (HDFS-8326) Documentation about when checkpoints are run is out of date

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536606#comment-14536606
 ] 

Hudson commented on HDFS-8326:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #180 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/180/])
HDFS-8326. Documentation about when checkpoints are run is out of date. (Misty 
Stanley-Jones via xyao) (xyao: rev d0e75e60fb16ffd6c95648a06ff3958722f71e4d)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md


> Documentation about when checkpoints are run is out of date
> ---
>
> Key: HDFS-8326
> URL: https://issues.apache.org/jira/browse/HDFS-8326
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.3.0
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Fix For: 2.8.0
>
> Attachments: HDFS-8326.001.patch, HDFS-8326.002.patch, 
> HDFS-8326.003.patch, HDFS-8326.004.patch, HDFS-8326.patch
>
>
> Apparently checkpointing by interval and by transaction count are both 
> supported in at least HDFS 2.3, but the documentation does not reflect this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8311) DataStreamer.transfer() should timeout the socket InputStream.

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536604#comment-14536604
 ] 

Hudson commented on HDFS-8311:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #180 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/180/])
HDFS-8311. DataStreamer.transfer() should timeout the socket InputStream. 
(Esteban Gutierrez via Yongjun Zhang) (yzhang: rev 
730f9930a48259f34e48404aee51e8d641cc3d36)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> DataStreamer.transfer() should timeout the socket InputStream.
> --
>
> Key: HDFS-8311
> URL: https://issues.apache.org/jira/browse/HDFS-8311
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Esteban Gutierrez
>Assignee: Esteban Gutierrez
> Fix For: 2.8.0
>
> Attachments: 
> 0001-HDFS-8311-DataStreamer.transfer-should-timeout-the-s.patch, 
> HDFS-8311.001.patch
>
>
> While validating some HA failure modes we found that HDFS clients can take a 
> long time to recover, or sometimes don't recover at all, since we don't set 
> up the socket timeout on the InputStream:
> {code}
> private void transfer () { ...
> ...
>  OutputStream unbufOut = NetUtils.getOutputStream(sock, writeTimeout);
>  InputStream unbufIn = NetUtils.getInputStream(sock);
> ...
> }
> {code}
> The InputStream should have its own timeout in the same way as the 
> OutputStream.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8274) NFS configuration nfs.dump.dir not working

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536600#comment-14536600
 ] 

Hudson commented on HDFS-8274:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #180 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/180/])
HDFS-8274. NFS configuration nfs.dump.dir not working (Contributed by Ajith S) 
(arp: rev cd6b26cce7457d08346b9d90a5f2f333ba4202d8)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/conf/NfsConfigKeys.java


> NFS configuration nfs.dump.dir not working
> --
>
> Key: HDFS-8274
> URL: https://issues.apache.org/jira/browse/HDFS-8274
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.0
>Reporter: Ajith S
>Assignee: Ajith S
>  Labels: BB2015-05-RFC
> Fix For: 2.8.0
>
> Attachments: HDFS-8274.patch
>
>
> As per the document 
> http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html
> we can configure
> {quote} 
> nfs.dump.dir
> {quote}
> as the nfs file dump directory, but using this configuration in 
> *hdfs-site.xml* doesn't work; when the nfs gateway is started, the default 
> location is used, i.e. \tmp\.hdfs-nfs
> The reason is that the key expected in *NfsConfigKeys.java* is
> {code}
> public static final String DFS_NFS_FILE_DUMP_DIR_KEY = "nfs.file.dump.dir";
> {code}
> We can change it to *nfs.dump.dir* instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8346) libwebhdfs build fails during link due to unresolved external symbols.

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536602#comment-14536602
 ] 

Hudson commented on HDFS-8346:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #180 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/180/])
HDFS-8346. libwebhdfs build fails during link due to unresolved external 
symbols. Contributed by Chris Nauroth. (wheat9: rev 
f4ebbc6afc1297dced54bd2bd671e587c4ceb2fc)
* hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/CMakeLists.txt
* hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_http_client.c
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_http_client.h


> libwebhdfs build fails during link due to unresolved external symbols.
> --
>
> Key: HDFS-8346
> URL: https://issues.apache.org/jira/browse/HDFS-8346
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.6.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 2.8.0
>
> Attachments: HDFS-8346.001.patch
>
>
> The libwebhdfs build is currently broken due to various unresolved external 
> symbols during link.  Multiple patches have introduced a few different forms 
> of this breakage.  See comments for full details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7894) Rolling upgrade readiness is not updated in jmx until query command is issued.

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536597#comment-14536597
 ] 

Hudson commented on HDFS-7894:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #180 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/180/])
HDFS-7894. Rolling upgrade readiness is not updated in jmx until query command 
is issued. Contributed by Brahma Reddy Battula. (kihwal: rev 
6f622672b62aa8d719060063ef0e47480cdc8655)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Rolling upgrade readiness is not updated in jmx until query command is issued.
> --
>
> Key: HDFS-7894
> URL: https://issues.apache.org/jira/browse/HDFS-7894
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Brahma Reddy Battula
>Priority: Critical
>  Labels: BB2015-05-TBR
> Fix For: 2.7.1
>
> Attachments: HDFS-7894-002.patch, HDFS-7894-003.patch, HDFS-7894.patch
>
>
> When an hdfs rolling upgrade is started and a rollback image is 
> created/uploaded, the active NN does not update its {{rollingUpgradeInfo}} 
> until it receives a query command via RPC. This results in inconsistent info 
> being shown in the web UI and its jmx page.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7559) Create unit test to automatically compare HDFS related classes and hdfs-default.xml

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536607#comment-14536607
 ] 

Hudson commented on HDFS-7559:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #180 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/180/])
HDFS-7559. Create unit test to automatically compare HDFS related classes and 
hdfs-default.xml. (Ray Chiang via asuresh) (Arun Suresh: rev 
3cefc02af73faa12a6edce904b98ba543167bec5)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestHdfsConfigFields.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Create unit test to automatically compare HDFS related classes and 
> hdfs-default.xml
> ---
>
> Key: HDFS-7559
> URL: https://issues.apache.org/jira/browse/HDFS-7559
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Minor
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HDFS-7559.001.patch, HDFS-7559.002.patch, 
> HDFS-7559.003.patch, HDFS-7559.004.patch
>
>
> Create a unit test that will automatically compare the fields in the various 
> HDFS-related classes and hdfs-default.xml. It should throw an error if a 
> property is missing in either the class or the file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8327) Simplify quota calculations for snapshots and truncate

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536603#comment-14536603
 ] 

Hudson commented on HDFS-8327:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #180 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/180/])
HDFS-8327. Compute storage type quotas in 
INodeFile.computeQuotaDeltaForTruncate(). Contributed by Haohui Mai. (wheat9: 
rev 02a4a22b9c0e22c2e7dd6ec85edd5c5a167fe19f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCommitBlockSynchronization.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestTruncateQuotaUpdate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestFileWithSnapshotFeature.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java


> Simplify quota calculations for snapshots and truncate
> --
>
> Key: HDFS-8327
> URL: https://issues.apache.org/jira/browse/HDFS-8327
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-8327.000.patch, HDFS-8327.001.patch, 
> HDFS-8327.002.patch, HDFS-8327.003.patch, HDFS-8327.004.patch
>
>
> To simplify the code, {{INodeFile.computeQuotaDeltaForTruncate()}} can 
> compute the storage type quotas as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8097) TestFileTruncate is failing intermittently

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536573#comment-14536573
 ] 

Hudson commented on HDFS-8097:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #180 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/180/])
HDFS-8097. TestFileTruncate is failing intermittently. (Contributed by Rakesh 
R) (arp: rev 59995cec4ad9efcef7d4641375ca3eb40e2429ef)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> TestFileTruncate is failing intermittently
> --
>
> Key: HDFS-8097
> URL: https://issues.apache.org/jira/browse/HDFS-8097
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Rakesh R
>Assignee: Rakesh R
>  Labels: BB2015-05-RFC
> Fix For: 2.8.0
>
> Attachments: HDFS-8097-001.patch, HDFS-8097-002.patch, 
> HDFS-8097-003.patch
>
>
> {code}
> java.lang.AssertionError: Bad disk space usage expected:<45> but was:<12>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncate4Symlink(TestFileTruncate.java:1158)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
>   at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8113) Add check for null BlockCollection pointers in BlockInfoContiguous structures

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536615#comment-14536615
 ] 

Hudson commented on HDFS-8113:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #180 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/180/])
HDFS-8113. Add check for null BlockCollection pointers in BlockInfoContiguous 
structures (Chengbing Liu via Colin P. McCabe) (cmccabe: rev 
f523e963e4d88e4e459352387c6efeab59e7a809)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockInfo.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java


> Add check for null BlockCollection pointers in BlockInfoContiguous structures
> -
>
> Key: HDFS-8113
> URL: https://issues.apache.org/jira/browse/HDFS-8113
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Chengbing Liu
>Assignee: Chengbing Liu
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-8113.02.patch, HDFS-8113.patch
>
>
> The following copy constructor can throw NullPointerException if {{bc}} is 
> null.
> {code}
>   protected BlockInfoContiguous(BlockInfoContiguous from) {
> this(from, from.bc.getBlockReplication());
> this.bc = from.bc;
>   }
> {code}
> We have observed that some DataNodes keep failing to send block reports to 
> the NameNode. The stack trace is as follows. Though we are not using the 
> latest version, the problem still exists.
> {quote}
> 2015-03-08 19:28:13,442 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> RemoteException in offerService
> org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> java.lang.NullPointerException
> at org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.(BlockInfo.java:80)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockToMarkCorrupt.(BlockManager.java:1696)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.checkReplicaCorrupt(BlockManager.java:2185)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReportedBlock(BlockManager.java:2047)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.reportDiff(BlockManager.java:1950)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:1823)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:1750)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.blockReport(NameNodeRpcServer.java:1069)
> at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.blockReport(DatanodeProtocolServerSideTranslatorPB.java:152)
> at 
> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:26382)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1623)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7433) Optimize performance of DatanodeManager's node map

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536580#comment-14536580
 ] 

Hudson commented on HDFS-7433:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #180 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/180/])
HDFS-7433. Optimize performance of DatanodeManager's node map. Contributed by 
Daryn Sharp. (kihwal: rev 7a7960be41c32f20ffec9fea811878b113da62db)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Optimize performance of DatanodeManager's node map
> --
>
> Key: HDFS-7433
> URL: https://issues.apache.org/jira/browse/HDFS-7433
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-7433.patch, HDFS-7433.patch, HDFS-7433.patch, 
> HDFS-7433.patch
>
>
> The datanode map is currently a {{TreeMap}}.  For many thousands of 
> datanodes, tree lookups are ~10X more expensive than {{HashMap}} lookups.  
> Insertions and removals are up to 100X more expensive.
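A quick way to sanity-check the order of magnitude on any JVM (plain java.util maps, nothing HDFS-specific; absolute numbers will vary):

{code}
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Toy benchmark: string-keyed lookups in TreeMap (O(log n), comparisons)
// versus HashMap (O(1), hashing), at a scale of ~100k datanode keys.
public class MapLookupBench {
  public static void main(String[] args) {
    Map<String, Integer> tree = new TreeMap<String, Integer>();
    Map<String, Integer> hash = new HashMap<String, Integer>();
    for (int i = 0; i < 100_000; i++) {
      String key = "datanode-uuid-" + i;
      tree.put(key, i);
      hash.put(key, i);
    }
    for (Map<String, Integer> m : Arrays.asList(tree, hash)) {
      long start = System.nanoTime();
      long sum = 0;
      for (int i = 0; i < 100_000; i++) {
        sum += m.get("datanode-uuid-" + i);
      }
      System.out.println(m.getClass().getSimpleName() + ": "
          + (System.nanoTime() - start) / 1_000_000 + " ms (checksum " + sum + ")");
    }
  }
}
{code}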



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5640) Add snapshot methods to FileContext.

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536570#comment-14536570
 ] 

Hudson commented on HDFS-5640:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #180 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/180/])
HDFS-5640. Add snapshot methods to FileContext. Contributed by Rakesh R. 
(cnauroth: rev 26f61d41df9e90a5053d9265f535cc492196f2a5)
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestFileContextSnapshot.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Add snapshot methods to FileContext.
> 
>
> Key: HDFS-5640
> URL: https://issues.apache.org/jira/browse/HDFS-5640
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, snapshots
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Chris Nauroth
>Assignee: Rakesh R
> Fix For: 2.8.0
>
> Attachments: HDFS-5640-001.patch, HDFS-5640-002.patch, 
> HDFS-5640-003.patch, HDFS-5640-004.patch, HDFS-5640-005.patch, 
> HDFS-5640-007.patch, HDFS-5640-007.patch
>
>
> Currently, methods related to HDFS snapshots are defined on {{FileSystem}}.  
> For feature parity, these methods need to be added to {{FileContext}}.  This 
> would also require updating {{AbstractFileSystem}} and the {{Hdfs}} subclass.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8284) Update documentation about how to use HTrace with HDFS

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536582#comment-14536582
 ] 

Hudson commented on HDFS-8284:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #180 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/180/])
HDFS-8284. Update documentation about how to use HTrace with HDFS (Masatake 
Iwasaki via Colin P. McCabe) (cmccabe: rev 
8f7c2364d7254a1d987b095ba442bf20727796f8)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-common-project/hadoop-common/src/site/markdown/Tracing.md
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml


> Update documentation about how to use HTrace with HDFS
> --
>
> Key: HDFS-8284
> URL: https://issues.apache.org/jira/browse/HDFS-8284
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0
>
> Attachments: HDFS-8284.001.patch, HDFS-8284.002.patch, 
> HDFS-8284.003.patch
>
>
> Tracing originating in the DFSClient uses configuration keys prefixed with 
> "dfs.client.htrace" after HDFS-8213. Server-side tracing uses conf keys 
> prefixed with "dfs.htrace".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6757) Simplify lease manager with INodeID

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536574#comment-14536574
 ] 

Hudson commented on HDFS-6757:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #180 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/180/])
HDFS-6757. Simplify lease manager with INodeID. Contributed by Haohui Mai. 
(wheat9: rev 00fe1ed3a4b3ee35fe24be257ec36445d2f44d63)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDiskspaceQuotaUpdate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeSymlink.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/AbstractINodeDiffList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NameNodeAdapter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestLeaseManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectoryWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLease.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestINodeFileUnderConstructionWithSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeMap.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectorySnapshottableFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestGetBlockLocations.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirDeleteOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSNamesystem.java
Add missing entry in CHANGES.txt for HDFS-6757. (wheat9: rev 
3becc3af8382caed2c3bf941f8fed6daf6e7bc26)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Simplify lease manager with INodeID
> ---
>
> Key: HDFS-6757
> URL: https://issues.apache.org/jira/browse/HDFS-6757
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.8.0
>
> Attachments: HDFS-6757.000.patch, HDFS-6757.001.patch, 
> HDFS-6757.002.patch, HDFS-6757.003.patch, HDFS-6757.004.patch, 
> HDFS-6757.005.patch, HDFS-6757.006.patch, HDFS-6757.007.patch, 
> HDFS-6757.008.patch, HDFS-6757.009.patch, HDFS-6757.010.patch, 
> HDFS-6757.011.patch, HDFS-6757.012.patch, HDFS-6757.013.patch, 
> HDFS-6757.014.patch, HDFS-6757.015.patch, HDFS-6757.016.patch, 
> HDFS-6757.017.patch
>
>
> Currently the lease manager records leases based on path instead of inode 
> ids. Therefore, the lease manager needs to carefully keep track of the path 
> of active leases during renames and deletes. This can be a non-trivial task.
> This jira proposes to simplify the logic by tracking leases using inode IDs 
> instead of paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8340) Fix NFS documentation of nfs.wtmax

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536584#comment-14536584
 ] 

Hudson commented on HDFS-8340:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #180 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/180/])
HDFS-8340. Fix NFS documentation of nfs.wtmax. (Contributed by Ajith S) (arp: 
rev a2d40bced9f793c8c4193f1447425ca7f3f8f357)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsNfsGateway.md
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Fix NFS documentation of nfs.wtmax
> --
>
> Key: HDFS-8340
> URL: https://issues.apache.org/jira/browse/HDFS-8340
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, nfs
>Affects Versions: 2.7.0
>Reporter: Ajith S
>Assignee: Ajith S
>Priority: Minor
>  Labels: BB2015-05-RFC
> Fix For: 2.8.0
>
> Attachments: HDFS-8340.patch
>
>
> According to documentation
> http://hadoop.apache.org/docs/r2.7.0/hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html
> bq. For larger data transfer size, one needs to update “nfs.rtmax” and 
> “nfs.rtmax” in hdfs-site.xml.
> nfs.rtmax is mentioned twice; instead it should be “nfs.rtmax” and 
> “nfs.wtmax”.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8245) Standby namenode doesn't process DELETED_BLOCK if the add block request is in edit log.

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536578#comment-14536578
 ] 

Hudson commented on HDFS-8245:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #180 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/180/])
HDFS-8245. Standby namenode doesn't process DELETED_BLOCK if the addblock 
request is in edit log. Contributed by Rushabh S Shah. (kihwal: rev 
2d4ae3d18bc530fa9f81ee616db8af3395705fb9)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencing.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockReplacement.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Standby namenode doesn't process DELETED_BLOCK if the add block request is in 
> edit log.
> ---
>
> Key: HDFS-8245
> URL: https://issues.apache.org/jira/browse/HDFS-8245
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>  Labels: BB2015-05-TBR
> Fix For: 2.7.1
>
> Attachments: HDFS-8245-1.patch, HDFS-8245.patch
>
>
> The following series of events happened on the Standby namenode:
> 2015-04-09 07:47:21,735 \[Edit log tailer] INFO ha.EditLogTailer: Triggering 
> log roll on remote NameNode Active Namenode (ANN)
> 2015-04-09 07:58:01,858 \[Edit log tailer] INFO ha.EditLogTailer: Triggering 
> log roll on remote NameNode ANN
> The following series of events happened on the Active Namenode:
> 2015-04-09 07:47:21,747 \[IPC Server handler 99 on 8020] INFO 
> namenode.FSNamesystem: Roll Edit Log from Standby NN (SNN)
> 2015-04-09 07:58:01,868 \[IPC Server handler 18 on 8020] INFO 
> namenode.FSNamesystem: Roll Edit Log from SNN
> The following series of events happened on datanode ( {color:red} datanodeA 
> {color}):
> 2015-04-09 07:52:15,817 \[DataXceiver for client 
> DFSClient_attempt_1428022041757_102831_r_000107_0_1139131345_1 at /:51078 
> \[Receiving block 
> BP-595383232--1360869396230:blk_1570321882_1102029183867]] INFO 
> datanode.DataNode: Receiving 
> BP-595383232--1360869396230:blk_1570321882_1102029183867 src: 
> /client:51078 dest: /{color:red}datanodeA:1004{color}
> 2015-04-09 07:52:15,969 \[PacketResponder: 
> BP-595383232--1360869396230:blk_1570321882_1102029183867, 
> type=HAS_DOWNSTREAM_IN_PIPELINE] INFO DataNode.clienttrace: src: 
> /client:51078, dest: /{color:red}datanodeA:1004{color}, bytes: 20, op: 
> HDFS_WRITE, cliID: 
> DFSClient_attempt_1428022041757_102831_r_000107_0_1139131345_1, offset: 0, 
> srvID: 356a8a98-826f-446d-8f4c-ce288c1f0a75, blockid: 
> BP-595383232--1360869396230:blk_1570321882_1102029183867, duration: 
> 148948385
> 2015-04-09 07:52:15,969 \[PacketResponder: 
> BP-595383232--1360869396230:blk_1570321882_1102029183867, 
> type=HAS_DOWNSTREAM_IN_PIPELINE] INFO datanode.DataNode: PacketResponder: 
> BP-595383232--1360869396230:blk_1570321882_1102029183867, 
> type=HAS_DOWNSTREAM_IN_PIPELINE terminating
> 2015-04-09 07:52:25,970 \[DataXceiver for client /{color:red} 
> {color}:52827 \[Copying block 
> BP-595383232--1360869396230:blk_1570321882_1102029183867]] INFO 
> datanode.DataNode: Copied 
> BP-595383232--1360869396230:blk_1570321882_1102029183867 to 
> <{color:red}datanodeB{color}>:52827
> 2015-04-09 07:52:28,187 \[DataNode:   heartbeating to ANN:8020] INFO 
> impl.FsDatasetAsyncDiskService: Scheduling blk_1570321882_1102029183867 file 
> /blk_1570321882 for deletion
> 2015-04-09 07:52:28,188 \[Async disk worker #1482 for volume ] INFO 
> impl.FsDatasetAsyncDiskService: Deleted BP-595383232--1360869396230 
> blk_1570321882_1102029183867 file /blk_1570321882
> Then we failed over for an upgrade, and the standby became active.
> When we ran an ls command on this file, we got the following exception:
> 15/04/09 22:07:39 WARN hdfs.BlockReaderFactory: I/O error constructing remote 
> block reader.
> java.io.IOException: Got error for OP_READ_BLOCK, self=/client:32947, 
> remote={color:red}datanodeA:1004{color}, for file , for pool 
> BP-595383232--1360869396230 block 1570321882_1102029183867
> at 
> org.apache.hadoop.hdfs.RemoteBlockReader2.checkSuccess(RemoteBlockReader2.java:445)
> at 
> org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:410)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:815)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:693)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory

[jira] [Commented] (HDFS-8256) "-storagepolicies , -blockId ,-replicaDetails " options are missed out in usage and from documentation

2015-05-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536568#comment-14536568
 ] 

Hadoop QA commented on HDFS-8256:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 34s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 29s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 32s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   2m 58s | Site still builds. |
| {color:red}-1{color} | checkstyle |   2m 19s | The applied patch generated  
608 new checkstyle issues (total was 1046, now 1041). |
| {color:red}-1{color} | whitespace |   0m  4s | The patch has 34  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  3s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native |   3m 12s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 165m 12s | Tests failed in hadoop-hdfs. |
| | | 213m 55s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.tools.TestHdfsConfigFields |
|   | hadoop.tracing.TestTraceAdmin |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12731700/HDFS-8256.3.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle site |
| git revision | trunk / 02a4a22 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/10900/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10900/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10900/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10900/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10900/console |


This message was automatically generated.

> "-storagepolicies , -blockId ,-replicaDetails " options are missed out in 
> usage and from documentation
> --
>
> Key: HDFS-8256
> URL: https://issues.apache.org/jira/browse/HDFS-8256
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Reporter: J.Andreina
>Assignee: J.Andreina
>  Labels: BB2015-05-TBR
> Attachments: HDFS-8256.2.patch, HDFS-8256.3.patch, 
> HDFS-8256_Trunk.1.patch
>
>
> "-storagepolicies , -blockId ,-replicaDetails " options are missed out in 
> usage and from documentation.
> {noformat}
> Usage: hdfs fsck <path> [-list-corruptfileblocks | [-move | -delete | 
> -openforwrite] [-files [-blocks [-locations | -racks]]]] [-includeSnapshots] 
> [-showprogress]
> {noformat}
> Found as part of HDFS-8108.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8311) DataStreamer.transfer() should timeout the socket InputStream.

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536553#comment-14536553
 ] 

Hudson commented on HDFS-8311:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2120 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2120/])
HDFS-8311. DataStreamer.transfer() should timeout the socket InputStream. 
(Esteban Gutierrez via Yongjun Zhang) (yzhang: rev 
730f9930a48259f34e48404aee51e8d641cc3d36)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> DataStreamer.transfer() should timeout the socket InputStream.
> --
>
> Key: HDFS-8311
> URL: https://issues.apache.org/jira/browse/HDFS-8311
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Esteban Gutierrez
>Assignee: Esteban Gutierrez
> Fix For: 2.8.0
>
> Attachments: 
> 0001-HDFS-8311-DataStreamer.transfer-should-timeout-the-s.patch, 
> HDFS-8311.001.patch
>
>
> While validating some HA failure modes we found that HDFS clients can take a 
> long time to recover, or sometimes don't recover at all, since we don't set 
> up the socket timeout in the InputStream:
> {code}
> private void transfer() { ...
> ...
>  OutputStream unbufOut = NetUtils.getOutputStream(sock, writeTimeout);
>  InputStream unbufIn = NetUtils.getInputStream(sock);
> ...
> }
> {code}
> The InputStream should have its own timeout in the same way as the 
> OutputStream.
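> A minimal sketch of that change ({{readTimeout}} is an assumed counterpart 
> to {{writeTimeout}}; NetUtils already offers a timeout-aware overload):
> {code}
> // Give the input stream its own timeout, mirroring the output stream.
> OutputStream unbufOut = NetUtils.getOutputStream(sock, writeTimeout);
> InputStream unbufIn = NetUtils.getInputStream(sock, readTimeout);
> {code}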



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8274) NFS configuration nfs.dump.dir not working

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536549#comment-14536549
 ] 

Hudson commented on HDFS-8274:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2120 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2120/])
HDFS-8274. NFS configuration nfs.dump.dir not working (Contributed by Ajith S) 
(arp: rev cd6b26cce7457d08346b9d90a5f2f333ba4202d8)
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/conf/NfsConfigKeys.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> NFS configuration nfs.dump.dir not working
> --
>
> Key: HDFS-8274
> URL: https://issues.apache.org/jira/browse/HDFS-8274
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.0
>Reporter: Ajith S
>Assignee: Ajith S
>  Labels: BB2015-05-RFC
> Fix For: 2.8.0
>
> Attachments: HDFS-8274.patch
>
>
> As per the document 
> http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html
> we can configure
> {quote} 
> nfs.dump.dir
> {quote}
> as the nfs file dump directory, but using this configuration in 
> *hdfs-site.xml* doesn't work: when the nfs gateway is started, the default 
> location is used instead, i.e. /tmp/.hdfs-nfs.
> The reason is the key expected in *NfsConfigKeys.java*:
> {code}
> public static final String DFS_NFS_FILE_DUMP_DIR_KEY = "nfs.file.dump.dir";
> {code}
> We can change it to *nfs.dump.dir* instead.
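> With that change the constant would read (a one-line sketch of the proposed 
> fix):
> {code}
> // Align the constant with the documented configuration key.
> public static final String DFS_NFS_FILE_DUMP_DIR_KEY = "nfs.dump.dir";
> {code}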



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7559) Create unit test to automatically compare HDFS related classes and hdfs-default.xml

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536556#comment-14536556
 ] 

Hudson commented on HDFS-7559:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2120 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2120/])
HDFS-7559. Create unit test to automatically compare HDFS related classes and 
hdfs-default.xml. (Ray Chiang via asuresh) (Arun Suresh: rev 
3cefc02af73faa12a6edce904b98ba543167bec5)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestHdfsConfigFields.java


> Create unit test to automatically compare HDFS related classes and 
> hdfs-default.xml
> ---
>
> Key: HDFS-7559
> URL: https://issues.apache.org/jira/browse/HDFS-7559
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Minor
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HDFS-7559.001.patch, HDFS-7559.002.patch, 
> HDFS-7559.003.patch, HDFS-7559.004.patch
>
>
> Create a unit test that will automatically compare the fields in the various 
> HDFS related classes and hdfs-default.xml. It should throw an error if a 
> property is missing in either the class or the file.
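> A minimal sketch of the idea (hypothetical class, not the committed 
> TestHdfsConfigFields; it assumes the keys live as public static final String 
> fields on DFSConfigKeys):
> {code}
> import java.lang.reflect.Field;
> import java.lang.reflect.Modifier;
> import java.util.HashSet;
> import java.util.Set;
> import javax.xml.parsers.DocumentBuilderFactory;
> import org.apache.hadoop.hdfs.DFSConfigKeys;
> import org.w3c.dom.Document;
> import org.w3c.dom.NodeList;
>
> public class ConfigFieldsCheck {
>   public static void main(String[] args) throws Exception {
>     // Collect every String constant named like a config key.
>     Set<String> classKeys = new HashSet<String>();
>     for (Field f : DFSConfigKeys.class.getFields()) {
>       if (f.getType() == String.class && Modifier.isStatic(f.getModifiers())
>           && f.getName().endsWith("_KEY")) {
>         classKeys.add((String) f.get(null));
>       }
>     }
>     // Collect every <name> entry declared in hdfs-default.xml.
>     Set<String> xmlKeys = new HashSet<String>();
>     Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
>         .parse(ConfigFieldsCheck.class.getResourceAsStream(
>             "/hdfs-default.xml"));
>     NodeList names = doc.getElementsByTagName("name");
>     for (int i = 0; i < names.getLength(); i++) {
>       xmlKeys.add(names.item(i).getTextContent().trim());
>     }
>     // A real test would fail here; this sketch just reports the diff.
>     for (String k : classKeys) {
>       if (!xmlKeys.contains(k)) System.out.println("missing in xml: " + k);
>     }
>     for (String k : xmlKeys) {
>       if (!classKeys.contains(k)) System.out.println("missing in class: " + k);
>     }
>   }
> }
> {code}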



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8346) libwebhdfs build fails during link due to unresolved external symbols.

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536551#comment-14536551
 ] 

Hudson commented on HDFS-8346:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2120 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2120/])
HDFS-8346. libwebhdfs build fails during link due to unresolved external 
symbols. Contributed by Chris Nauroth. (wheat9: rev 
f4ebbc6afc1297dced54bd2bd671e587c4ceb2fc)
* hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_http_client.h
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_http_client.c
* hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/CMakeLists.txt


> libwebhdfs build fails during link due to unresolved external symbols.
> --
>
> Key: HDFS-8346
> URL: https://issues.apache.org/jira/browse/HDFS-8346
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.6.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 2.8.0
>
> Attachments: HDFS-8346.001.patch
>
>
> The libwebhdfs build is currently broken due to various unresolved external 
> symbols during link.  Multiple patches have introduced a few different forms 
> of this breakage.  See comments for full details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8113) Add check for null BlockCollection pointers in BlockInfoContiguous structures

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536564#comment-14536564
 ] 

Hudson commented on HDFS-8113:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2120 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2120/])
HDFS-8113. Add check for null BlockCollection pointers in BlockInfoContiguous 
structures (Chengbing Liu via Colin P. McCabe) (cmccabe: rev 
f523e963e4d88e4e459352387c6efeab59e7a809)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockInfo.java


> Add check for null BlockCollection pointers in BlockInfoContiguous structures
> -
>
> Key: HDFS-8113
> URL: https://issues.apache.org/jira/browse/HDFS-8113
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Chengbing Liu
>Assignee: Chengbing Liu
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-8113.02.patch, HDFS-8113.patch
>
>
> The following copy constructor can throw NullPointerException if {{bc}} is 
> null.
> {code}
>   protected BlockInfoContiguous(BlockInfoContiguous from) {
> this(from, from.bc.getBlockReplication());
> this.bc = from.bc;
>   }
> {code}
> We have observed that some DataNodes keep failing to complete block reports 
> with the NameNode. The stack trace is as follows. Though we are not using the 
> latest version, the problem still exists.
> {quote}
> 2015-03-08 19:28:13,442 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> RemoteException in offerService
> org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> java.lang.NullPointerException
> at org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.(BlockInfo.java:80)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockToMarkCorrupt.(BlockManager.java:1696)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.checkReplicaCorrupt(BlockManager.java:2185)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReportedBlock(BlockManager.java:2047)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.reportDiff(BlockManager.java:1950)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:1823)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:1750)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.blockReport(NameNodeRpcServer.java:1069)
> at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.blockReport(DatanodeProtocolServerSideTranslatorPB.java:152)
> at 
> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:26382)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1623)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
> {quote}
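> One possible guard (a sketch only; because {{this(...)}} must remain the 
> first statement, the null check is folded into the delegation, and the 
> committed patch may differ):
> {code}
> import com.google.common.base.Preconditions;
>
> protected BlockInfoContiguous(BlockInfoContiguous from) {
>   // Fail fast with a descriptive message instead of an NPE deep in the
>   // block report path; checkNotNull returns its non-null argument.
>   this(from, Preconditions.checkNotNull(from.bc,
>       "BlockCollection is null for block %s", from).getBlockReplication());
>   this.bc = from.bc;
> }
> {code}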



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8326) Documentation about when checkpoints are run is out of date

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536555#comment-14536555
 ] 

Hudson commented on HDFS-8326:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2120 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2120/])
HDFS-8326. Documentation about when checkpoints are run is out of date. (Misty 
Stanley-Jones via xyao) (xyao: rev d0e75e60fb16ffd6c95648a06ff3958722f71e4d)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Documentation about when checkpoints are run is out of date
> ---
>
> Key: HDFS-8326
> URL: https://issues.apache.org/jira/browse/HDFS-8326
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.3.0
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Fix For: 2.8.0
>
> Attachments: HDFS-8326.001.patch, HDFS-8326.002.patch, 
> HDFS-8326.003.patch, HDFS-8326.004.patch, HDFS-8326.patch
>
>
> Apparently checkpointing by time interval and by transaction count are both 
> supported in at least HDFS 2.3, but the documentation does not reflect this.
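> For reference, both triggers are configurable in hdfs-site.xml; a minimal 
> sketch (the values shown are the usual 2.x defaults, given only as an 
> illustration):
> {code}
> <!-- Checkpoint every hour, or as soon as 1,000,000 uncheckpointed
>      transactions accumulate, whichever comes first. -->
> <property>
>   <name>dfs.namenode.checkpoint.period</name>
>   <value>3600</value>
> </property>
> <property>
>   <name>dfs.namenode.checkpoint.txns</name>
>   <value>1000000</value>
> </property>
> {code}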



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-219) Add md5sum facility in dfsshell

2015-05-09 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki resolved HDFS-219.
-
Resolution: Duplicate

Closing because of duplication. Please reopen if I am wrong.

> Add md5sum facility in dfsshell
> ---
>
> Key: HDFS-219
> URL: https://issues.apache.org/jira/browse/HDFS-219
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: zhangwei
>  Labels: newbie
>
> I think it would be useful to add md5sum (or another checksum utility) to 
> dfsshell, so the facility can verify files on HDFS. It can confirm a file's 
> integrity after copyFromLocal or copyToLocal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7433) Optimize performance of DatanodeManager's node map

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536529#comment-14536529
 ] 

Hudson commented on HDFS-7433:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2120 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2120/])
HDFS-7433. Optimize performance of DatanodeManager's node map. Contributed by 
Daryn Sharp. (kihwal: rev 7a7960be41c32f20ffec9fea811878b113da62db)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Optimize performance of DatanodeManager's node map
> --
>
> Key: HDFS-7433
> URL: https://issues.apache.org/jira/browse/HDFS-7433
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-7433.patch, HDFS-7433.patch, HDFS-7433.patch, 
> HDFS-7433.patch
>
>
> The datanode map is currently a {{TreeMap}}.  For many thousands of 
> datanodes, tree lookups are ~10X more expensive than a {{HashMap}}.  
> Insertions and removals are up to 100X more expensive.
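> A self-contained sketch of the comparison being described (illustrative 
> only, not part of the patch):
> {code}
> import java.util.ArrayList;
> import java.util.HashMap;
> import java.util.List;
> import java.util.Map;
> import java.util.TreeMap;
>
> public class MapLookupSketch {
>   public static void main(String[] args) {
>     // ~100k keys approximates a large cluster's storage-id -> node map.
>     List<String> keys = new ArrayList<String>();
>     for (int i = 0; i < 100000; i++) keys.add("storage-" + i);
>     Map<String, Integer> tree = new TreeMap<String, Integer>();
>     Map<String, Integer> hash = new HashMap<String, Integer>();
>     for (String k : keys) { tree.put(k, 1); hash.put(k, 1); }
>     time("TreeMap", tree, keys);  // O(log n) lookups, plus compareTo cost
>     time("HashMap", hash, keys);  // O(1) expected lookups
>   }
>
>   static void time(String name, Map<String, Integer> m, List<String> keys) {
>     long sum = 0, t0 = System.nanoTime();
>     for (int r = 0; r < 20; r++) {
>       for (String k : keys) sum += m.get(k);
>     }
>     System.out.println(name + ": "
>         + (System.nanoTime() - t0) / 1000000 + " ms (checksum " + sum + ")");
>   }
> }
> {code}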



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8245) Standby namenode doesn't process DELETED_BLOCK if the add block request is in edit log.

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536527#comment-14536527
 ] 

Hudson commented on HDFS-8245:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2120 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2120/])
HDFS-8245. Standby namenode doesn't process DELETED_BLOCK if the addblock 
request is in edit log. Contributed by Rushabh S Shah. (kihwal: rev 
2d4ae3d18bc530fa9f81ee616db8af3395705fb9)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockReplacement.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencing.java


> Standby namenode doesn't process DELETED_BLOCK if the add block request is in 
> edit log.
> ---
>
> Key: HDFS-8245
> URL: https://issues.apache.org/jira/browse/HDFS-8245
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>  Labels: BB2015-05-TBR
> Fix For: 2.7.1
>
> Attachments: HDFS-8245-1.patch, HDFS-8245.patch
>
>
> The following series of events happened on Standby namenode :
> 2015-04-09 07:47:21,735 \[Edit log tailer] INFO ha.EditLogTailer: Triggering 
> log roll on remote NameNode Active Namenode (ANN)
> 2015-04-09 07:58:01,858 \[Edit log tailer] INFO ha.EditLogTailer: Triggering 
> log roll on remote NameNode ANN
> The following series of events happened on the Active Namenode:
> 2015-04-09 07:47:21,747 \[IPC Server handler 99 on 8020] INFO 
> namenode.FSNamesystem: Roll Edit Log from Standby NN (SNN)
> 2015-04-09 07:58:01,868 \[IPC Server handler 18 on 8020] INFO 
> namenode.FSNamesystem: Roll Edit Log from SNN
> The following series of events happened on datanode ( {color:red} datanodeA 
> {color}):
> 2015-04-09 07:52:15,817 \[DataXceiver for client 
> DFSClient_attempt_1428022041757_102831_r_000107_0_1139131345_1 at /:51078 
> \[Receiving block 
> BP-595383232--1360869396230:blk_1570321882_1102029183867]] INFO 
> datanode.DataNode: Receiving 
> BP-595383232--1360869396230:blk_1570321882_1102029183867 src: 
> /client:51078 dest: /{color:red}datanodeA:1004{color}
> 2015-04-09 07:52:15,969 \[PacketResponder: 
> BP-595383232--1360869396230:blk_1570321882_1102029183867, 
> type=HAS_DOWNSTREAM_IN_PIPELINE] INFO DataNode.clienttrace: src: 
> /client:51078, dest: /{color:red}datanodeA:1004{color}, bytes: 20, op: 
> HDFS_WRITE, cliID: 
> DFSClient_attempt_1428022041757_102831_r_000107_0_1139131345_1, offset: 0, 
> srvID: 356a8a98-826f-446d-8f4c-ce288c1f0a75, blockid: 
> BP-595383232--1360869396230:blk_1570321882_1102029183867, duration: 
> 148948385
> 2015-04-09 07:52:15,969 \[PacketResponder: 
> BP-595383232--1360869396230:blk_1570321882_1102029183867, 
> type=HAS_DOWNSTREAM_IN_PIPELINE] INFO datanode.DataNode: PacketResponder: 
> BP-595383232--1360869396230:blk_1570321882_1102029183867, 
> type=HAS_DOWNSTREAM_IN_PIPELINE terminating
> 2015-04-09 07:52:25,970 \[DataXceiver for client /{color:red} 
> {color}:52827 \[Copying block 
> BP-595383232--1360869396230:blk_1570321882_1102029183867]] INFO 
> datanode.DataNode: Copied 
> BP-595383232--1360869396230:blk_1570321882_1102029183867 to 
> <{color:red}datanodeB{color}>:52827
> 2015-04-09 07:52:28,187 \[DataNode:   heartbeating to ANN:8020] INFO 
> impl.FsDatasetAsyncDiskService: Scheduling blk_1570321882_1102029183867 file 
> /blk_1570321882 for deletion
> 2015-04-09 07:52:28,188 \[Async disk worker #1482 for volume ] INFO 
> impl.FsDatasetAsyncDiskService: Deleted BP-595383232--1360869396230 
> blk_1570321882_1102029183867 file /blk_1570321882
> Then we failed over for an upgrade, and the standby became active.
> When we ran an ls command on this file, we got the following exception:
> 15/04/09 22:07:39 WARN hdfs.BlockReaderFactory: I/O error constructing remote 
> block reader.
> java.io.IOException: Got error for OP_READ_BLOCK, self=/client:32947, 
> remote={color:red}datanodeA:1004{color}, for file , for pool 
> BP-595383232--1360869396230 block 1570321882_1102029183867
> at 
> org.apache.hadoop.hdfs.RemoteBlockReader2.checkSuccess(RemoteBlockReader2.java:445)
> at 
> org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:410)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:815)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:693)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.build(Blo

[jira] [Commented] (HDFS-6757) Simplify lease manager with INodeID

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536523#comment-14536523
 ] 

Hudson commented on HDFS-6757:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2120 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2120/])
HDFS-6757. Simplify lease manager with INodeID. Contributed by Haohui Mai. 
(wheat9: rev 00fe1ed3a4b3ee35fe24be257ec36445d2f44d63)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/AbstractINodeDiffList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeSymlink.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestLeaseManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectorySnapshottableFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestGetBlockLocations.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NameNodeAdapter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeMap.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestINodeFileUnderConstructionWithSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirDeleteOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLease.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDiskspaceQuotaUpdate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectoryWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
Add missing entry in CHANGES.txt for HDFS-6757. (wheat9: rev 
3becc3af8382caed2c3bf941f8fed6daf6e7bc26)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Simplify lease manager with INodeID
> ---
>
> Key: HDFS-6757
> URL: https://issues.apache.org/jira/browse/HDFS-6757
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.8.0
>
> Attachments: HDFS-6757.000.patch, HDFS-6757.001.patch, 
> HDFS-6757.002.patch, HDFS-6757.003.patch, HDFS-6757.004.patch, 
> HDFS-6757.005.patch, HDFS-6757.006.patch, HDFS-6757.007.patch, 
> HDFS-6757.008.patch, HDFS-6757.009.patch, HDFS-6757.010.patch, 
> HDFS-6757.011.patch, HDFS-6757.012.patch, HDFS-6757.013.patch, 
> HDFS-6757.014.patch, HDFS-6757.015.patch, HDFS-6757.016.patch, 
> HDFS-6757.017.patch
>
>
> Currently the lease manager records leases based on path instead of inode 
> ids. Therefore, the lease manager needs to carefully keep track of the paths 
> of active leases during renames and deletes. This can be a non-trivial task.
> This jira proposes to simplify the logic by tracking leases using inodeids 
> instead of paths.
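> A sketch of the shape of the proposed change (the field names here are 
> hypothetical):
> {code}
> // Before: leases indexed by path; every rename/delete must rewrite keys.
> private final SortedMap<String, Lease> leasesByPath =
>     new TreeMap<String, Lease>();
>
> // After: leases indexed by inode id, which is stable across renames.
> private final SortedMap<Long, Lease> leasesById =
>     new TreeMap<Long, Lease>();
> {code}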



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5640) Add snapshot methods to FileContext.

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536519#comment-14536519
 ] 

Hudson commented on HDFS-5640:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2120 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2120/])
HDFS-5640. Add snapshot methods to FileContext. Contributed by Rakesh R. 
(cnauroth: rev 26f61d41df9e90a5053d9265f535cc492196f2a5)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestFileContextSnapshot.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFs.java


> Add snapshot methods to FileContext.
> 
>
> Key: HDFS-5640
> URL: https://issues.apache.org/jira/browse/HDFS-5640
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, snapshots
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Chris Nauroth
>Assignee: Rakesh R
> Fix For: 2.8.0
>
> Attachments: HDFS-5640-001.patch, HDFS-5640-002.patch, 
> HDFS-5640-003.patch, HDFS-5640-004.patch, HDFS-5640-005.patch, 
> HDFS-5640-007.patch, HDFS-5640-007.patch
>
>
> Currently, methods related to HDFS snapshots are defined on {{FileSystem}}.  
> For feature parity, these methods need to be added to {{FileContext}}.  This 
> would also require updating {{AbstractFileSystem}} and the {{Hdfs}} subclass.
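> A short usage sketch of the requested parity (the method names mirror the 
> {{FileSystem}} snapshot API and are assumptions here):
> {code}
> FileContext fc = FileContext.getFileContext();
> Path dir = new Path("/data");
> // The same operations FileSystem already exposes, now on FileContext.
> Path snap = fc.createSnapshot(dir, "s1");
> fc.renameSnapshot(dir, "s1", "s2");
> fc.deleteSnapshot(dir, "s2");
> {code}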



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8327) Simplify quota calculations for snapshots and truncate

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536552#comment-14536552
 ] 

Hudson commented on HDFS-8327:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2120 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2120/])
HDFS-8327. Compute storage type quotas in 
INodeFile.computeQuotaDeltaForTruncate(). Contributed by Haohui Mai. (wheat9: 
rev 02a4a22b9c0e22c2e7dd6ec85edd5c5a167fe19f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestFileWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCommitBlockSynchronization.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestTruncateQuotaUpdate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


> Simplify quota calculations for snapshots and truncate
> --
>
> Key: HDFS-8327
> URL: https://issues.apache.org/jira/browse/HDFS-8327
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-8327.000.patch, HDFS-8327.001.patch, 
> HDFS-8327.002.patch, HDFS-8327.003.patch, HDFS-8327.004.patch
>
>
> To simplify the code {{INodeFile.computeQuotaDeltaForTruncate()}} can compute 
> the storage type quotas as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7894) Rolling upgrade readiness is not updated in jmx until query command is issued.

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536546#comment-14536546
 ] 

Hudson commented on HDFS-7894:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2120 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2120/])
HDFS-7894. Rolling upgrade readiness is not updated in jmx until query command 
is issued. Contributed by Brahma Reddy Battula. (kihwal: rev 
6f622672b62aa8d719060063ef0e47480cdc8655)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


> Rolling upgrade readiness is not updated in jmx until query command is issued.
> --
>
> Key: HDFS-7894
> URL: https://issues.apache.org/jira/browse/HDFS-7894
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Brahma Reddy Battula
>Priority: Critical
>  Labels: BB2015-05-TBR
> Fix For: 2.7.1
>
> Attachments: HDFS-7894-002.patch, HDFS-7894-003.patch, HDFS-7894.patch
>
>
> When an HDFS rolling upgrade is started and a rollback image is 
> created/uploaded, the active NN does not update its {{rollingUpgradeInfo}} 
> until it receives a query command via RPC. This results in inconsistent info 
> showing up in the web UI and its jmx page.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8097) TestFileTruncate is failing intermittently

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536522#comment-14536522
 ] 

Hudson commented on HDFS-8097:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2120 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2120/])
HDFS-8097. TestFileTruncate is failing intermittently. (Contributed by Rakesh 
R) (arp: rev 59995cec4ad9efcef7d4641375ca3eb40e2429ef)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java


> TestFileTruncate is failing intermittently
> --
>
> Key: HDFS-8097
> URL: https://issues.apache.org/jira/browse/HDFS-8097
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Rakesh R
>Assignee: Rakesh R
>  Labels: BB2015-05-RFC
> Fix For: 2.8.0
>
> Attachments: HDFS-8097-001.patch, HDFS-8097-002.patch, 
> HDFS-8097-003.patch
>
>
> {code}
> java.lang.AssertionError: Bad disk space usage expected:<45> but was:<12>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncate4Symlink(TestFileTruncate.java:1158)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
>   at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8340) Fix NFS documentation of nfs.wtmax

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536533#comment-14536533
 ] 

Hudson commented on HDFS-8340:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2120 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2120/])
HDFS-8340. Fix NFS documentation of nfs.wtmax. (Contributed by Ajith S) (arp: 
rev a2d40bced9f793c8c4193f1447425ca7f3f8f357)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsNfsGateway.md


> Fix NFS documentation of nfs.wtmax
> --
>
> Key: HDFS-8340
> URL: https://issues.apache.org/jira/browse/HDFS-8340
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, nfs
>Affects Versions: 2.7.0
>Reporter: Ajith S
>Assignee: Ajith S
>Priority: Minor
>  Labels: BB2015-05-RFC
> Fix For: 2.8.0
>
> Attachments: HDFS-8340.patch
>
>
> According to documentation
> http://hadoop.apache.org/docs/r2.7.0/hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html
> bq. For larger data transfer size, one needs to update “nfs.rtmax” and 
> “nfs.rtmax” in hdfs-site.xml.
> nfs.rtmax is mentioned twice; it should instead read “nfs.rtmax” and 
> “nfs.wtmax”.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8284) Update documentation about how to use HTrace with HDFS

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536531#comment-14536531
 ] 

Hudson commented on HDFS-8284:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2120 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2120/])
HDFS-8284. Update documentation about how to use HTrace with HDFS (Masatake 
Iwasaki via Colin P. McCabe) (cmccabe: rev 
8f7c2364d7254a1d987b095ba442bf20727796f8)
* hadoop-common-project/hadoop-common/src/site/markdown/Tracing.md
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Update documentation about how to use HTrace with HDFS
> --
>
> Key: HDFS-8284
> URL: https://issues.apache.org/jira/browse/HDFS-8284
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0
>
> Attachments: HDFS-8284.001.patch, HDFS-8284.002.patch, 
> HDFS-8284.003.patch
>
>
> Tracing that originates in the DFSClient uses configuration keys prefixed 
> with "dfs.client.htrace" after HDFS-8213. Server-side tracing uses conf keys 
> prefixed with "dfs.htrace".
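> An hdfs-site.xml sketch of the two prefixes (the {{spanreceiver.classes}} 
> suffix and the receiver class are taken as assumptions here; the updated 
> Tracing.md is authoritative):
> {code}
> <!-- Client-side tracing (DFSClient). -->
> <property>
>   <name>dfs.client.htrace.spanreceiver.classes</name>
>   <value>org.apache.htrace.impl.LocalFileSpanReceiver</value>
> </property>
> <!-- Server-side tracing (NameNode/DataNode). -->
> <property>
>   <name>dfs.htrace.spanreceiver.classes</name>
>   <value>org.apache.htrace.impl.LocalFileSpanReceiver</value>
> </property>
> {code}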



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4185) Add a metric for number of active leases

2015-05-09 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536498#comment-14536498
 ] 

Rakesh R commented on HDFS-4185:


Thank you [~raviprak] for your time and review comments. Could you please give 
some more insight into the following statement:

bq. So your getNumActiveLeases() is actually returning number of files open for 
write (which is >= number of active leases)
{{LeaseManager}} has the data structure {{private final Collection<String> 
paths = new TreeSet<String>();}}. IIUC, during a write/append operation the 
client that owns the file gets a unique lease from the NN and maintains it by 
adding a path entry to this list. Could you please tell me in which case 
{{files open for write (which is >= number of active leases)}} holds? Am I 
missing any case?

Here, the {{NumActiveLeases}} metric shows the total count of unique file 
paths, which is the number of leases granted by the NN at that point in time.
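In other words, my reading is that the metric reduces to the size of that 
path set (a sketch; the accessor name is hypothetical):
{code}
// One path entry per file open for write, so the count of active
// leases is the size of the path set.
public synchronized int getNumActiveLeases() {
  return paths.size();
}
{code}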

> Add a metric for number of active leases
> 
>
> Key: HDFS-4185
> URL: https://issues.apache.org/jira/browse/HDFS-4185
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 0.23.4, 2.0.2-alpha
>Reporter: Kihwal Lee
>Assignee: Rakesh R
>  Labels: BB2015-05-TBR
> Attachments: HDFS-4185-001.patch, HDFS-4185-002.patch, 
> HDFS-4185-003.patch, HDFS-4185-004.patch, HDFS-4185-005.patch
>
>
> We have seen cases of systematic open file leaks, which could have been 
> detected if we have a metric that shows number of active leases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8362) Java Compilation Error in TestHdfsConfigFields.java and TestMapreduceConfigFields.java

2015-05-09 Thread Arshad Mohammad (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536490#comment-14536490
 ] 

Arshad Mohammad commented on HDFS-8362:
---

The compilation error is in both trunk and branch-2, so the patch should be 
pushed to both trunk and branch-2.

> Java Compilation Error in TestHdfsConfigFields.java and 
> TestMapreduceConfigFields.java
> --
>
> Key: HDFS-8362
> URL: https://issues.apache.org/jira/browse/HDFS-8362
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Arshad Mohammad
>Assignee: Arshad Mohammad
> Fix For: 2.8.0
>
> Attachments: HDFS-8362-1.patch
>
>
> In TestHdfsConfigFields.java the failure is because of a wrong package name.
> In TestMapreduceConfigFields.java the failure is because of:
> i) a wrong package name
> ii) missing imports



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6775) Users may see TrashPolicy if hdfs dfs -rm is run

2015-05-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536483#comment-14536483
 ] 

Hadoop QA commented on HDFS-6775:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 37s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 30s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 44s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m  5s | The applied patch generated  1 
new checkstyle issues (total was 47, now 47). |
| {color:red}-1{color} | whitespace |   0m  9s | The patch has 25  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 38s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 40s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | common tests |  24m  6s | Tests passed in 
hadoop-common. |
| | |  61m 29s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12731703/HDFS-6775.1.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 02a4a22 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/10901/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10901/artifact/patchprocess/whitespace.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10901/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10901/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10901/console |


This message was automatically generated.

> Users may see TrashPolicy if hdfs dfs -rm is run
> 
>
> Key: HDFS-6775
> URL: https://issues.apache.org/jira/browse/HDFS-6775
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: J.Andreina
> Attachments: HDFS-6775.1.patch
>
>
> Doing 'hdfs dfs -rm file' generates an extra log message on the console:
> {code}
> 14/07/29 15:18:56 INFO fs.TrashPolicyDefault: Namenode trash configuration: 
> Deletion interval = 0 minutes, Emptier interval = 0 minutes.
> {code}
> This shouldn't be seen by users.
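> One plausible shape of a fix (a sketch only; the variable names are 
> hypothetical and the committed change may take a different route):
> {code}
> // Demote the trash-configuration message so interactive shell users
> // do not see it by default.
> LOG.debug("Namenode trash configuration: Deletion interval = "
>     + deletionInterval / MSECS_PER_MINUTE + " minutes, Emptier interval = "
>     + emptierInterval / MSECS_PER_MINUTE + " minutes.");
> {code}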



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8362) Java Compilation Error in TestHdfsConfigFields.java and TestMapreduceConfigFields.java

2015-05-09 Thread Arshad Mohammad (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arshad Mohammad updated HDFS-8362:
--
Summary: Java Compilation Error in TestHdfsConfigFields.java and 
TestMapreduceConfigFields.java  (was: Compliation Error in 
TestHdfsConfigFields.java and TestMapreduceConfigFields.java)

> Java Compilation Error in TestHdfsConfigFields.java and 
> TestMapreduceConfigFields.java
> --
>
> Key: HDFS-8362
> URL: https://issues.apache.org/jira/browse/HDFS-8362
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Arshad Mohammad
>Assignee: Arshad Mohammad
> Fix For: 2.8.0
>
> Attachments: HDFS-8362-1.patch
>
>
> In TestHdfsConfigFields.java the failure is because of a wrong package name.
> In TestMapreduceConfigFields.java the failure is because of:
> i) a wrong package name
> ii) missing imports



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8362) Compliation Error in TestHdfsConfigFields.java and TestMapreduceConfigFields.java

2015-05-09 Thread Arshad Mohammad (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arshad Mohammad updated HDFS-8362:
--
   Fix Version/s: 2.8.0
Target Version/s: 2.8.0
  Status: Patch Available  (was: Open)

> Compliation Error in TestHdfsConfigFields.java and 
> TestMapreduceConfigFields.java
> -
>
> Key: HDFS-8362
> URL: https://issues.apache.org/jira/browse/HDFS-8362
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Arshad Mohammad
>Assignee: Arshad Mohammad
> Fix For: 2.8.0
>
> Attachments: HDFS-8362-1.patch
>
>
> In TestHdfsConfigFields.java the failure is because of a wrong package name.
> In TestMapreduceConfigFields.java the failure is because of:
> i) a wrong package name
> ii) missing imports



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8362) Compliation Error in TestHdfsConfigFields.java and TestMapreduceConfigFields.java

2015-05-09 Thread Arshad Mohammad (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arshad Mohammad updated HDFS-8362:
--
Attachment: HDFS-8362-1.patch

> Compliation Error in TestHdfsConfigFields.java and 
> TestMapreduceConfigFields.java
> -
>
> Key: HDFS-8362
> URL: https://issues.apache.org/jira/browse/HDFS-8362
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Arshad Mohammad
>Assignee: Arshad Mohammad
> Attachments: HDFS-8362-1.patch
>
>
> In TestHdfsConfigFields.java the failure is because of a wrong package name.
> In TestMapreduceConfigFields.java the failure is because of:
> i) a wrong package name
> ii) missing imports



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8327) Simplify quota calculations for snapshots and truncate

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536459#comment-14536459
 ] 

Hudson commented on HDFS-8327:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #922 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/922/])
HDFS-8327. Compute storage type quotas in 
INodeFile.computeQuotaDeltaForTruncate(). Contributed by Haohui Mai. (wheat9: 
rev 02a4a22b9c0e22c2e7dd6ec85edd5c5a167fe19f)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestFileWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestTruncateQuotaUpdate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCommitBlockSynchronization.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


> Simplify quota calculations for snapshots and truncate
> --
>
> Key: HDFS-8327
> URL: https://issues.apache.org/jira/browse/HDFS-8327
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-8327.000.patch, HDFS-8327.001.patch, 
> HDFS-8327.002.patch, HDFS-8327.003.patch, HDFS-8327.004.patch
>
>
> To simplify the code {{INodeFile.computeQuotaDeltaForTruncate()}} can compute 
> the storage type quotas as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3384) DataStreamer thread should be closed immediatly when failed to setup a PipelineForAppendOrRecovery

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536451#comment-14536451
 ] 

Hudson commented on HDFS-3384:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #922 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/922/])
HDFS-3384. DataStreamer thread should be closed immediatly when failed to setup 
a PipelineForAppendOrRecovery (Contributed by Uma Maheswara Rao G) 
(vinayakumarb: rev c648317a68891e1c900f04b7a9c98ba40c5faddb)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java


> DataStreamer thread should be closed immediatly when failed to setup a 
> PipelineForAppendOrRecovery
> --
>
> Key: HDFS-3384
> URL: https://issues.apache.org/jira/browse/HDFS-3384
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.0.0-alpha
>Reporter: Brahma Reddy Battula
>Assignee: Uma Maheswara Rao G
> Fix For: 2.8.0
>
> Attachments: HDFS-3384-3.patch, HDFS-3384-4.patch, HDFS-3384.patch, 
> HDFS-3384_2.patch, HDFS-3384_2.patch, HDFS-3384_2.patch
>
>
> Scenario:
> =
> write a file
> corrupt a block manually
> call append...
> {noformat}
> 2012-04-19 09:33:10,776 INFO  hdfs.DFSClient 
> (DFSOutputStream.java:createBlockOutputStream(1059)) - Exception in 
> createBlockOutputStream
> java.io.EOFException: Premature EOF: no length prefix available
>   at 
> org.apache.hadoop.hdfs.protocol.HdfsProtoUtil.vintPrefixed(HdfsProtoUtil.java:162)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1039)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:939)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
> 2012-04-19 09:33:10,807 WARN  hdfs.DFSClient (DFSOutputStream.java:run(549)) 
> - DataStreamer Exception
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:510)
> 2012-04-19 09:33:10,807 WARN  hdfs.DFSClient 
> (DFSOutputStream.java:hflush(1511)) - Error while syncing
> java.io.IOException: All datanodes 10.18.40.20:50010 are bad. Aborting...
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:908)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
> java.io.IOException: All datanodes 10.18.40.20:50010 are bad. Aborting...
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:908)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
> {noformat}
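> A sketch of the behavior the summary asks for (the field and helper names 
> are assumptions based on DFSOutputStream of that era, not the committed 
> patch):
> {code}
> // On a failed pipeline setup for append/recovery, terminate the
> // DataStreamer immediately instead of spinning on a null blockStream.
> if (blockStream == null) {
>   streamerClosed = true;
>   setLastException(new IOException(
>       "Failed to set up pipeline for append/recovery on " + block));
>   return;
> }
> {code}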



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7559) Create unit test to automatically compare HDFS related classes and hdfs-default.xml

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536464#comment-14536464
 ] 

Hudson commented on HDFS-7559:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #922 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/922/])
HDFS-7559. Create unit test to automatically compare HDFS related classes and 
hdfs-default.xml. (Ray Chiang via asuresh) (Arun Suresh: rev 
3cefc02af73faa12a6edce904b98ba543167bec5)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestHdfsConfigFields.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Create unit test to automatically compare HDFS related classes and 
> hdfs-default.xml
> ---
>
> Key: HDFS-7559
> URL: https://issues.apache.org/jira/browse/HDFS-7559
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Minor
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HDFS-7559.001.patch, HDFS-7559.002.patch, 
> HDFS-7559.003.patch, HDFS-7559.004.patch
>
>
> Create a unit test that will automatically compare the fields in the various 
> HDFS related classes and hdfs-default.xml. It should throw an error if a 
> property is missing in either the class or the file.
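A minimal sketch of the idea, with hypothetical class and resource names (the
committed test is TestHdfsConfigFields and parses hdfs-default.xml with
Hadoop's Configuration; the sketch below uses plain java.util.Properties to
stay self-contained): reflectively collect the String constants declared in a
config-keys class, diff them against the property names in the defaults file,
and fail on any mismatch in either direction.

{code}
import java.io.InputStream;
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.HashSet;
import java.util.Properties;
import java.util.Set;

public class ConfigFieldsCheck {
    // Collect the values of static final String fields (the config keys).
    static Set<String> keysDeclaredIn(Class<?> clazz) throws Exception {
        Set<String> keys = new HashSet<>();
        for (Field f : clazz.getDeclaredFields()) {
            if (Modifier.isStatic(f.getModifiers())
                    && Modifier.isFinal(f.getModifiers())
                    && f.getType() == String.class) {
                f.setAccessible(true);
                keys.add((String) f.get(null));
            }
        }
        return keys;
    }

    public static void main(String[] args) throws Exception {
        // Assumption: defaults are available as a Java XML properties
        // resource; hdfs-default.xml itself is in Hadoop's Configuration
        // format, so the real test parses it with Configuration instead.
        Properties defaults = new Properties();
        try (InputStream in =
                ConfigFieldsCheck.class.getResourceAsStream("/defaults.xml")) {
            defaults.loadFromXML(in);
        }
        // "MyConfigKeys" is a hypothetical keys class for this sketch.
        Set<String> inClass = keysDeclaredIn(Class.forName("MyConfigKeys"));
        Set<String> inFile = new HashSet<>(defaults.stringPropertyNames());

        Set<String> onlyInClass = new HashSet<>(inClass);
        onlyInClass.removeAll(inFile);
        Set<String> onlyInFile = new HashSet<>(inFile);
        onlyInFile.removeAll(inClass);

        if (!onlyInClass.isEmpty() || !onlyInFile.isEmpty()) {
            throw new AssertionError("Keys only in class: " + onlyInClass
                    + "; keys only in file: " + onlyInFile);
        }
    }
}
{code}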



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7894) Rolling upgrade readiness is not updated in jmx until query command is issued.

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536453#comment-14536453
 ] 

Hudson commented on HDFS-7894:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #922 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/922/])
HDFS-7894. Rolling upgrade readiness is not updated in jmx until query command 
is issued. Contributed by Brahma Reddy Battula. (kihwal: rev 
6f622672b62aa8d719060063ef0e47480cdc8655)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Rolling upgrade readiness is not updated in jmx until query command is issued.
> --
>
> Key: HDFS-7894
> URL: https://issues.apache.org/jira/browse/HDFS-7894
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Brahma Reddy Battula
>Priority: Critical
>  Labels: BB2015-05-TBR
> Fix For: 2.7.1
>
> Attachments: HDFS-7894-002.patch, HDFS-7894-003.patch, HDFS-7894.patch
>
>
> When an HDFS rolling upgrade is started and a rollback image is 
> created/uploaded, the active NN does not update its {{rollingUpgradeInfo}} 
> until it receives a query command via RPC. This results in inconsistent info 
> showing up in the web UI and its jmx page.
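The mechanism is a lazily refreshed cache: the readiness flag is recomputed
only inside the query RPC handler, so jmx and the web UI serve whatever stale
value was cached last. A toy model of the lazy pattern and the eager fix (all
names hypothetical, not FSNamesystem code):

{code}
public class UpgradeStatus {
    private volatile boolean cachedReady = false;

    // Lazy pattern (the bug): the cache is refreshed only by the query RPC,
    // e.g. "hdfs dfsadmin -rollingUpgrade query".
    public boolean queryRpc() {
        cachedReady = rollbackImageExists();
        return cachedReady;
    }

    // jmx and the web UI read the cache directly, so they stay stale until
    // some client happens to issue a query. The fix is to refresh here too:
    public boolean jmxRead() {
        cachedReady = rollbackImageExists(); // refresh added by the fix
        return cachedReady;
    }

    private boolean rollbackImageExists() {
        return true; // stand-in for "rollback fsimage created/uploaded"
    }
}
{code}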



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8362) Compilation Error in TestHdfsConfigFields.java and TestMapreduceConfigFields.java

2015-05-09 Thread Arshad Mohammad (JIRA)
Arshad Mohammad created HDFS-8362:
-

 Summary: Compilation Error in TestHdfsConfigFields.java and 
TestMapreduceConfigFields.java
 Key: HDFS-8362
 URL: https://issues.apache.org/jira/browse/HDFS-8362
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Arshad Mohammad
Assignee: Arshad Mohammad


In TestHdfsConfigFields.java the failure is because of a wrong package name.
In TestMapreduceConfigFields.java the failure is because of:
i) a wrong package name
ii) missing imports



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8097) TestFileTruncate is failing intermittently

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536427#comment-14536427
 ] 

Hudson commented on HDFS-8097:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #922 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/922/])
HDFS-8097. TestFileTruncate is failing intermittently. (Contributed by Rakesh 
R) (arp: rev 59995cec4ad9efcef7d4641375ca3eb40e2429ef)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java


> TestFileTruncate is failing intermittently
> --
>
> Key: HDFS-8097
> URL: https://issues.apache.org/jira/browse/HDFS-8097
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Rakesh R
>Assignee: Rakesh R
>  Labels: BB2015-05-RFC
> Fix For: 2.8.0
>
> Attachments: HDFS-8097-001.patch, HDFS-8097-002.patch, 
> HDFS-8097-003.patch
>
>
> {code}
> java.lang.AssertionError: Bad disk space usage expected:<45> but was:<12>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncate4Symlink(TestFileTruncate.java:1158)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
>   at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8326) Documentation about when checkpoints are run is out of date

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536462#comment-14536462
 ] 

Hudson commented on HDFS-8326:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #922 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/922/])
HDFS-8326. Documentation about when checkpoints are run is out of date. (Misty 
Stanley-Jones via xyao) (xyao: rev d0e75e60fb16ffd6c95648a06ff3958722f71e4d)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md


> Documentation about when checkpoints are run is out of date
> ---
>
> Key: HDFS-8326
> URL: https://issues.apache.org/jira/browse/HDFS-8326
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.3.0
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Fix For: 2.8.0
>
> Attachments: HDFS-8326.001.patch, HDFS-8326.002.patch, 
> HDFS-8326.003.patch, HDFS-8326.004.patch, HDFS-8326.patch
>
>
> Apparently checkpointing by time interval and by transaction count are both 
> supported in at least HDFS 2.3, but the documentation does not reflect this.
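For reference, the two triggers correspond to two configuration keys in
Hadoop 2.x, dfs.namenode.checkpoint.period and dfs.namenode.checkpoint.txns;
a short illustrative snippet (the values are examples, not recommendations):

{code}
import org.apache.hadoop.conf.Configuration;

public class CheckpointConf {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Checkpoint at least every hour (seconds)...
        conf.setLong("dfs.namenode.checkpoint.period", 3600);
        // ...or sooner, once a million uncheckpointed transactions accumulate.
        conf.setLong("dfs.namenode.checkpoint.txns", 1_000_000);
        System.out.println(conf.get("dfs.namenode.checkpoint.period"));
    }
}
{code}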



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7433) Optimize performance of DatanodeManager's node map

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536434#comment-14536434
 ] 

Hudson commented on HDFS-7433:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #922 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/922/])
HDFS-7433. Optimize performance of DatanodeManager's node map. Contributed by 
Daryn Sharp. (kihwal: rev 7a7960be41c32f20ffec9fea811878b113da62db)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Optimize performance of DatanodeManager's node map
> --
>
> Key: HDFS-7433
> URL: https://issues.apache.org/jira/browse/HDFS-7433
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-7433.patch, HDFS-7433.patch, HDFS-7433.patch, 
> HDFS-7433.patch
>
>
> The datanode map is currently a {{TreeMap}}.  For many thousands of 
> datanodes, tree lookups are ~10X more expensive than a {{HashMap}}.  
> Insertions and removals are up to 100X more expensive.
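A quick, self-contained way to see the gap (an illustrative micro-benchmark,
not the committed patch): fill both map types with synthetic datanode keys and
time the lookups. TreeMap lookups are O(log n) with pointer-chasing through
tree nodes, while HashMap lookups are amortized O(1).

{code}
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class MapBench {
    static long timeLookups(Map<String, Integer> map, int n) {
        long start = System.nanoTime();
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += map.get("datanode-" + i);
        }
        long elapsed = System.nanoTime() - start;
        if (sum == 42) System.out.println(); // keep the loop from being elided
        return elapsed;
    }

    public static void main(String[] args) {
        int n = 100_000;
        Map<String, Integer> tree = new TreeMap<>();
        Map<String, Integer> hash = new HashMap<>();
        for (int i = 0; i < n; i++) {
            tree.put("datanode-" + i, i);
            hash.put("datanode-" + i, i);
        }
        System.out.println("TreeMap lookups: " + timeLookups(tree, n) + " ns");
        System.out.println("HashMap lookups: " + timeLookups(hash, n) + " ns");
    }
}
{code}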



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6757) Simplify lease manager with INodeID

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536428#comment-14536428
 ] 

Hudson commented on HDFS-6757:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #922 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/922/])
HDFS-6757. Simplify lease manager with INodeID. Contributed by Haohui Mai. 
(wheat9: rev 00fe1ed3a4b3ee35fe24be257ec36445d2f44d63)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectorySnapshottableFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeMap.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/AbstractINodeDiffList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NameNodeAdapter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestGetBlockLocations.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirDeleteOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestLeaseManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDiskspaceQuotaUpdate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLease.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectoryWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestINodeFileUnderConstructionWithSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeSymlink.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
Add missing entry in CHANGES.txt for HDFS-6757. (wheat9: rev 
3becc3af8382caed2c3bf941f8fed6daf6e7bc26)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Simplify lease manager with INodeID
> ---
>
> Key: HDFS-6757
> URL: https://issues.apache.org/jira/browse/HDFS-6757
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.8.0
>
> Attachments: HDFS-6757.000.patch, HDFS-6757.001.patch, 
> HDFS-6757.002.patch, HDFS-6757.003.patch, HDFS-6757.004.patch, 
> HDFS-6757.005.patch, HDFS-6757.006.patch, HDFS-6757.007.patch, 
> HDFS-6757.008.patch, HDFS-6757.009.patch, HDFS-6757.010.patch, 
> HDFS-6757.011.patch, HDFS-6757.012.patch, HDFS-6757.013.patch, 
> HDFS-6757.014.patch, HDFS-6757.015.patch, HDFS-6757.016.patch, 
> HDFS-6757.017.patch
>
>
> Currently the lease manager records leases based on path instead of inode 
> ids. Therefore, the lease manager needs to carefully keep track of the path 
> of active leases during renames and deletes. This can be a non-trivial task.
> This jira proposes to simplify the logic by tracking leases using inode ids 
> instead of paths.
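A toy sketch of why inode-id keys simplify the bookkeeping (a hypothetical
class, not the LeaseManager patch): with path keys every rename or delete must
rewrite the affected lease entries, while with inode-id keys a rename leaves
the lease map untouched, because the id never changes.

{code}
import java.util.HashMap;
import java.util.Map;

public class LeaseByInodeId {
    // Lease map keyed by the immutable inode id rather than a mutable path.
    private final Map<Long, String> leaseHolders = new HashMap<>();

    void addLease(long inodeId, String holder) {
        leaseHolders.put(inodeId, holder);
    }

    // A rename changes the path, not the inode id, so the lease map needs
    // no maintenance here -- the whole point of the refactoring.
    void rename(long inodeId, String newPath) {
        // update the namespace tree only; leaseHolders stays valid
    }

    String holderOf(long inodeId) {
        return leaseHolders.get(inodeId);
    }

    public static void main(String[] args) {
        LeaseByInodeId lm = new LeaseByInodeId();
        lm.addLease(16385L, "DFSClient_1");
        lm.rename(16385L, "/user/alice/renamed");
        System.out.println(lm.holderOf(16385L)); // still DFSClient_1
    }
}
{code}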



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5640) Add snapshot methods to FileContext.

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536424#comment-14536424
 ] 

Hudson commented on HDFS-5640:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #922 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/922/])
HDFS-5640. Add snapshot methods to FileContext. Contributed by Rakesh R. 
(cnauroth: rev 26f61d41df9e90a5053d9265f535cc492196f2a5)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestFileContextSnapshot.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Add snapshot methods to FileContext.
> 
>
> Key: HDFS-5640
> URL: https://issues.apache.org/jira/browse/HDFS-5640
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, snapshots
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Chris Nauroth
>Assignee: Rakesh R
> Fix For: 2.8.0
>
> Attachments: HDFS-5640-001.patch, HDFS-5640-002.patch, 
> HDFS-5640-003.patch, HDFS-5640-004.patch, HDFS-5640-005.patch, 
> HDFS-5640-007.patch, HDFS-5640-007.patch
>
>
> Currently, methods related to HDFS snapshots are defined on {{FileSystem}}.  
> For feature parity, these methods need to be added to {{FileContext}}.  This 
> would also require updating {{AbstractFileSystem}} and the {{Hdfs}} subclass.
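With the patch applied, usage looks roughly like this (a sketch against the
FileContext methods this JIRA adds for 2.8.0; it assumes fs.defaultFS points
at a running HDFS cluster and that /data has already been made snapshottable,
e.g. via hdfs dfsadmin -allowSnapshot):

{code}
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;

public class FileContextSnapshots {
    public static void main(String[] args) throws Exception {
        FileContext fc = FileContext.getFileContext();
        Path dir = new Path("/data"); // illustrative snapshottable directory

        // Create, rename, and delete a snapshot through FileContext,
        // mirroring the methods long available on FileSystem.
        Path snap = fc.createSnapshot(dir, "s1");
        System.out.println("created " + snap);
        fc.renameSnapshot(dir, "s1", "s2");
        fc.deleteSnapshot(dir, "s2");
    }
}
{code}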



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8346) libwebhdfs build fails during link due to unresolved external symbols.

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536458#comment-14536458
 ] 

Hudson commented on HDFS-8346:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #922 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/922/])
HDFS-8346. libwebhdfs build fails during link due to unresolved external 
symbols. Contributed by Chris Nauroth. (wheat9: rev 
f4ebbc6afc1297dced54bd2bd671e587c4ceb2fc)
* hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_http_client.h
* hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/CMakeLists.txt
* hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_http_client.c
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> libwebhdfs build fails during link due to unresolved external symbols.
> --
>
> Key: HDFS-8346
> URL: https://issues.apache.org/jira/browse/HDFS-8346
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.6.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 2.8.0
>
> Attachments: HDFS-8346.001.patch
>
>
> The libwebhdfs build is currently broken due to various unresolved external 
> symbols during link.  Multiple patches have introduced a few different forms 
> of this breakage.  See comments for full details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8284) Update documentation about how to use HTrace with HDFS

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536436#comment-14536436
 ] 

Hudson commented on HDFS-8284:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #922 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/922/])
HDFS-8284. Update documentation about how to use HTrace with HDFS (Masatake 
Iwasaki via Colin P. McCabe) (cmccabe: rev 
8f7c2364d7254a1d987b095ba442bf20727796f8)
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* hadoop-common-project/hadoop-common/src/site/markdown/Tracing.md


> Update documentation about how to use HTrace with HDFS
> --
>
> Key: HDFS-8284
> URL: https://issues.apache.org/jira/browse/HDFS-8284
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0
>
> Attachments: HDFS-8284.001.patch, HDFS-8284.002.patch, 
> HDFS-8284.003.patch
>
>
> Tracing originating in the DFSClient uses configuration keys prefixed with 
> "dfs.client.htrace" after HDFS-8213. Server side tracing uses conf keys 
> prefixed with "dfs.htrace".
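In other words, the same receiver has to be configured under two different
prefixes depending on which side should be traced. A hedged snippet (the
.spanreceiver.classes suffix and the receiver class are assumptions based on
htrace setups of that era, not taken from this JIRA):

{code}
import org.apache.hadoop.conf.Configuration;

public class HTraceKeys {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Client-side tracing (DFSClient) after HDFS-8213:
        conf.set("dfs.client.htrace.spanreceiver.classes",
                "org.apache.htrace.impl.LocalFileSpanReceiver");
        // Server-side tracing (NameNode/DataNode):
        conf.set("dfs.htrace.spanreceiver.classes",
                "org.apache.htrace.impl.LocalFileSpanReceiver");
    }
}
{code}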



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8113) Add check for null BlockCollection pointers in BlockInfoContiguous structures

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536473#comment-14536473
 ] 

Hudson commented on HDFS-8113:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #922 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/922/])
HDFS-8113. Add check for null BlockCollection pointers in BlockInfoContiguous 
structures (Chengbing Liu via Colin P. McCabe) (cmccabe: rev 
f523e963e4d88e4e459352387c6efeab59e7a809)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Add check for null BlockCollection pointers in BlockInfoContiguous structures
> -
>
> Key: HDFS-8113
> URL: https://issues.apache.org/jira/browse/HDFS-8113
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Chengbing Liu
>Assignee: Chengbing Liu
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-8113.02.patch, HDFS-8113.patch
>
>
> The following copy constructor can throw NullPointerException if {{bc}} is 
> null.
> {code}
>   protected BlockInfoContiguous(BlockInfoContiguous from) {
> this(from, from.bc.getBlockReplication());
> this.bc = from.bc;
>   }
> {code}
> We have observed that some DataNodes keep failing to complete block reports 
> to the NameNode. The stacktrace is as follows. Though we are not using the 
> latest version, the problem still exists.
> {quote}
> 2015-03-08 19:28:13,442 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> RemoteException in offerService
> org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> java.lang.NullPointerException
> at org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.(BlockInfo.java:80)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockToMarkCorrupt.(BlockManager.java:1696)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.checkReplicaCorrupt(BlockManager.java:2185)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReportedBlock(BlockManager.java:2047)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.reportDiff(BlockManager.java:1950)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:1823)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:1750)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.blockReport(NameNodeRpcServer.java:1069)
> at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.blockReport(DatanodeProtocolServerSideTranslatorPB.java:152)
> at 
> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:26382)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1623)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
> {quote}
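The shape of the guard is simple; here is a self-contained sketch (a toy
class, not the committed BlockInfoContiguous change) whose copy constructor
tolerates a source block that is no longer attached to a block collection:

{code}
public class BlockInfoToy {
    interface BlockCollection {
        short getBlockReplication();
    }

    private final short replication;
    private BlockCollection bc;

    BlockInfoToy(short replication, BlockCollection bc) {
        this.replication = replication;
        this.bc = bc;
    }

    // Guarded copy constructor: fall back to the copied replication factor
    // when the source block's collection reference is null, instead of
    // dereferencing it and throwing NullPointerException.
    BlockInfoToy(BlockInfoToy from) {
        this(from.bc != null ? from.bc.getBlockReplication()
                             : from.replication,
             from.bc);
    }

    public static void main(String[] args) {
        BlockInfoToy detached = new BlockInfoToy((short) 3, null);
        BlockInfoToy copy = new BlockInfoToy(detached); // no NPE
        System.out.println(copy.replication);
    }
}
{code}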



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8311) DataStreamer.transfer() should timeout the socket InputStream.

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536460#comment-14536460
 ] 

Hudson commented on HDFS-8311:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #922 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/922/])
HDFS-8311. DataStreamer.transfer() should timeout the socket InputStream. 
(Esteban Gutierrez via Yongjun Zhang) (yzhang: rev 
730f9930a48259f34e48404aee51e8d641cc3d36)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> DataStreamer.transfer() should timeout the socket InputStream.
> --
>
> Key: HDFS-8311
> URL: https://issues.apache.org/jira/browse/HDFS-8311
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Esteban Gutierrez
>Assignee: Esteban Gutierrez
> Fix For: 2.8.0
>
> Attachments: 
> 0001-HDFS-8311-DataStreamer.transfer-should-timeout-the-s.patch, 
> HDFS-8311.001.patch
>
>
> While validating some HA failure modes we found that HDFS clients can take a 
> long time to recover, or sometimes don't recover at all, since we don't set 
> up the socket timeout on the InputStream:
> {code}
> private void transfer () { ...
> ...
>  OutputStream unbufOut = NetUtils.getOutputStream(sock, writeTimeout);
>  InputStream unbufIn = NetUtils.getInputStream(sock);
> ...
> }
> {code}
> The InputStream should have its own timeout in the same way as the 
> OutputStream.
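A minimal JDK-level sketch of the fix idea: give the read side a timeout as
well, so a stalled datanode cannot block the transfer forever. Hadoop's
NetUtils.getInputStream also has an overload that takes a timeout; the snippet
below shows the equivalent effect with a plain socket (hostname and timeout
values are illustrative):

{code}
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.Socket;

public class SocketTimeouts {
    public static void main(String[] args) throws Exception {
        int connectTimeout = 5_000; // ms, illustrative
        int readTimeout = 30_000;   // ms, illustrative

        try (Socket sock = new Socket()) {
            sock.connect(new InetSocketAddress("datanode.example", 1004),
                    connectTimeout);
            // The crucial line: without a read timeout, in.read() below can
            // block indefinitely if the remote side stalls.
            sock.setSoTimeout(readTimeout);
            OutputStream out = sock.getOutputStream();
            InputStream in = sock.getInputStream();
            out.write(0);
            int b = in.read(); // throws SocketTimeoutException after 30s
            System.out.println(b);
        }
    }
}
{code}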



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8340) Fix NFS documentation of nfs.wtmax

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536438#comment-14536438
 ] 

Hudson commented on HDFS-8340:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #922 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/922/])
HDFS-8340. Fix NFS documentation of nfs.wtmax. (Contributed by Ajith S) (arp: 
rev a2d40bced9f793c8c4193f1447425ca7f3f8f357)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsNfsGateway.md


> Fix NFS documentation of nfs.wtmax
> --
>
> Key: HDFS-8340
> URL: https://issues.apache.org/jira/browse/HDFS-8340
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, nfs
>Affects Versions: 2.7.0
>Reporter: Ajith S
>Assignee: Ajith S
>Priority: Minor
>  Labels: BB2015-05-RFC
> Fix For: 2.8.0
>
> Attachments: HDFS-8340.patch
>
>
> According to documentation
> http://hadoop.apache.org/docs/r2.7.0/hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html
> bq. For larger data transfer size, one needs to update “nfs.rtmax” and 
> “nfs.rtmax” in hdfs-site.xml.
> nfs.rtmax is mentioned twice; instead it should be “nfs.rtmax” and 
> “nfs.wtmax”.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8274) NFS configuration nfs.dump.dir not working

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536456#comment-14536456
 ] 

Hudson commented on HDFS-8274:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #922 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/922/])
HDFS-8274. NFS configuration nfs.dump.dir not working (Contributed by Ajith S) 
(arp: rev cd6b26cce7457d08346b9d90a5f2f333ba4202d8)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/conf/NfsConfigKeys.java


> NFS configuration nfs.dump.dir not working
> --
>
> Key: HDFS-8274
> URL: https://issues.apache.org/jira/browse/HDFS-8274
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.0
>Reporter: Ajith S
>Assignee: Ajith S
>  Labels: BB2015-05-RFC
> Fix For: 2.8.0
>
> Attachments: HDFS-8274.patch
>
>
> As per the document 
> http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html
> we can configure
> {quote} 
> nfs.dump.dir
> {quote}
> as the NFS file dump directory. But using this configuration in 
> *hdfs-site.xml* doesn't work; when the NFS gateway is started, the default 
> location /tmp/.hdfs-nfs is used.
> The reason is the key expected in *NfsConfigKeys.java*:
> {code}
> public static final String DFS_NFS_FILE_DUMP_DIR_KEY = "nfs.file.dump.dir";
> {code}
> We can change it to *nfs.dump.dir* instead.
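Besides renaming the constant, a compatible fix could also map the old key
onto the documented one. A hedged sketch using Hadoop's key-deprecation
mechanism (keeping nfs.file.dump.dir working is an assumption of this sketch,
not something this JIRA proposes):

{code}
import org.apache.hadoop.conf.Configuration;

public class NfsDumpDirKey {
    // The documented key, as the constant should read after the fix.
    public static final String DFS_NFS_FILE_DUMP_DIR_KEY = "nfs.dump.dir";

    public static void main(String[] args) {
        // Map the old, undocumented key onto the documented one so that
        // existing hdfs-site.xml files keep working.
        Configuration.addDeprecation("nfs.file.dump.dir", "nfs.dump.dir");
        Configuration conf = new Configuration();
        System.out.println(
                conf.get(DFS_NFS_FILE_DUMP_DIR_KEY, "/tmp/.hdfs-nfs"));
    }
}
{code}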



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8245) Standby namenode doesn't process DELETED_BLOCK if the add block request is in edit log.

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536432#comment-14536432
 ] 

Hudson commented on HDFS-8245:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #922 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/922/])
HDFS-8245. Standby namenode doesn't process DELETED_BLOCK if the addblock 
request is in edit log. Contributed by Rushabh S Shah. (kihwal: rev 
2d4ae3d18bc530fa9f81ee616db8af3395705fb9)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockReplacement.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencing.java


> Standby namenode doesn't process DELETED_BLOCK if the add block request is in 
> edit log.
> ---
>
> Key: HDFS-8245
> URL: https://issues.apache.org/jira/browse/HDFS-8245
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>  Labels: BB2015-05-TBR
> Fix For: 2.7.1
>
> Attachments: HDFS-8245-1.patch, HDFS-8245.patch
>
>
> The following series of events happened on the Standby namenode:
> 2015-04-09 07:47:21,735 \[Edit log tailer] INFO ha.EditLogTailer: Triggering 
> log roll on remote NameNode Active Namenode (ANN)
> 2015-04-09 07:58:01,858 \[Edit log tailer] INFO ha.EditLogTailer: Triggering 
> log roll on remote NameNode ANN
> The following series of events happened on the Active Namenode:
> 2015-04-09 07:47:21,747 \[IPC Server handler 99 on 8020] INFO 
> namenode.FSNamesystem: Roll Edit Log from Standby NN (SNN)
> 2015-04-09 07:58:01,868 \[IPC Server handler 18 on 8020] INFO 
> namenode.FSNamesystem: Roll Edit Log from SNN
> The following series of events happened on datanode ( {color:red} datanodeA 
> {color}):
> 2015-04-09 07:52:15,817 \[DataXceiver for client 
> DFSClient_attempt_1428022041757_102831_r_000107_0_1139131345_1 at /:51078 
> \[Receiving block 
> BP-595383232--1360869396230:blk_1570321882_1102029183867]] INFO 
> datanode.DataNode: Receiving 
> BP-595383232--1360869396230:blk_1570321882_1102029183867 src: 
> /client:51078 dest: /{color:red}datanodeA:1004{color}
> 2015-04-09 07:52:15,969 \[PacketResponder: 
> BP-595383232--1360869396230:blk_1570321882_1102029183867, 
> type=HAS_DOWNSTREAM_IN_PIPELINE] INFO DataNode.clienttrace: src: 
> /client:51078, dest: /{color:red}datanodeA:1004{color}, bytes: 20, op: 
> HDFS_WRITE, cliID: 
> DFSClient_attempt_1428022041757_102831_r_000107_0_1139131345_1, offset: 0, 
> srvID: 356a8a98-826f-446d-8f4c-ce288c1f0a75, blockid: 
> BP-595383232--1360869396230:blk_1570321882_1102029183867, duration: 
> 148948385
> 2015-04-09 07:52:15,969 \[PacketResponder: 
> BP-595383232--1360869396230:blk_1570321882_1102029183867, 
> type=HAS_DOWNSTREAM_IN_PIPELINE] INFO datanode.DataNode: PacketResponder: 
> BP-595383232--1360869396230:blk_1570321882_1102029183867, 
> type=HAS_DOWNSTREAM_IN_PIPELINE terminating
> 2015-04-09 07:52:25,970 \[DataXceiver for client /{color:red} 
> {color}:52827 \[Copying block 
> BP-595383232--1360869396230:blk_1570321882_1102029183867]] INFO 
> datanode.DataNode: Copied 
> BP-595383232--1360869396230:blk_1570321882_1102029183867 to 
> <{color:red}datanodeB{color}>:52827
> 2015-04-09 07:52:28,187 \[DataNode:   heartbeating to ANN:8020] INFO 
> impl.FsDatasetAsyncDiskService: Scheduling blk_1570321882_1102029183867 file 
> /blk_1570321882 for deletion
> 2015-04-09 07:52:28,188 \[Async disk worker #1482 for volume ] INFO 
> impl.FsDatasetAsyncDiskService: Deleted BP-595383232--1360869396230 
> blk_1570321882_1102029183867 file /blk_1570321882
> Then we failed over for an upgrade, and the standby became active.
> When we ran the ls command on this file, we got the following exception:
> 15/04/09 22:07:39 WARN hdfs.BlockReaderFactory: I/O error constructing remote 
> block reader.
> java.io.IOException: Got error for OP_READ_BLOCK, self=/client:32947, 
> remote={color:red}datanodeA:1004{color}, for file , for pool 
> BP-595383232--1360869396230 block 1570321882_1102029183867
> at 
> org.apache.hadoop.hdfs.RemoteBlockReader2.checkSuccess(RemoteBlockReader2.java:445)
> at 
> org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:410)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:815)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:693)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.build(Block

[jira] [Commented] (HDFS-6285) tidy an error log inside BlockReceiver

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536450#comment-14536450
 ] 

Hudson commented on HDFS-6285:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #922 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/922/])
HDFS-6285. tidy an error log inside BlockReceiver. Contributed by Liang Xie. 
(umamahesh: rev 7b1ea9c481fb8c13fc7b64eb1894d96ddfbf4b5b)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> tidy an error log inside BlockReceiver
> --
>
> Key: HDFS-6285
> URL: https://issues.apache.org/jira/browse/HDFS-6285
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Liang Xie
>Assignee: Liang Xie
>Priority: Minor
>  Labels: BB2015-05-RFC
> Attachments: HDFS-6285.txt
>
>
> From this log from our production cluster:
> 2014-04-22,10:39:05,476 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> IOException in BlockReceiver constructor. Cause is 
> After reading the code, I learned the cause was null, which means there was 
> no disk error, but the log above looked fragmentary. Attached is a minor 
> change to tidy it.
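The tidy-up amounts to appending the cause only when there is one; a toy
sketch of the pattern (not the exact committed change):

{code}
public class TidyLog {
    // Build the warning message, omitting the dangling ". Cause is "
    // suffix when there was no disk-check failure at all.
    static String constructorWarning(Throwable cause) {
        StringBuilder sb =
                new StringBuilder("IOException in BlockReceiver constructor");
        if (cause != null) {
            sb.append(". Cause is ").append(cause);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(constructorWarning(null));
        System.out.println(
                constructorWarning(new java.io.IOException("disk error")));
    }
}
{code}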



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8113) Add check for null BlockCollection pointers in BlockInfoContiguous structures

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536421#comment-14536421
 ] 

Hudson commented on HDFS-8113:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #191 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/191/])
HDFS-8113. Add check for null BlockCollection pointers in BlockInfoContiguous 
structures (Chengbing Liu via Colin P. McCabe) (cmccabe: rev 
f523e963e4d88e4e459352387c6efeab59e7a809)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockInfo.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java


> Add check for null BlockCollection pointers in BlockInfoContiguous structures
> -
>
> Key: HDFS-8113
> URL: https://issues.apache.org/jira/browse/HDFS-8113
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Chengbing Liu
>Assignee: Chengbing Liu
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-8113.02.patch, HDFS-8113.patch
>
>
> The following copy constructor can throw NullPointerException if {{bc}} is 
> null.
> {code}
>   protected BlockInfoContiguous(BlockInfoContiguous from) {
> this(from, from.bc.getBlockReplication());
> this.bc = from.bc;
>   }
> {code}
> We have observed that some DataNodes keep failing to complete block reports 
> to the NameNode. The stacktrace is as follows. Though we are not using the 
> latest version, the problem still exists.
> {quote}
> 2015-03-08 19:28:13,442 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> RemoteException in offerService
> org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> java.lang.NullPointerException
> at org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.(BlockInfo.java:80)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockToMarkCorrupt.(BlockManager.java:1696)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.checkReplicaCorrupt(BlockManager.java:2185)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReportedBlock(BlockManager.java:2047)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.reportDiff(BlockManager.java:1950)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:1823)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:1750)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.blockReport(NameNodeRpcServer.java:1069)
> at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.blockReport(DatanodeProtocolServerSideTranslatorPB.java:152)
> at 
> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:26382)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1623)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8245) Standby namenode doesn't process DELETED_BLOCK if the add block request is in edit log.

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536383#comment-14536383
 ] 

Hudson commented on HDFS-8245:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #191 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/191/])
HDFS-8245. Standby namenode doesn't process DELETED_BLOCK if the addblock 
request is in edit log. Contributed by Rushabh S Shah. (kihwal: rev 
2d4ae3d18bc530fa9f81ee616db8af3395705fb9)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockReplacement.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencing.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java


> Standby namenode doesn't process DELETED_BLOCK if the add block request is in 
> edit log.
> ---
>
> Key: HDFS-8245
> URL: https://issues.apache.org/jira/browse/HDFS-8245
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>  Labels: BB2015-05-TBR
> Fix For: 2.7.1
>
> Attachments: HDFS-8245-1.patch, HDFS-8245.patch
>
>
> The following series of events happened on the Standby namenode:
> 2015-04-09 07:47:21,735 \[Edit log tailer] INFO ha.EditLogTailer: Triggering 
> log roll on remote NameNode Active Namenode (ANN)
> 2015-04-09 07:58:01,858 \[Edit log tailer] INFO ha.EditLogTailer: Triggering 
> log roll on remote NameNode ANN
> The following series of events happened on the Active Namenode:
> 2015-04-09 07:47:21,747 \[IPC Server handler 99 on 8020] INFO 
> namenode.FSNamesystem: Roll Edit Log from Standby NN (SNN)
> 2015-04-09 07:58:01,868 \[IPC Server handler 18 on 8020] INFO 
> namenode.FSNamesystem: Roll Edit Log from SNN
> The following series of events happened on datanode ( {color:red} datanodeA 
> {color}):
> 2015-04-09 07:52:15,817 \[DataXceiver for client 
> DFSClient_attempt_1428022041757_102831_r_000107_0_1139131345_1 at /:51078 
> \[Receiving block 
> BP-595383232--1360869396230:blk_1570321882_1102029183867]] INFO 
> datanode.DataNode: Receiving 
> BP-595383232--1360869396230:blk_1570321882_1102029183867 src: 
> /client:51078 dest: /{color:red}datanodeA:1004{color}
> 2015-04-09 07:52:15,969 \[PacketResponder: 
> BP-595383232--1360869396230:blk_1570321882_1102029183867, 
> type=HAS_DOWNSTREAM_IN_PIPELINE] INFO DataNode.clienttrace: src: 
> /client:51078, dest: /{color:red}datanodeA:1004{color}, bytes: 20, op: 
> HDFS_WRITE, cliID: 
> DFSClient_attempt_1428022041757_102831_r_000107_0_1139131345_1, offset: 0, 
> srvID: 356a8a98-826f-446d-8f4c-ce288c1f0a75, blockid: 
> BP-595383232--1360869396230:blk_1570321882_1102029183867, duration: 
> 148948385
> 2015-04-09 07:52:15,969 \[PacketResponder: 
> BP-595383232--1360869396230:blk_1570321882_1102029183867, 
> type=HAS_DOWNSTREAM_IN_PIPELINE] INFO datanode.DataNode: PacketResponder: 
> BP-595383232--1360869396230:blk_1570321882_1102029183867, 
> type=HAS_DOWNSTREAM_IN_PIPELINE terminating
> 2015-04-09 07:52:25,970 \[DataXceiver for client /{color:red} 
> {color}:52827 \[Copying block 
> BP-595383232--1360869396230:blk_1570321882_1102029183867]] INFO 
> datanode.DataNode: Copied 
> BP-595383232--1360869396230:blk_1570321882_1102029183867 to 
> <{color:red}datanodeB{color}>:52827
> 2015-04-09 07:52:28,187 \[DataNode:   heartbeating to ANN:8020] INFO 
> impl.FsDatasetAsyncDiskService: Scheduling blk_1570321882_1102029183867 file 
> /blk_1570321882 for deletion
> 2015-04-09 07:52:28,188 \[Async disk worker #1482 for volume ] INFO 
> impl.FsDatasetAsyncDiskService: Deleted BP-595383232--1360869396230 
> blk_1570321882_1102029183867 file /blk_1570321882
> Then we failed over for an upgrade, and the standby became active.
> When we ran the ls command on this file, we got the following exception:
> 15/04/09 22:07:39 WARN hdfs.BlockReaderFactory: I/O error constructing remote 
> block reader.
> java.io.IOException: Got error for OP_READ_BLOCK, self=/client:32947, 
> remote={color:red}datanodeA:1004{color}, for file , for pool 
> BP-595383232--1360869396230 block 1570321882_1102029183867
> at 
> org.apache.hadoop.hdfs.RemoteBlockReader2.checkSuccess(RemoteBlockReader2.java:445)
> at 
> org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:410)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:815)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:693)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory

[jira] [Commented] (HDFS-7433) Optimize performance of DatanodeManager's node map

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536385#comment-14536385
 ] 

Hudson commented on HDFS-7433:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #191 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/191/])
HDFS-7433. Optimize performance of DatanodeManager's node map. Contributed by 
Daryn Sharp. (kihwal: rev 7a7960be41c32f20ffec9fea811878b113da62db)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Optimize performance of DatanodeManager's node map
> --
>
> Key: HDFS-7433
> URL: https://issues.apache.org/jira/browse/HDFS-7433
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HDFS-7433.patch, HDFS-7433.patch, HDFS-7433.patch, 
> HDFS-7433.patch
>
>
> The datanode map is currently a {{TreeMap}}.  For many thousands of 
> datanodes, tree lookups are ~10X more expensive than a {{HashMap}}.  
> Insertions and removals are up to 100X more expensive.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7559) Create unit test to automatically compare HDFS related classes and hdfs-default.xml

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536413#comment-14536413
 ] 

Hudson commented on HDFS-7559:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #191 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/191/])
HDFS-7559. Create unit test to automatically compare HDFS related classes and 
hdfs-default.xml. (Ray Chiang via asuresh) (Arun Suresh: rev 
3cefc02af73faa12a6edce904b98ba543167bec5)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestHdfsConfigFields.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Create unit test to automatically compare HDFS related classes and 
> hdfs-default.xml
> ---
>
> Key: HDFS-7559
> URL: https://issues.apache.org/jira/browse/HDFS-7559
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Minor
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HDFS-7559.001.patch, HDFS-7559.002.patch, 
> HDFS-7559.003.patch, HDFS-7559.004.patch
>
>
> Create a unit test that will automatically compare the fields in the various 
> HDFS related classes and hdfs-default.xml. It should throw an error if a 
> property is missing in either the class or the file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8340) Fix NFS documentation of nfs.wtmax

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536389#comment-14536389
 ] 

Hudson commented on HDFS-8340:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #191 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/191/])
HDFS-8340. Fix NFS documentation of nfs.wtmax. (Contributed by Ajith S) (arp: 
rev a2d40bced9f793c8c4193f1447425ca7f3f8f357)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsNfsGateway.md
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Fix NFS documentation of nfs.wtmax
> --
>
> Key: HDFS-8340
> URL: https://issues.apache.org/jira/browse/HDFS-8340
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, nfs
>Affects Versions: 2.7.0
>Reporter: Ajith S
>Assignee: Ajith S
>Priority: Minor
>  Labels: BB2015-05-RFC
> Fix For: 2.8.0
>
> Attachments: HDFS-8340.patch
>
>
> According to documentation
> http://hadoop.apache.org/docs/r2.7.0/hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html
> bq. For larger data transfer size, one needs to update “nfs.rtmax” and 
> “nfs.rtmax” in hdfs-site.xml.
> nfs.rtmax is mentioned twice; instead it should be “nfs.rtmax” and 
> “nfs.wtmax”.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8274) NFS configuration nfs.dump.dir not working

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536405#comment-14536405
 ] 

Hudson commented on HDFS-8274:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #191 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/191/])
HDFS-8274. NFS configuration nfs.dump.dir not working (Contributed by Ajith S) 
(arp: rev cd6b26cce7457d08346b9d90a5f2f333ba4202d8)
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/conf/NfsConfigKeys.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> NFS configuration nfs.dump.dir not working
> --
>
> Key: HDFS-8274
> URL: https://issues.apache.org/jira/browse/HDFS-8274
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.0
>Reporter: Ajith S
>Assignee: Ajith S
>  Labels: BB2015-05-RFC
> Fix For: 2.8.0
>
> Attachments: HDFS-8274.patch
>
>
> As per the document 
> http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html
> we can configure
> {quote} 
> nfs.dump.dir
> {quote}
> as the NFS file dump directory. But using this configuration in 
> *hdfs-site.xml* doesn't work; when the NFS gateway is started, the default 
> location /tmp/.hdfs-nfs is used.
> The reason is the key expected in *NfsConfigKeys.java*:
> {code}
> public static final String DFS_NFS_FILE_DUMP_DIR_KEY = "nfs.file.dump.dir";
> {code}
> We can change it to *nfs.dump.dir* instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8346) libwebhdfs build fails during link due to unresolved external symbols.

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536407#comment-14536407
 ] 

Hudson commented on HDFS-8346:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #191 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/191/])
HDFS-8346. libwebhdfs build fails during link due to unresolved external 
symbols. Contributed by Chris Nauroth. (wheat9: rev 
f4ebbc6afc1297dced54bd2bd671e587c4ceb2fc)
* hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_http_client.h
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_http_client.c
* hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/CMakeLists.txt


> libwebhdfs build fails during link due to unresolved external symbols.
> --
>
> Key: HDFS-8346
> URL: https://issues.apache.org/jira/browse/HDFS-8346
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.6.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 2.8.0
>
> Attachments: HDFS-8346.001.patch
>
>
> The libwebhdfs build is currently broken due to various unresolved external 
> symbols during link.  Multiple patches have introduced a few different forms 
> of this breakage.  See comments for full details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5640) Add snapshot methods to FileContext.

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536375#comment-14536375
 ] 

Hudson commented on HDFS-5640:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #191 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/191/])
HDFS-5640. Add snapshot methods to FileContext. Contributed by Rakesh R. 
(cnauroth: rev 26f61d41df9e90a5053d9265f535cc492196f2a5)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestFileContextSnapshot.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFs.java


> Add snapshot methods to FileContext.
> 
>
> Key: HDFS-5640
> URL: https://issues.apache.org/jira/browse/HDFS-5640
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, snapshots
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Chris Nauroth
>Assignee: Rakesh R
> Fix For: 2.8.0
>
> Attachments: HDFS-5640-001.patch, HDFS-5640-002.patch, 
> HDFS-5640-003.patch, HDFS-5640-004.patch, HDFS-5640-005.patch, 
> HDFS-5640-007.patch, HDFS-5640-007.patch
>
>
> Currently, methods related to HDFS snapshots are defined on {{FileSystem}}.  
> For feature parity, these methods need to be added to {{FileContext}}.  This 
> would also require updating {{AbstractFileSystem}} and the {{Hdfs}} subclass.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8284) Update documentation about how to use HTrace with HDFS

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536387#comment-14536387
 ] 

Hudson commented on HDFS-8284:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #191 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/191/])
HDFS-8284. Update documentation about how to use HTrace with HDFS (Masatake 
Iwasaki via Colin P. McCabe) (cmccabe: rev 
8f7c2364d7254a1d987b095ba442bf20727796f8)
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-common-project/hadoop-common/src/site/markdown/Tracing.md


> Update documentation about how to use HTrace with HDFS
> --
>
> Key: HDFS-8284
> URL: https://issues.apache.org/jira/browse/HDFS-8284
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0
>
> Attachments: HDFS-8284.001.patch, HDFS-8284.002.patch, 
> HDFS-8284.003.patch
>
>
> Tracing originating in the DFSClient uses configuration keys prefixed with 
> "dfs.client.htrace" after HDFS-8213. Server side tracing uses conf keys 
> prefixed with "dfs.htrace".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8311) DataStreamer.transfer() should timeout the socket InputStream.

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536409#comment-14536409
 ] 

Hudson commented on HDFS-8311:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #191 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/191/])
HDFS-8311. DataStreamer.transfer() should timeout the socket InputStream. 
(Esteban Gutierrez via Yongjun Zhang) (yzhang: rev 
730f9930a48259f34e48404aee51e8d641cc3d36)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java


> DataStreamer.transfer() should timeout the socket InputStream.
> --
>
> Key: HDFS-8311
> URL: https://issues.apache.org/jira/browse/HDFS-8311
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Esteban Gutierrez
>Assignee: Esteban Gutierrez
> Fix For: 2.8.0
>
> Attachments: 
> 0001-HDFS-8311-DataStreamer.transfer-should-timeout-the-s.patch, 
> HDFS-8311.001.patch
>
>
> While validating some HA failure modes we found that HDFS clients can take a 
> long time to recover, or sometimes don't recover at all, since we don't set 
> up the socket timeout on the InputStream:
> {code}
> private void transfer () { ...
> ...
>  OutputStream unbufOut = NetUtils.getOutputStream(sock, writeTimeout);
>  InputStream unbufIn = NetUtils.getInputStream(sock);
> ...
> }
> {code}
> The InputStream should have its own timeout in the same way as the 
> OutputStream.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

