[jira] [Created] (HDFS-5574) Remove buffer copy in BlockReader.skip

2013-11-27 Thread Binglin Chang (JIRA)
Binglin Chang created HDFS-5574:
---

 Summary: Remove buffer copy in BlockReader.skip
 Key: HDFS-5574
 URL: https://issues.apache.org/jira/browse/HDFS-5574
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Trivial






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5574) Remove buffer copy in BlockReader.skip

2013-11-27 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HDFS-5574:


Status: Patch Available  (was: Open)

 Remove buffer copy in BlockReader.skip
 --

 Key: HDFS-5574
 URL: https://issues.apache.org/jira/browse/HDFS-5574
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Trivial

 BlockReaderLocal.skip and RemoteBlockReader.skip read the skipped data 
 into a temporary buffer; this copy is unnecessary. 
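
For context, a skip that merely advances the reader's offset, instead of 
reading into a throwaway buffer, could look roughly like the sketch below. 
This is a minimal illustration against a generic file channel; the field 
names (dataIn, pos) are assumptions, not the actual BlockReaderLocal members.

{code}
// Sketch: skip by moving the offset in the underlying channel instead
// of reading into a temporary buffer. Field names are hypothetical.
import java.io.IOException;
import java.nio.channels.FileChannel;

class LocalReaderSketch {
  private final FileChannel dataIn;   // assumed positional data channel
  private long pos;                   // current offset within the block

  LocalReaderSketch(FileChannel dataIn, long pos) {
    this.dataIn = dataIn;
    this.pos = pos;
  }

  /** Skip n bytes by advancing the offset; no data is copied. */
  long skip(long n) throws IOException {
    long remaining = dataIn.size() - pos;
    long skipped = Math.min(n, remaining);
    pos += skipped;                   // later reads start at the new offset
    return skipped;
  }
}
{code}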



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5574) Remove buffer copy in BlockReader.skip

2013-11-27 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HDFS-5574:


Description: BlockReaderLocal.skip and RemoteBlockReader.skip read the 
skipped data into a temporary buffer; this copy is unnecessary. 

 Remove buffer copy in BlockReader.skip
 --

 Key: HDFS-5574
 URL: https://issues.apache.org/jira/browse/HDFS-5574
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Trivial

 BlockReaderLocal.skip and RemoteBlockReader.skip read the skipped data 
 into a temporary buffer; this copy is unnecessary. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5574) Remove buffer copy in BlockReader.skip

2013-11-27 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HDFS-5574:


Attachment: HDFS-5574.v1.patch

Changes:
Refactor some code in BlockReaderLocal.skip and RemoteBlockReader2.skip
Add a test for DFSInputStream.skip (a sketch of such a test follows)
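
As a rough illustration of what such a test could look like (not the patch 
itself; the path, sizes, and seed below are made up), a skip check against 
MiniDFSCluster:

{code}
// Hedged sketch of a DFSInputStream.skip test; illustrative values only.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DFSTestUtil;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class TestSkipSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
    try {
      FileSystem fs = cluster.getFileSystem();
      Path p = new Path("/skipTest");
      DFSTestUtil.createFile(fs, p, 1024, (short) 1, 0xBEEFL);
      FSDataInputStream in = fs.open(p);
      long skipped = in.skip(512);        // should not copy any data
      if (skipped != 512 || in.getPos() != 512) {
        throw new AssertionError("skip did not advance as expected");
      }
      in.close();
    } finally {
      cluster.shutdown();
    }
  }
}
{code}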

 Remove buffer copy in BlockReader.skip
 --

 Key: HDFS-5574
 URL: https://issues.apache.org/jira/browse/HDFS-5574
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Trivial
 Attachments: HDFS-5574.v1.patch


 BlockReaderLocal.skip and RemoteBlockReader.skip read the skipped data 
 into a temporary buffer; this copy is unnecessary. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5575) RawLocalFS::LocalFSFileInputStream.pread does not track FS::Statistics

2013-11-27 Thread Binglin Chang (JIRA)
Binglin Chang created HDFS-5575:
---

 Summary: RawLocalFS::LocalFSFileInputStream.pread does not track 
FS::Statistics
 Key: HDFS-5575
 URL: https://issues.apache.org/jira/browse/HDFS-5575
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Minor


RawLocalFS::LocalFSFileInputStream.pread does not track FS::Statistics
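
For illustration, the missing bookkeeping amounts to reporting bytes read to 
FileSystem.Statistics from the positional read path. A minimal sketch with a 
hypothetical wrapper class (the real LocalFSFileInputStream differs):

{code}
// Sketch of a positional read that updates FileSystem.Statistics.
// The surrounding class and field names are hypothetical.
import java.io.IOException;
import java.io.RandomAccessFile;
import org.apache.hadoop.fs.FileSystem;

class PreadStatsSketch {
  private final RandomAccessFile file;
  private final FileSystem.Statistics statistics;

  PreadStatsSketch(RandomAccessFile file, FileSystem.Statistics statistics) {
    this.file = file;
    this.statistics = statistics;
  }

  int pread(long position, byte[] buf, int off, int len) throws IOException {
    file.seek(position);
    int n = file.read(buf, off, len);
    if (n > 0 && statistics != null) {
      statistics.incrementBytesRead(n);  // the call pread was missing
    }
    return n;
  }
}
{code}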



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HDFS-5575) RawLocalFS::LocalFSFileInputStream.pread does not track FS::Statistics

2013-11-27 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang resolved HDFS-5575.
-

Resolution: Duplicate

Duplicate of HADOOP-10130

 RawLocalFS::LocalFSFileInputStream.pread does not track FS::Statistics
 --

 Key: HDFS-5575
 URL: https://issues.apache.org/jira/browse/HDFS-5575
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Minor

 RawLocalFS::LocalFSFileInputStream.pread does not track FS::Statistics



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5568) Support inclusion of snapshot paths in Namenode fsck

2013-11-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13833607#comment-13833607
 ] 

Hadoop QA commented on HDFS-5568:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12616007/HDFS-5568.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5588//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5588//console

This message is automatically generated.

 Support inclusion of snapshot paths in Namenode fsck
 

 Key: HDFS-5568
 URL: https://issues.apache.org/jira/browse/HDFS-5568
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: snapshots
Reporter: Vinay
Assignee: Vinay
 Attachments: HDFS-5568-1.patch, HDFS-5568.patch, HDFS-5568.patch, 
 HDFS-5568.patch, HDFS-5568.patch


 Fsck should also check snapshot paths for inconsistency.
 Currently fsck covers a snapshot path only if the given path explicitly 
 refers to one.
 We have seen safemode problems in our clusters caused by missing blocks 
 that were present only inside snapshots, yet hdfs fsck / reported HEALTHY.
 So supporting snapshot paths during fsck (either by default or on demand) 
 would be helpful in these cases, instead of specifying each and every 
 snapshottable directory.
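
For reference, such an option can be driven programmatically as well as from 
the shell; a minimal sketch using ToolRunner, where the flag spelling 
-includeSnapshots follows the commit message later in this thread and is 
assumed here:

{code}
// Sketch of invoking fsck with the snapshot option; the flag name is
// an assumption taken from the HDFS-5568 commit message.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.tools.DFSck;
import org.apache.hadoop.util.ToolRunner;

public class FsckSnapshotsSketch {
  public static void main(String[] args) throws Exception {
    int rc = ToolRunner.run(new DFSck(new Configuration()),
        new String[] {"/", "-includeSnapshots"});
    System.exit(rc);
  }
}
{code}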



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-3405) Checkpointing should use HTTP POST or PUT instead of GET-GET to send merged fsimages

2013-11-27 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HDFS-3405:


Attachment: HDFS-3405.patch

 Checkpointing should use HTTP POST or PUT instead of GET-GET to send merged 
 fsimages
 

 Key: HDFS-3405
 URL: https://issues.apache.org/jira/browse/HDFS-3405
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 1.0.0, 3.0.0, 2.0.5-alpha
Reporter: Aaron T. Myers
Assignee: Vinay
 Attachments: HDFS-3405.patch, HDFS-3405.patch, HDFS-3405.patch, 
 HDFS-3405.patch, HDFS-3405.patch, HDFS-3405.patch


 As Todd points out in [this 
 comment|https://issues.apache.org/jira/browse/HDFS-3404?focusedCommentId=13272986&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13272986],
  the current scheme for a checkpointing daemon to upload a merged fsimage 
 file to an NN is to issue an HTTP GET request that tells the target NN to 
 issue another GET request back to the checkpointing daemon to retrieve the 
 merged fsimage file. There's no fundamental reason the checkpointing daemon 
 can't just use an HTTP POST or PUT to send the merged fsimage file, rather 
 than the double-GET scheme.
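
A minimal sketch of the proposed direction, assuming plain HttpURLConnection 
with placeholder URL handling and no security or retry logic (the actual 
transfer code would live alongside TransferFsImage):

{code}
// Sketch: the checkpointer PUTs the merged fsimage to the NN over HTTP
// instead of triggering a second GET. URL and servlet path are placeholders.
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ImagePutSketch {
  public static void upload(String nnUrl, String imagePath) throws IOException {
    HttpURLConnection conn =
        (HttpURLConnection) new URL(nnUrl).openConnection();
    conn.setRequestMethod("PUT");
    conn.setDoOutput(true);
    conn.setChunkedStreamingMode(64 * 1024);  // stream, don't buffer in memory
    try (InputStream in = new FileInputStream(imagePath);
         OutputStream out = conn.getOutputStream()) {
      byte[] buf = new byte[64 * 1024];
      int n;
      while ((n = in.read(buf)) > 0) {
        out.write(buf, 0, n);
      }
    }
    if (conn.getResponseCode() != HttpURLConnection.HTTP_OK) {
      throw new IOException("image upload failed: " + conn.getResponseCode());
    }
  }
}
{code}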



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5568) Support inclusion of snapshot paths in Namenode fsck

2013-11-27 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13833643#comment-13833643
 ] 

Uma Maheswara Rao G commented on HDFS-5568:
---

+1 on the latest patch

 Support inclusion of snapshot paths in Namenode fsck
 

 Key: HDFS-5568
 URL: https://issues.apache.org/jira/browse/HDFS-5568
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: snapshots
Reporter: Vinay
Assignee: Vinay
 Attachments: HDFS-5568-1.patch, HDFS-5568.patch, HDFS-5568.patch, 
 HDFS-5568.patch, HDFS-5568.patch


 Fsck should also check snapshot paths for inconsistency.
 Currently fsck covers a snapshot path only if the given path explicitly 
 refers to one.
 We have seen safemode problems in our clusters caused by missing blocks 
 that were present only inside snapshots, yet hdfs fsck / reported HEALTHY.
 So supporting snapshot paths during fsck (either by default or on demand) 
 would be helpful in these cases, instead of specifying each and every 
 snapshottable directory.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5561) FSNameSystem#getNameJournalStatus() in JMX should return plain text instead of HTML

2013-11-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13833652#comment-13833652
 ] 

Hudson commented on HDFS-5561:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #404 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/404/])
HDFS-5561. FSNameSystem#getNameJournalStatus() in JMX should return plain text 
instead of HTML. Contributed by Haohui Mai. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1545791)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/AsyncLogger.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/AsyncLoggerSet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/IPCLoggerChannel.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/QuorumOutputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogOutputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeJspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestIPCLoggerChannel.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQuorumJournalManagerUnit.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLogFileOutputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeMXBean.java


 FSNameSystem#getNameJournalStatus() in JMX should return plain text instead 
 of HTML
 ---

 Key: HDFS-5561
 URL: https://issues.apache.org/jira/browse/HDFS-5561
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0, 2.2.0
Reporter: Fengdong Yu
Assignee: Haohui Mai
Priority: Minor
 Fix For: 2.3.0

 Attachments: HDFS-5561.000.patch, NNUI.PNG


 Currently FSNameSystem#getNameJournalStatus() returns the status of the 
 quorum stream as an HTML string. This should not happen, since 
 getNameJournalStatus() is a JMX call. It will confuse downstream clients 
 (e.g., the web UI) and lead to incorrect results.
 This jira proposes to change the information to plain text.
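
To make the intent concrete, a toy sketch of a JMX getter that composes 
plain key/value text rather than HTML markup; the names and output shape 
here are invented and are not the actual fix:

{code}
// Illustrative only: status text with no <table>/<b> tags embedded.
import java.util.Map;
import java.util.TreeMap;

class JournalStatusSketch {
  public String getNameJournalStatus() {
    Map<String, String> streams = new TreeMap<String, String>();
    streams.put("qjournal://jn1:8485", "open for write");  // sample entry
    StringBuilder sb = new StringBuilder();
    for (Map.Entry<String, String> e : streams.entrySet()) {
      // plain "key: value" lines that any JMX client can consume
      sb.append(e.getKey()).append(": ").append(e.getValue()).append('\n');
    }
    return sb.toString();
  }
}
{code}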



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5565) CacheAdmin help should match against non-dashed commands

2013-11-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13833650#comment-13833650
 ] 

Hudson commented on HDFS-5565:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #404 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/404/])
HDFS-5565. CacheAdmin help should match against non-dashed commands (wang via 
cmccabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1545850)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testCacheAdminConf.xml


 CacheAdmin help should match against non-dashed commands
 

 Key: HDFS-5565
 URL: https://issues.apache.org/jira/browse/HDFS-5565
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: caching
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor
  Labels: caching, newbie
 Fix For: 3.0.0

 Attachments: hdfs-5565-1.patch


 Using the shell, `hdfs dfsadmin -help refreshNamespace` returns help text, 
 but for cacheadmin you have to specify `hdfs cacheadmin -help -addDirective`, 
 with a dash before the command name. This is inconsistent with dfsadmin, dfs, 
 and haadmin, all of which error out when you provide a dash.
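
The fix the report suggests is essentially a normalization step when matching 
help arguments; a toy sketch (hypothetical helper, not the actual CacheAdmin 
code):

{code}
// Sketch: accept the help argument with or without a leading dash.
class HelpMatchSketch {
  static String normalize(String cmd) {
    return cmd.startsWith("-") ? cmd.substring(1) : cmd;
  }

  public static void main(String[] args) {
    // both forms resolve to the same command name
    System.out.println(normalize("-addDirective")); // addDirective
    System.out.println(normalize("addDirective"));  // addDirective
  }
}
{code}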



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5548) Use ConcurrentHashMap in portmap

2013-11-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13833651#comment-13833651
 ] 

Hudson commented on HDFS-5548:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #404 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/404/])
HDFS-5548. Use ConcurrentHashMap in portmap. Contributed by Haohui Mai 
(brandonli: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1545756)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/portmap/PortmapInterface.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/portmap/PortmapRequest.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/portmap/RpcProgramPortmap.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/portmap/TestPortmap.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Use ConcurrentHashMap in portmap
 

 Key: HDFS-5548
 URL: https://issues.apache.org/jira/browse/HDFS-5548
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 2.2.1

 Attachments: HDFS-5548.000.patch, HDFS-5548.001.patch, 
 HDFS-5548.002.patch, HDFS-5548.003.patch


 Portmap uses a HashMap to store the port mapping and synchronizes access 
 to the map by locking itself. This can be simplified by using a 
 ConcurrentHashMap.
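
A minimal sketch of the simplification, with an illustrative mapping (the 
real portmap keys and registration logic are more involved):

{code}
// Sketch: ConcurrentHashMap replaces a HashMap guarded by synchronized
// methods; individual get/put/remove calls need no external locking.
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class PortmapSketch {
  private final ConcurrentMap<String, Integer> map =
      new ConcurrentHashMap<String, Integer>();

  void set(String prog, int port) { map.put(prog, port); }
  Integer get(String prog)        { return map.get(prog); }
  void unset(String prog)         { map.remove(prog); }
}
{code}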



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5286) Flatten INodeDirectory hierarchy: add DirectoryWithQuotaFeature

2013-11-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13833649#comment-13833649
 ] 

Hudson commented on HDFS-5286:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #404 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/404/])
HDFS-5286. Flatten INodeDirectory hierarchy: Replace INodeDirectoryWithQuota 
with DirectoryWithQuotaFeature. (szetszwo: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1545768)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/DirectoryWithQuotaFeature.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectoryWithQuota.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectoryWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/Snapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsLimits.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDeletion.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/util/TestDiff.java


 Flatten INodeDirectory hierarchy: add DirectoryWithQuotaFeature
 ---

 Key: HDFS-5286
 URL: https://issues.apache.org/jira/browse/HDFS-5286
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Fix For: 3.0.0

 Attachments: h5286_20131122.patch, h5286_20131125.patch, 
 h5286_20131125b.patch


 Similar to the case of INodeFile (HDFS-5285), we should also flatten the 
 INodeDirectory hierarchy.
 This is the first step: add DirectoryWithQuotaFeature to replace 
 INodeDirectoryWithQuota.
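
The feature pattern being introduced can be sketched as follows; all names 
here are simplified stand-ins for the real INodeDirectory and 
DirectoryWithQuotaFeature code:

{code}
// Sketch: quota handling becomes an optional feature object attached to
// the directory inode, replacing the INodeDirectoryWithQuota subclass.
class DirSketch {
  static class QuotaFeature {
    long nsQuota, dsQuota;
    QuotaFeature(long ns, long ds) { nsQuota = ns; dsQuota = ds; }
  }

  private QuotaFeature quota;  // null when the directory has no quota

  void addQuotaFeature(long ns, long ds) { quota = new QuotaFeature(ns, ds); }
  boolean isWithQuota() { return quota != null; }
  QuotaFeature getQuotaFeature() { return quota; }
}
{code}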



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5568) Support inclusion of snapshot paths in Namenode fsck

2013-11-27 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13833656#comment-13833656
 ] 

Uma Maheswara Rao G commented on HDFS-5568:
---

I have just committed this to trunk, branch-2 and 2.2. Thanks a lot, Vinay, 
for the patch, and thanks to Jing and Sathish for the review.

 Support inclusion of snapshot paths in Namenode fsck
 

 Key: HDFS-5568
 URL: https://issues.apache.org/jira/browse/HDFS-5568
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: snapshots
Reporter: Vinay
Assignee: Vinay
 Attachments: HDFS-5568-1.patch, HDFS-5568.patch, HDFS-5568.patch, 
 HDFS-5568.patch, HDFS-5568.patch


 Fsck should also check snapshot paths for inconsistency.
 Currently fsck covers a snapshot path only if the given path explicitly 
 refers to one.
 We have seen safemode problems in our clusters caused by missing blocks 
 that were present only inside snapshots, yet hdfs fsck / reported HEALTHY.
 So supporting snapshot paths during fsck (either by default or on demand) 
 would be helpful in these cases, instead of specifying each and every 
 snapshottable directory.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5568) Support inclusion of snapshot paths in Namenode fsck

2013-11-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13833661#comment-13833661
 ] 

Hudson commented on HDFS-5568:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4798 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4798/])
HDFS-5568. Support includeSnapshots option with Fsck command. Contributed by 
Vinay (umamahesh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1545987)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java


 Support inclusion of snapshot paths in Namenode fsck
 

 Key: HDFS-5568
 URL: https://issues.apache.org/jira/browse/HDFS-5568
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: snapshots
Reporter: Vinay
Assignee: Vinay
 Attachments: HDFS-5568-1.patch, HDFS-5568.patch, HDFS-5568.patch, 
 HDFS-5568.patch, HDFS-5568.patch


 Fsck should also check snapshot paths for inconsistency.
 Currently fsck covers a snapshot path only if the given path explicitly 
 refers to one.
 We have seen safemode problems in our clusters caused by missing blocks 
 that were present only inside snapshots, yet hdfs fsck / reported HEALTHY.
 So supporting snapshot paths during fsck (either by default or on demand) 
 would be helpful in these cases, instead of specifying each and every 
 snapshottable directory.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5574) Remove buffer copy in BlockReader.skip

2013-11-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13833668#comment-13833668
 ] 

Hadoop QA commented on HDFS-5574:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12616017/HDFS-5574.v1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5590//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5590//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5590//console

This message is automatically generated.

 Remove buffer copy in BlockReader.skip
 --

 Key: HDFS-5574
 URL: https://issues.apache.org/jira/browse/HDFS-5574
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Trivial
 Attachments: HDFS-5574.v1.patch


 BlockReaderLocal.skip and RemoteBlockReader.skip read the skipped data 
 into a temporary buffer; this copy is unnecessary. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5568) Support inclusion of snapshot paths in Namenode fsck

2013-11-27 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13833674#comment-13833674
 ] 

Vinay commented on HDFS-5568:
-

Thanks a lot, Uma, Jing and Sathish.

 Support inclusion of snapshot paths in Namenode fsck
 

 Key: HDFS-5568
 URL: https://issues.apache.org/jira/browse/HDFS-5568
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: snapshots
Reporter: Vinay
Assignee: Vinay
 Attachments: HDFS-5568-1.patch, HDFS-5568.patch, HDFS-5568.patch, 
 HDFS-5568.patch, HDFS-5568.patch


 Fsck should also check snapshot paths for inconsistency.
 Currently fsck covers a snapshot path only if the given path explicitly 
 refers to one.
 We have seen safemode problems in our clusters caused by missing blocks 
 that were present only inside snapshots, yet hdfs fsck / reported HEALTHY.
 So supporting snapshot paths during fsck (either by default or on demand) 
 would be helpful in these cases, instead of specifying each and every 
 snapshottable directory.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-2882) DN continues to start up, even if block pool fails to initialize

2013-11-27 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13833678#comment-13833678
 ] 

Vinay commented on HDFS-2882:
-

Hi [~tlipcon], as you started the work on this jira, could you take a look 
at the patch?
Thanks

 DN continues to start up, even if block pool fails to initialize
 

 Key: HDFS-2882
 URL: https://issues.apache.org/jira/browse/HDFS-2882
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.0.2-alpha
Reporter: Todd Lipcon
Assignee: Vinay
 Attachments: HDFS-2882.patch, HDFS-2882.patch, HDFS-2882.patch, 
 HDFS-2882.patch, HDFS-2882.patch, HDFS-2882.patch, hdfs-2882.txt


 I started a DN on a machine that was completely out of space on one of its 
 drives. I saw the following:
 2012-02-02 09:56:50,499 FATAL 
 org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for 
 block pool Block pool BP-448349972-172.29.5.192-1323816762969 (storage id 
 DS-507718931-172.29.5.194-11072-12978
 42002148) service to styx01.sf.cloudera.com/172.29.5.192:8021
 java.io.IOException: Mkdirs failed to create 
 /data/1/scratch/todd/styx-datadir/current/BP-448349972-172.29.5.192-1323816762969/tmp
 at 
 org.apache.hadoop.hdfs.server.datanode.FSDataset$BlockPoolSlice.<init>(FSDataset.java:335)
 but the DN continued to run, spewing NPEs when it tried to do block reports, 
 etc. This was on the HDFS-1623 branch but may affect trunk as well.
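
The desired behavior amounts to failing fast when block pool initialization 
throws; a rough sketch with placeholder method names (not the actual DataNode 
code):

{code}
// Sketch: abort the DataNode when a block pool fails to initialize,
// rather than continuing and throwing NPEs during block reports.
class BPInitSketch {
  void initBlockPool(String bpId) {
    try {
      doInit(bpId);                 // may throw, e.g. "Mkdirs failed"
    } catch (java.io.IOException e) {
      System.err.println("Initialization failed for block pool "
          + bpId + ": " + e);
      shutdownDataNode();           // fail fast instead of limping along
    }
  }

  void doInit(String bpId) throws java.io.IOException { /* ... */ }
  void shutdownDataNode() { /* ... */ }
}
{code}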



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-3405) Checkpointing should use HTTP POST or PUT instead of GET-GET to send merged fsimages

2013-11-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13833715#comment-13833715
 ] 

Hadoop QA commented on HDFS-3405:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12616020/HDFS-3405.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-client hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5591//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5591//console

This message is automatically generated.

 Checkpointing should use HTTP POST or PUT instead of GET-GET to send merged 
 fsimages
 

 Key: HDFS-3405
 URL: https://issues.apache.org/jira/browse/HDFS-3405
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 1.0.0, 3.0.0, 2.0.5-alpha
Reporter: Aaron T. Myers
Assignee: Vinay
 Attachments: HDFS-3405.patch, HDFS-3405.patch, HDFS-3405.patch, 
 HDFS-3405.patch, HDFS-3405.patch, HDFS-3405.patch


 As Todd points out in [this 
 comment|https://issues.apache.org/jira/browse/HDFS-3404?focusedCommentId=13272986&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13272986],
  the current scheme for a checkpointing daemon to upload a merged fsimage 
 file to an NN is to issue an HTTP GET request that tells the target NN to 
 issue another GET request back to the checkpointing daemon to retrieve the 
 merged fsimage file. There's no fundamental reason the checkpointing daemon 
 can't just use an HTTP POST or PUT to send the merged fsimage file, rather 
 than the double-GET scheme.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-3405) Checkpointing should use HTTP POST or PUT instead of GET-GET to send merged fsimages

2013-11-27 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13833753#comment-13833753
 ] 

Vinay commented on HDFS-3405:
-

Test failure is unrelated

 Checkpointing should use HTTP POST or PUT instead of GET-GET to send merged 
 fsimages
 

 Key: HDFS-3405
 URL: https://issues.apache.org/jira/browse/HDFS-3405
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 1.0.0, 3.0.0, 2.0.5-alpha
Reporter: Aaron T. Myers
Assignee: Vinay
 Attachments: HDFS-3405.patch, HDFS-3405.patch, HDFS-3405.patch, 
 HDFS-3405.patch, HDFS-3405.patch, HDFS-3405.patch


 As Todd points out in [this 
 comment|https://issues.apache.org/jira/browse/HDFS-3404?focusedCommentId=13272986&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13272986],
  the current scheme for a checkpointing daemon to upload a merged fsimage 
 file to an NN is to issue an HTTP GET request that tells the target NN to 
 issue another GET request back to the checkpointing daemon to retrieve the 
 merged fsimage file. There's no fundamental reason the checkpointing daemon 
 can't just use an HTTP POST or PUT to send the merged fsimage file, rather 
 than the double-GET scheme.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5561) FSNameSystem#getNameJournalStatus() in JMX should return plain text instead of HTML

2013-11-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13833772#comment-13833772
 ] 

Hudson commented on HDFS-5561:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1595 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1595/])
HDFS-5561. FSNameSystem#getNameJournalStatus() in JMX should return plain text 
instead of HTML. Contributed by Haohui Mai. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1545791)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/AsyncLogger.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/AsyncLoggerSet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/IPCLoggerChannel.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/QuorumOutputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogOutputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeJspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestIPCLoggerChannel.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQuorumJournalManagerUnit.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLogFileOutputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeMXBean.java


 FSNameSystem#getNameJournalStatus() in JMX should return plain text instead 
 of HTML
 ---

 Key: HDFS-5561
 URL: https://issues.apache.org/jira/browse/HDFS-5561
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0, 2.2.0
Reporter: Fengdong Yu
Assignee: Haohui Mai
Priority: Minor
 Fix For: 2.3.0

 Attachments: HDFS-5561.000.patch, NNUI.PNG


 Currently FSNameSystem#getNameJournalStatus() returns the status of the 
 quorum stream as an HTML string. This should not happen, since 
 getNameJournalStatus() is a JMX call. It will confuse downstream clients 
 (e.g., the web UI) and lead to incorrect results.
 This jira proposes to change the information to plain text.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5548) Use ConcurrentHashMap in portmap

2013-11-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13833771#comment-13833771
 ] 

Hudson commented on HDFS-5548:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1595 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1595/])
HDFS-5548. Use ConcurrentHashMap in portmap. Contributed by Haohui Mai 
(brandonli: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1545756)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/portmap/PortmapInterface.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/portmap/PortmapRequest.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/portmap/RpcProgramPortmap.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/portmap/TestPortmap.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Use ConcurrentHashMap in portmap
 

 Key: HDFS-5548
 URL: https://issues.apache.org/jira/browse/HDFS-5548
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 2.2.1

 Attachments: HDFS-5548.000.patch, HDFS-5548.001.patch, 
 HDFS-5548.002.patch, HDFS-5548.003.patch


 Portmap uses a HashMap to store the port mapping and synchronizes access 
 to the map by locking itself. This can be simplified by using a 
 ConcurrentHashMap.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5286) Flatten INodeDirectory hierarchy: add DirectoryWithQuotaFeature

2013-11-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13833768#comment-13833768
 ] 

Hudson commented on HDFS-5286:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1595 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1595/])
HDFS-5286. Flatten INodeDirectory hierarchy: Replace INodeDirectoryWithQuota 
with DirectoryWithQuotaFeature. (szetszwo: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1545768)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/DirectoryWithQuotaFeature.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectoryWithQuota.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectoryWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/Snapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsLimits.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDeletion.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/util/TestDiff.java


 Flatten INodeDirectory hierarchy: add DirectoryWithQuotaFeature
 ---

 Key: HDFS-5286
 URL: https://issues.apache.org/jira/browse/HDFS-5286
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Fix For: 3.0.0

 Attachments: h5286_20131122.patch, h5286_20131125.patch, 
 h5286_20131125b.patch


 Similar to the case of INodeFile (HDFS-5285), we should also flatten the 
 INodeDirectory hierarchy.
 This is the first step: add DirectoryWithQuotaFeature to replace 
 INodeDirectoryWithQuota.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5565) CacheAdmin help should match against non-dashed commands

2013-11-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13833769#comment-13833769
 ] 

Hudson commented on HDFS-5565:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1595 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1595/])
HDFS-5565. CacheAdmin help should match against non-dashed commands (wang via 
cmccabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1545850)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testCacheAdminConf.xml


 CacheAdmin help should match against non-dashed commands
 

 Key: HDFS-5565
 URL: https://issues.apache.org/jira/browse/HDFS-5565
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: caching
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor
  Labels: caching, newbie
 Fix For: 3.0.0

 Attachments: hdfs-5565-1.patch


 Using the shell, `hdfs dfsadmin -help refreshNamespace` returns help text, 
 but for cacheadmin you have to specify `hdfs cacheadmin -help -addDirective`, 
 with a dash before the command name. This is inconsistent with dfsadmin, dfs, 
 and haadmin, all of which error out when you provide a dash.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5568) Support inclusion of snapshot paths in Namenode fsck

2013-11-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13833770#comment-13833770
 ] 

Hudson commented on HDFS-5568:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1595 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1595/])
HDFS-5568. Support includeSnapshots option with Fsck command. Contributed by 
Vinay (umamahesh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1545987)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java


 Support inclusion of snapshot paths in Namenode fsck
 

 Key: HDFS-5568
 URL: https://issues.apache.org/jira/browse/HDFS-5568
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: snapshots
Reporter: Vinay
Assignee: Vinay
 Attachments: HDFS-5568-1.patch, HDFS-5568.patch, HDFS-5568.patch, 
 HDFS-5568.patch, HDFS-5568.patch


 Fsck should also check snapshot paths for inconsistency.
 Currently fsck covers a snapshot path only if the given path explicitly 
 refers to one.
 We have seen safemode problems in our clusters caused by missing blocks 
 that were present only inside snapshots, yet hdfs fsck / reported HEALTHY.
 So supporting snapshot paths during fsck (either by default or on demand) 
 would be helpful in these cases, instead of specifying each and every 
 snapshottable directory.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5561) FSNameSystem#getNameJournalStatus() in JMX should return plain text instead of HTML

2013-11-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13833780#comment-13833780
 ] 

Hudson commented on HDFS-5561:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1621 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1621/])
HDFS-5561. FSNameSystem#getNameJournalStatus() in JMX should return plain text 
instead of HTML. Contributed by Haohui Mai. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1545791)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/AsyncLogger.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/AsyncLoggerSet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/IPCLoggerChannel.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/QuorumOutputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogOutputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeJspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestIPCLoggerChannel.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQuorumJournalManagerUnit.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLogFileOutputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeMXBean.java


 FSNameSystem#getNameJournalStatus() in JMX should return plain text instead 
 of HTML
 ---

 Key: HDFS-5561
 URL: https://issues.apache.org/jira/browse/HDFS-5561
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0, 2.2.0
Reporter: Fengdong Yu
Assignee: Haohui Mai
Priority: Minor
 Fix For: 2.3.0

 Attachments: HDFS-5561.000.patch, NNUI.PNG


 Currently FSNameSystem#getNameJournalStatus() returns the status of the 
 quorum stream as an HTML string. This should not happen, since 
 getNameJournalStatus() is a JMX call. It will confuse downstream clients 
 (e.g., the web UI) and lead to incorrect results.
 This jira proposes to change the information to plain text.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5568) Support inclusion of snapshot paths in Namenode fsck

2013-11-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13833778#comment-13833778
 ] 

Hudson commented on HDFS-5568:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1621 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1621/])
HDFS-5568. Support includeSnapshots option with Fsck command. Contributed by 
Vinay (umamahesh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1545987)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java


 Support inclusion of snapshot paths in Namenode fsck
 

 Key: HDFS-5568
 URL: https://issues.apache.org/jira/browse/HDFS-5568
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: snapshots
Reporter: Vinay
Assignee: Vinay
 Attachments: HDFS-5568-1.patch, HDFS-5568.patch, HDFS-5568.patch, 
 HDFS-5568.patch, HDFS-5568.patch


 Fsck should also check snapshot paths for inconsistency.
 Currently fsck covers a snapshot path only if the given path explicitly 
 refers to one.
 We have seen safemode problems in our clusters caused by missing blocks 
 that were present only inside snapshots, yet hdfs fsck / reported HEALTHY.
 So supporting snapshot paths during fsck (either by default or on demand) 
 would be helpful in these cases, instead of specifying each and every 
 snapshottable directory.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5286) Flatten INodeDirectory hierarchy: add DirectoryWithQuotaFeature

2013-11-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13833776#comment-13833776
 ] 

Hudson commented on HDFS-5286:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1621 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1621/])
HDFS-5286. Flatten INodeDirectory hierarchy: Replace INodeDirectoryWithQuota 
with DirectoryWithQuotaFeature. (szetszwo: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1545768)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/DirectoryWithQuotaFeature.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectoryWithQuota.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectoryWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/Snapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsLimits.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDeletion.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/util/TestDiff.java


 Flatten INodeDirectory hierarchy: add DirectoryWithQuotaFeature
 ---

 Key: HDFS-5286
 URL: https://issues.apache.org/jira/browse/HDFS-5286
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Fix For: 3.0.0

 Attachments: h5286_20131122.patch, h5286_20131125.patch, 
 h5286_20131125b.patch


 Similar to the case of INodeFile (HDFS-5285), we should also flatten the 
 INodeDirectory hierarchy.
 This is the first step: add DirectoryWithQuotaFeature to replace 
 INodeDirectoryWithQuota.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5548) Use ConcurrentHashMap in portmap

2013-11-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13833779#comment-13833779
 ] 

Hudson commented on HDFS-5548:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1621 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1621/])
HDFS-5548. Use ConcurrentHashMap in portmap. Contributed by Haohui Mai 
(brandonli: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1545756)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/portmap/PortmapInterface.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/portmap/PortmapRequest.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/portmap/RpcProgramPortmap.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/portmap/TestPortmap.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Use ConcurrentHashMap in portmap
 

 Key: HDFS-5548
 URL: https://issues.apache.org/jira/browse/HDFS-5548
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 2.2.1

 Attachments: HDFS-5548.000.patch, HDFS-5548.001.patch, 
 HDFS-5548.002.patch, HDFS-5548.003.patch


 Portmap uses a HashMap to store the port mapping and synchronizes access 
 to the map by locking itself. This can be simplified by using a 
 ConcurrentHashMap.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5565) CacheAdmin help should match against non-dashed commands

2013-11-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13833777#comment-13833777
 ] 

Hudson commented on HDFS-5565:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1621 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1621/])
HDFS-5565. CacheAdmin help should match against non-dashed commands (wang via 
cmccabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1545850)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testCacheAdminConf.xml


 CacheAdmin help should match against non-dashed commands
 

 Key: HDFS-5565
 URL: https://issues.apache.org/jira/browse/HDFS-5565
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: caching
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor
  Labels: caching, newbie
 Fix For: 3.0.0

 Attachments: hdfs-5565-1.patch


 Using the shell, `hdfs dfsadmin -help refreshNamespace` returns help text, 
 but for cacheadmin you have to specify `hdfs cacheadmin -help -addDirective`, 
 with a dash before the command name. This is inconsistent with dfsadmin, dfs, 
 and haadmin, all of which error out when you provide a dash.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5574) Remove buffer copy in BlockReader.skip

2013-11-27 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HDFS-5574:


Attachment: HDFS-5574.v2.patch

Changes: 
Add synchronized to RemoteBlockReader2.read(ByteBuffer buf) to fix the 
findbugs warning (sketched below).
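
For context, the change boils down to making the ByteBuffer read path take 
the same lock as the other read methods, which is the inconsistent-locking 
pattern findbugs flags. A simplified sketch; the class body is illustrative, 
not RemoteBlockReader2 itself:

{code}
// Sketch: synchronized read(ByteBuffer) so all accesses to the shared
// packet buffer take the same monitor.
import java.io.IOException;
import java.nio.ByteBuffer;

class ReaderSyncSketch {
  private ByteBuffer curPacketBuf;   // assumed shared state

  public synchronized int read(ByteBuffer buf) throws IOException {
    if (curPacketBuf == null || !curPacketBuf.hasRemaining()) {
      return -1;  // simplified; real code would fetch the next packet
    }
    int n = Math.min(buf.remaining(), curPacketBuf.remaining());
    ByteBuffer slice = curPacketBuf.duplicate();
    slice.limit(slice.position() + n);
    buf.put(slice);                                  // copy out n bytes
    curPacketBuf.position(curPacketBuf.position() + n);
    return n;
  }
}
{code}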

 Remove buffer copy in BlockReader.skip
 --

 Key: HDFS-5574
 URL: https://issues.apache.org/jira/browse/HDFS-5574
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Trivial
 Attachments: HDFS-5574.v1.patch, HDFS-5574.v2.patch


 BlockReaderLocal.skip and RemoteBlockReader.skip read the skipped data 
 into a temporary buffer; this copy is unnecessary. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5568) Support inclusion of snapshot paths in Namenode fsck

2013-11-27 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-5568:
--

   Resolution: Fixed
Fix Version/s: 2.2.1
   2.3.0
   3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

 Support inclusion of snapshot paths in Namenode fsck
 

 Key: HDFS-5568
 URL: https://issues.apache.org/jira/browse/HDFS-5568
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: snapshots
Reporter: Vinay
Assignee: Vinay
 Fix For: 3.0.0, 2.3.0, 2.2.1

 Attachments: HDFS-5568-1.patch, HDFS-5568.patch, HDFS-5568.patch, 
 HDFS-5568.patch, HDFS-5568.patch


 Support fsck checking snapshot paths for inconsistency as well.
 Currently fsck covers snapshot paths only if the given path explicitly 
 refers to a snapshot path.
 We have seen safemode problems in our clusters caused by missing blocks that 
 were only present inside snapshots, yet hdfs fsck / reported HEALTHY.
 So including snapshot paths during fsck (either by default or on demand) 
 would be helpful in these cases, instead of specifying each and every 
 snapshottable directory.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5574) Remove buffer copy in BlockReader.skip

2013-11-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13833891#comment-13833891
 ] 

Hadoop QA commented on HDFS-5574:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12616049/HDFS-5574.v2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup
  org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5592//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5592//console

This message is automatically generated.

 Remove buffer copy in BlockReader.skip
 --

 Key: HDFS-5574
 URL: https://issues.apache.org/jira/browse/HDFS-5574
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Trivial
 Attachments: HDFS-5574.v1.patch, HDFS-5574.v2.patch


 BlockReaderLocal.skip and RemoteBlockReader.skip read data into a temporary 
 buffer only to discard it, which is not necessary. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HDFS-5573) CacheAdmin doesn't work

2013-11-27 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HDFS-5573.


Resolution: Duplicate
  Assignee: (was: Andrew Wang)

 CacheAdmin doesn't work
 ---

 Key: HDFS-5573
 URL: https://issues.apache.org/jira/browse/HDFS-5573
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Fengdong Yu

 The code is compiled from trunk, and cacheadmin is run on the active NN.
 The exception follows:
 {code}
 [hadoop@10 ~]$ hdfs cacheadmin -addPool test2
 Successfully added cache pool test2.
 [hadoop@10 ~]$ hdfs cacheadmin -addDirective -path /test/core-site.xml -pool 
 test2 -replication 3
 Added cache directive 3
 [hadoop@10 ~]$ 
 [hadoop@10 ~]$ 
 [hadoop@10 ~]$ 
 [hadoop@10 ~]$ hdfs cacheadmin -listDirectives
 Exception in thread "main" 
 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException):
  Operation category READ is not supported in state standby
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1562)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1128)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.listCacheDirectives(FSNamesystem.java:7168)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer$ServerSideCacheEntriesIterator.makeRequest(NameNodeRpcServer.java:1267)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer$ServerSideCacheEntriesIterator.makeRequest(NameNodeRpcServer.java:1253)
   at 
 org.apache.hadoop.fs.BatchedRemoteIterator.makeRequest(BatchedRemoteIterator.java:77)
   at 
 org.apache.hadoop.fs.BatchedRemoteIterator.makeRequestIfNeeded(BatchedRemoteIterator.java:85)
   at 
 org.apache.hadoop.fs.BatchedRemoteIterator.hasNext(BatchedRemoteIterator.java:99)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.listCacheDirectives(ClientNamenodeProtocolServerSideTranslatorPB.java:1085)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:605)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:932)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1961)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1957)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1515)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1955)
   at org.apache.hadoop.ipc.Client.call(Client.java:1405)
   at org.apache.hadoop.ipc.Client.call(Client.java:1358)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
   at com.sun.proxy.$Proxy9.listCacheDirectives(Unknown Source)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB$CacheEntriesIterator.makeRequest(ClientNamenodeProtocolTranslatorPB.java:1079)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB$CacheEntriesIterator.makeRequest(ClientNamenodeProtocolTranslatorPB.java:1064)
   at 
 org.apache.hadoop.fs.BatchedRemoteIterator.makeRequest(BatchedRemoteIterator.java:77)
   at 
 org.apache.hadoop.fs.BatchedRemoteIterator.makeRequestIfNeeded(BatchedRemoteIterator.java:85)
   at 
 org.apache.hadoop.fs.BatchedRemoteIterator.hasNext(BatchedRemoteIterator.java:99)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$32.hasNext(DistributedFileSystem.java:1656)
   at 
 org.apache.hadoop.hdfs.tools.CacheAdmin$ListCacheDirectiveInfoCommand.run(CacheAdmin.java:450)
   at org.apache.hadoop.hdfs.tools.CacheAdmin.run(CacheAdmin.java:84)
   at org.apache.hadoop.hdfs.tools.CacheAdmin.main(CacheAdmin.java:89)
 {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5573) CacheAdmin doesn't work

2013-11-27 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13833943#comment-13833943
 ] 

Colin Patrick McCabe commented on HDFS-5573:


This is a duplicate of HDFS-

 CacheAdmin doesn't work
 ---

 Key: HDFS-5573
 URL: https://issues.apache.org/jira/browse/HDFS-5573
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Fengdong Yu
Assignee: Andrew Wang

 The code is compiled from trunk, and cacheadmin is run on the active NN.
 The exception follows:
 {code}
 [hadoop@10 ~]$ hdfs cacheadmin -addPool test2
 Successfully added cache pool test2.
 [hadoop@10 ~]$ hdfs cacheadmin -addDirective -path /test/core-site.xml -pool 
 test2 -replication 3
 Added cache directive 3
 [hadoop@10 ~]$ 
 [hadoop@10 ~]$ 
 [hadoop@10 ~]$ 
 [hadoop@10 ~]$ hdfs cacheadmin -listDirectives
 Exception in thread "main" 
 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException):
  Operation category READ is not supported in state standby
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1562)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1128)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.listCacheDirectives(FSNamesystem.java:7168)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer$ServerSideCacheEntriesIterator.makeRequest(NameNodeRpcServer.java:1267)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer$ServerSideCacheEntriesIterator.makeRequest(NameNodeRpcServer.java:1253)
   at 
 org.apache.hadoop.fs.BatchedRemoteIterator.makeRequest(BatchedRemoteIterator.java:77)
   at 
 org.apache.hadoop.fs.BatchedRemoteIterator.makeRequestIfNeeded(BatchedRemoteIterator.java:85)
   at 
 org.apache.hadoop.fs.BatchedRemoteIterator.hasNext(BatchedRemoteIterator.java:99)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.listCacheDirectives(ClientNamenodeProtocolServerSideTranslatorPB.java:1085)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:605)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:932)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1961)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1957)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1515)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1955)
   at org.apache.hadoop.ipc.Client.call(Client.java:1405)
   at org.apache.hadoop.ipc.Client.call(Client.java:1358)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
   at com.sun.proxy.$Proxy9.listCacheDirectives(Unknown Source)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB$CacheEntriesIterator.makeRequest(ClientNamenodeProtocolTranslatorPB.java:1079)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB$CacheEntriesIterator.makeRequest(ClientNamenodeProtocolTranslatorPB.java:1064)
   at 
 org.apache.hadoop.fs.BatchedRemoteIterator.makeRequest(BatchedRemoteIterator.java:77)
   at 
 org.apache.hadoop.fs.BatchedRemoteIterator.makeRequestIfNeeded(BatchedRemoteIterator.java:85)
   at 
 org.apache.hadoop.fs.BatchedRemoteIterator.hasNext(BatchedRemoteIterator.java:99)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$32.hasNext(DistributedFileSystem.java:1656)
   at 
 org.apache.hadoop.hdfs.tools.CacheAdmin$ListCacheDirectiveInfoCommand.run(CacheAdmin.java:450)
   at org.apache.hadoop.hdfs.tools.CacheAdmin.run(CacheAdmin.java:84)
   at org.apache.hadoop.hdfs.tools.CacheAdmin.main(CacheAdmin.java:89)
 {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5564) Refactor tests in TestCacheDirectives

2013-11-27 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-5564:
---

Priority: Trivial  (was: Major)

 Refactor tests in TestCacheDirectives
 -

 Key: HDFS-5564
 URL: https://issues.apache.org/jira/browse/HDFS-5564
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Trivial

 Some of the tests in TestCacheDirectives start their own MiniDFSCluster to 
 get a new config, even though we already start a cluster in the @Before 
 function. This contributes to longer test runs and code duplication.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5564) Refactor tests in TestCacheDirectives

2013-11-27 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13833958#comment-13833958
 ] 

Colin Patrick McCabe commented on HDFS-5564:


To be honest, I have never liked the style of test where a Before function 
sets up a {{MiniDFSCluster}}.  It doesn't allow you to set a different 
configuration, which a lot of these tests have to do.  It hides a potentially 
expensive operation in a function that people modifying the tests may or may 
not look at.  And if you want to add a test that doesn't use a 
{{MiniDFSCluster}}, you can't (in other words, you have to ignore the cluster 
that gets started anyway).

I think it would be better to have a common startup function and/or object 
that the tests wanting a standardized setup can use, and forget about the 
Before annotation.  Unless there is some way for the test to pass information 
to the Before function that I'm unaware of?
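
A sketch of the style being advocated, under assumed names (MiniDFSCluster and HdfsConfiguration are real test classes; the test itself is illustrative): each test builds its own Configuration and calls an explicit startup helper instead of relying on an @Before method:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.Test;

public class TestSetupSketch {

  // Common startup helper: visible at the call site, and each caller can
  // pass whatever Configuration it needs.
  private static MiniDFSCluster startCluster(Configuration conf)
      throws Exception {
    MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
    cluster.waitActive();
    return cluster;
  }

  @Test
  public void testWithCustomConf() throws Exception {
    Configuration conf = new HdfsConfiguration();
    conf.setInt("test.specific.key", 42);  // hypothetical setting
    MiniDFSCluster cluster = startCluster(conf);
    try {
      // exercise cluster.getFileSystem() here
    } finally {
      cluster.shutdown();
    }
  }
}
{code}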

 Refactor tests in TestCacheDirectives
 -

 Key: HDFS-5564
 URL: https://issues.apache.org/jira/browse/HDFS-5564
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Trivial

 Some of the tests in TestCacheDirectives start their own MiniDFSCluster to 
 get a new config, even though we already start a cluster in the @Before 
 function. This contributes to longer test runs and code duplication.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5556) add some more NameNode cache statistics, cache pool stats

2013-11-27 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13833968#comment-13833968
 ] 

Andrew Wang commented on HDFS-5556:
---

+1 LGTM, thanks colin

 add some more NameNode cache statistics, cache pool stats
 -

 Key: HDFS-5556
 URL: https://issues.apache.org/jira/browse/HDFS-5556
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-5556.001.patch, HDFS-5556.002.patch, 
 HDFS-5556.003.patch


 Add some more NameNode cache statistics and also cache pool statistics.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5576) RPC#stopProxy() should log the class of proxy when IllegalArgumentException is encountered

2013-11-27 Thread Ted Yu (JIRA)
Ted Yu created HDFS-5576:


 Summary: RPC#stopProxy() should log the class of proxy when 
IllegalArgumentException is encountered
 Key: HDFS-5576
 URL: https://issues.apache.org/jira/browse/HDFS-5576
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ted Yu
Priority: Minor


When investigating HBASE-10029, [~szetszwo] made the suggestion of logging the 
class of proxy when IllegalArgumentException is thrown.
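
A hedged sketch of the suggestion (RPC.stopProxy is the real method; the wrapper and its log message are purely illustrative):

{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.ipc.RPC;

// Illustrative wrapper: when stopProxy cannot determine how to stop the
// proxy, record the proxy's class so the failure can be traced back.
public class StopProxySketch {
  private static final Log LOG = LogFactory.getLog(StopProxySketch.class);

  public static void stopProxyLogged(Object proxy) {
    try {
      RPC.stopProxy(proxy);
    } catch (IllegalArgumentException e) {
      LOG.error("Failed to stop proxy of class "
          + (proxy == null ? "null" : proxy.getClass().getName()), e);
      throw e;
    }
  }
}
{code}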



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5556) add some more NameNode cache statistics, cache pool stats

2013-11-27 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-5556:
---

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

 add some more NameNode cache statistics, cache pool stats
 -

 Key: HDFS-5556
 URL: https://issues.apache.org/jira/browse/HDFS-5556
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 3.0.0

 Attachments: HDFS-5556.001.patch, HDFS-5556.002.patch, 
 HDFS-5556.003.patch


 Add some more NameNode cache statistics and also cache pool statistics.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5059) Unnecessary permission denied error when creating/deleting snapshots with a non-existent directory

2013-11-27 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13833995#comment-13833995
 ] 

Jimmy Xiang commented on HDFS-5059:
---

This issue should have been fixed with HDFS-5111.

 Unnecessary permission denied error when creating/deleting snapshots with a 
 non-existent directory
 --

 Key: HDFS-5059
 URL: https://issues.apache.org/jira/browse/HDFS-5059
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: snapshots
Affects Versions: 3.0.0
Reporter: Stephen Chu
Priority: Trivial
  Labels: newbie

 As a non-superuser, when you create and delete a snapshot but accidentally 
 specify a non-existent directory to snapshot, you will see an 
 extra/unnecessary "Permission denied" error right after the "No such file or 
 directory" error.
 {code}
 [schu@hdfs-snapshots-vanilla ~]$ hdfs dfs -deleteSnapshot /user/schuf/ snap1
 deleteSnapshot: `/user/schuf/': No such file or directory
 deleteSnapshot: Permission denied
 [schu@hdfs-snapshots-vanilla ~]$ hdfs dfs -createSnapshot /user/schuf/ snap1
 createSnapshot: `/user/schuf/': No such file or directory
 createSnapshot: Permission denied
 {code}
 As the HDFS superuser, instead of the "Permission denied" error you'll get 
 an extra "Directory does not exist" error.
 {code}
 [root@hdfs-snapshots-vanilla ~]# hdfs dfs -deleteSnapshot /user/schuf/ snap1
 deleteSnapshot: `/user/schuf/': No such file or directory
 deleteSnapshot: Directory does not exist: /user/schuf
 {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5576) RPC#stopProxy() should log the class of proxy when IllegalArgumentException is encountered

2013-11-27 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13833994#comment-13833994
 ] 

Jing Zhao commented on HDFS-5576:
-

I think this jira should be moved to HADOOP?

 RPC#stopProxy() should log the class of proxy when IllegalArgumentException 
 is encountered
 --

 Key: HDFS-5576
 URL: https://issues.apache.org/jira/browse/HDFS-5576
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ted Yu
Priority: Minor

 When investigating HBASE-10029, [~szetszwo] made the suggestion of logging 
 the class of proxy when IllegalArgumentException is thrown.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5556) add some more NameNode cache statistics, cache pool stats

2013-11-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13833999#comment-13833999
 ] 

Hudson commented on HDFS-5556:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4800 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4800/])
HDFS-5556. Add some more NameNode cache statistics, cache pool stats (cmccabe) 
(cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1546143)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsAdmin.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirective.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveStats.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CachePoolEntry.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CachePoolInfo.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CachePoolStats.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CacheReplicationMonitor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStatistics.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/FSDatasetMBean.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CacheManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CachePool.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeMXBean.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestFsDatasetCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCacheDirectives.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeMXBean.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java


 add some more NameNode cache statistics, cache pool stats
 -

 Key: HDFS-5556
 URL: https://issues.apache.org/jira/browse/HDFS-5556
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 3.0.0

 Attachments: HDFS-5556.001.patch, HDFS-5556.002.patch, 
 HDFS-5556.003.patch


 Add some more NameNode cache statistics and also cache pool statistics.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5545) Allow specifying endpoints for listeners in HttpServer

2013-11-27 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-5545:


   Resolution: Fixed
Fix Version/s: 3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed this to trunk.

 Allow specifying endpoints for listeners in HttpServer
 --

 Key: HDFS-5545
 URL: https://issues.apache.org/jira/browse/HDFS-5545
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 3.0.0

 Attachments: HDFS-5545.000.patch, HDFS-5545.001.patch, 
 HDFS-5545.002.patch, HDFS-5545.003.patch


 Currently HttpServer listens to an HTTP port and provides a method that 
 allows users to add an SSL listener after the server starts. This 
 complicates the logic if the client needs to set up HTTP / HTTPS servers.
 This jira proposes to replace these two mechanisms with the concept of 
 listener endpoints. A listener endpoint is a URI (i.e., scheme + host + 
 port) that the HttpServer should listen to. This concept simplifies the 
 task of managing the HTTP server from HDFS / YARN.
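
To make the endpoint concept concrete, a hedged usage sketch (the builder methods shown are inferred from this description, not quoted from the patch):

{code}
import java.net.URI;
import org.apache.hadoop.http.HttpServer;

// Illustrative: every listener is declared up front as a URI, so HTTP and
// HTTPS endpoints are configured the same way before start().
public class EndpointSketch {
  public static void main(String[] args) throws Exception {
    HttpServer server = new HttpServer.Builder()
        .setName("sketch")
        .addEndpoint(URI.create("http://0.0.0.0:50070"))
        .addEndpoint(URI.create("https://0.0.0.0:50470"))
        .build();
    server.start();
  }
}
{code}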



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5545) Allow specifying endpoints for listeners in HttpServer

2013-11-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834009#comment-13834009
 ] 

Hudson commented on HDFS-5545:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4801 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4801/])
HDFS-5545. Allow specifying endpoints for listeners in HttpServer. Contributed 
by Haohui Mai. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1546151)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/HttpServerFunctionalTest.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestGlobalFilter.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestPathFilter.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestSSLHttpServer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestServletFilter.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/log/TestLogLevel.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestJobEndNotifier.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestJobEndNotifier.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApp.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApps.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/TestWebApp.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/WebServer.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/WebAppProxy.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/test/java/org/apache/hadoop/yarn/server/webproxy/TestWebAppProxyServlet.java


 Allow specifying endpoints for listeners in HttpServer
 --

 Key: HDFS-5545
 URL: https://issues.apache.org/jira/browse/HDFS-5545
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 3.0.0

 Attachments: HDFS-5545.000.patch, HDFS-5545.001.patch, 
 HDFS-5545.002.patch, HDFS-5545.003.patch


 Currently HttpServer listens to an HTTP port and provides a method that 
 allows users to add an SSL listener after the server starts. This 
 complicates the logic if the client needs to set up HTTP / HTTPS servers.
 This jira proposes to replace these two mechanisms with the concept of 
 listener endpoints. A listener endpoint is a URI (i.e., scheme + host + 
 port) that the HttpServer should listen to. This concept simplifies the 
 task of managing the HTTP server from HDFS / YARN.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5563) NFS gateway should commit the buffered data when read request comes after write to the same file

2013-11-27 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-5563:
-

Attachment: HDFS-5563.002.patch

 NFS gateway should commit the buffered data when read request comes after 
 write to the same file
 

 Key: HDFS-5563
 URL: https://issues.apache.org/jira/browse/HDFS-5563
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-5563.001.patch, HDFS-5563.002.patch


 HDFS write is asynchronous and data may not be available to read immediately 
 after a write.
 One of the main reasons is that DFSClient doesn't flush data to the DN until 
 its local buffer is full.
 To work around this problem, when a read comes after a write to the same 
 file, the NFS gateway should sync the data so the read request can get the 
 latest content. The drawback is that frequent hsync() calls can slow down 
 data writes.
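
A minimal sketch of the workaround, using hypothetical names for the gateway's bookkeeping (hsync() is the real HDFS stream call; everything else is assumed):

{code}
import java.io.IOException;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import org.apache.hadoop.fs.FSDataOutputStream;

// Illustrative read path: if the file has an open write stream, commit the
// buffered data before serving the read so the reader sees the latest bytes.
class ReadAfterWriteSketch {
  private final ConcurrentMap<String, FSDataOutputStream> openStreams =
      new ConcurrentHashMap<String, FSDataOutputStream>();

  byte[] read(String path, long offset, int count) throws IOException {
    FSDataOutputStream out = openStreams.get(path);
    if (out != null) {
      out.hsync();  // the costly call the description warns about
    }
    return doRead(path, offset, count);
  }

  private byte[] doRead(String path, long offset, int count) {
    return new byte[0];  // placeholder for the actual DFS read
  }
}
{code}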



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5536) Implement HTTP policy for Namenode and DataNode

2013-11-27 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5536:
-

Attachment: HDFS-5536.005.patch

 Implement HTTP policy for Namenode and DataNode
 ---

 Key: HDFS-5536
 URL: https://issues.apache.org/jira/browse/HDFS-5536
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5536.000.patch, HDFS-5536.001.patch, 
 HDFS-5536.002.patch, HDFS-5536.003.patch, HDFS-5536.004.patch, 
 HDFS-5536.005.patch


 This jira implements the HTTP and HTTPS policy in the NameNode and the 
 DataNode.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5577) NFS user guide update

2013-11-27 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-5577:
-

Issue Type: Improvement  (was: Bug)

 NFS user guide update
 -

 Key: HDFS-5577
 URL: https://issues.apache.org/jira/browse/HDFS-5577
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Brandon Li
Assignee: Brandon Li
Priority: Trivial

 dfs.access.time.precision is deprecated and the doc should use 
 dfs.namenode.accesstime.precision instead.
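
For illustration, setting the property under its current name through the Java API (the one-hour value is only an example):

{code}
import org.apache.hadoop.conf.Configuration;

public class AccessTimeSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Current key; dfs.access.time.precision is the deprecated spelling.
    conf.setLong("dfs.namenode.accesstime.precision", 3600000L);  // 1 hour
    System.out.println(conf.get("dfs.namenode.accesstime.precision"));
  }
}
{code}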



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5577) NFS user guide update

2013-11-27 Thread Brandon Li (JIRA)
Brandon Li created HDFS-5577:


 Summary: NFS user guide update
 Key: HDFS-5577
 URL: https://issues.apache.org/jira/browse/HDFS-5577
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Brandon Li
Assignee: Brandon Li
Priority: Trivial


dfs.access.time.precision is deprecated and the doc should use 
dfs.namenode.accesstime.precision instead.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5577) NFS user guide update

2013-11-27 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-5577:
-

Status: Patch Available  (was: Open)

 NFS user guide update
 -

 Key: HDFS-5577
 URL: https://issues.apache.org/jira/browse/HDFS-5577
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Brandon Li
Assignee: Brandon Li
Priority: Trivial
 Attachments: HDFS-5577.patch


 dfs.access.time.precision is deprecated and the doc should use 
 dfs.namenode.accesstime.precision instead.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5577) NFS user guide update

2013-11-27 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-5577:
-

Attachment: HDFS-5577.patch

 NFS user guide update
 -

 Key: HDFS-5577
 URL: https://issues.apache.org/jira/browse/HDFS-5577
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Brandon Li
Assignee: Brandon Li
Priority: Trivial
 Attachments: HDFS-5577.patch


 dfs.access.time.precision is deprecated and the doc should use 
 dfs.namenode.accesstime.precision instead.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5563) NFS gateway should commit the buffered data when read request comes after write to the same file

2013-11-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834046#comment-13834046
 ] 

Hadoop QA commented on HDFS-5563:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12616094/HDFS-5563.002.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs-nfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5593//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5593//console

This message is automatically generated.

 NFS gateway should commit the buffered data when read request comes after 
 write to the same file
 

 Key: HDFS-5563
 URL: https://issues.apache.org/jira/browse/HDFS-5563
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-5563.001.patch, HDFS-5563.002.patch


 HDFS write is asynchronous and data may not be available to read immediately 
 after a write.
 One of the main reasons is that DFSClient doesn't flush data to the DN until 
 its local buffer is full.
 To work around this problem, when a read comes after a write to the same 
 file, the NFS gateway should sync the data so the read request can get the 
 latest content. The drawback is that frequent hsync() calls can slow down 
 data writes.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5577) NFS user guide update

2013-11-27 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834064#comment-13834064
 ] 

Jing Zhao commented on HDFS-5577:
-

+1

 NFS user guide update
 -

 Key: HDFS-5577
 URL: https://issues.apache.org/jira/browse/HDFS-5577
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Brandon Li
Assignee: Brandon Li
Priority: Trivial
 Attachments: HDFS-5577.patch


 dfs.access.time.precision is deprecated and the doc should use 
 dfs.namenode.accesstime.precision instead.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5564) Refactor tests in TestCacheDirectives

2013-11-27 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834066#comment-13834066
 ] 

Andrew Wang commented on HDFS-5564:
---

My quick audit of that test file found two configs being used, probably from 
when the two test files were originally merged. In this case, I think it's 
better to just re-split them and relegate the common setup/teardown to shared 
@Before and @After methods.

I agree though that generally, if each test needs a different conf or setup, 
then we should just do it in each test separately. If this comes up, then we 
can split off another test file to do so. Maybe even as part of this refactor 
if we find any tests like that.

 Refactor tests in TestCacheDirectives
 -

 Key: HDFS-5564
 URL: https://issues.apache.org/jira/browse/HDFS-5564
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Trivial

 Some of the tests in TestCacheDirectives start their own MiniDFSCluster to 
 get a new config, even though we already start a cluster in the @Before 
 function. This contributes to longer test runs and code duplication.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5537) Remove FileWithSnapshot interface

2013-11-27 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-5537:
-

 Component/s: snapshots
Hadoop Flags: Reviewed

+1 patch looks good.

 Remove FileWithSnapshot interface
 -

 Key: HDFS-5537
 URL: https://issues.apache.org/jira/browse/HDFS-5537
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode, snapshots
Reporter: Jing Zhao
Assignee: Jing Zhao
 Fix For: 3.0.0

 Attachments: HDFS-5537.000.patch, HDFS-5537.001.patch, 
 HDFS-5537.002.patch, HDFS-5537.003.patch, HDFS-5537.003.patch, 
 HDFS-5537.004.patch, HDFS-5537.004.patch


 We use the FileWithSnapshot interface to define a set of methods shared by 
 INodeFileWithSnapshot and INodeFileUnderConstructionWithSnapshot. After using 
 the Under-Construction feature to replace the INodeFileUC and 
 INodeFileUCWithSnapshot, we no longer need this interface.
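
A sketch of the general direction (illustrative names, not the HDFS classes): the capability moves from a subclass-plus-interface design to an optional feature object on a single file class, after which the shared interface has no remaining users:

{code}
// Illustrative feature-composition pattern: one file class carries an
// optional under-construction feature instead of dedicated subclasses
// tied together by a shared interface.
class FileSketch {
  private UnderConstructionFeature ucFeature;  // null once finalized

  boolean isUnderConstruction() {
    return ucFeature != null;
  }

  void toUnderConstruction(String clientName) {
    ucFeature = new UnderConstructionFeature(clientName);
  }

  void toCompleteFile() {
    ucFeature = null;
  }
}

class UnderConstructionFeature {
  private final String clientName;

  UnderConstructionFeature(String clientName) {
    this.clientName = clientName;
  }

  String getClientName() {
    return clientName;
  }
}
{code}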



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5537) Remove FileWithSnapshot interface

2013-11-27 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-5537:
-

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

I have committed this.  Thanks, Jing!

 Remove FileWithSnapshot interface
 -

 Key: HDFS-5537
 URL: https://issues.apache.org/jira/browse/HDFS-5537
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode, snapshots
Reporter: Jing Zhao
Assignee: Jing Zhao
 Fix For: 3.0.0

 Attachments: HDFS-5537.000.patch, HDFS-5537.001.patch, 
 HDFS-5537.002.patch, HDFS-5537.003.patch, HDFS-5537.003.patch, 
 HDFS-5537.004.patch, HDFS-5537.004.patch


 We use the FileWithSnapshot interface to define a set of methods shared by 
 INodeFileWithSnapshot and INodeFileUnderConstructionWithSnapshot. After using 
 the Under-Construction feature to replace the INodeFileUC and 
 INodeFileUCWithSnapshot, we no longer need this interface.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5537) Remove FileWithSnapshot interface

2013-11-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834129#comment-13834129
 ] 

Hudson commented on HDFS-5537:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4802 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4802/])
HDFS-5537. Remove FileWithSnapshot interface.  Contributed by jing9 (szetszwo: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1546184)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileUnderConstructionFeature.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiff.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectorySnapshottable.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectoryWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeFileWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotFSImageFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java


 Remove FileWithSnapshot interface
 -

 Key: HDFS-5537
 URL: https://issues.apache.org/jira/browse/HDFS-5537
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode, snapshots
Reporter: Jing Zhao
Assignee: Jing Zhao
 Fix For: 3.0.0

 Attachments: HDFS-5537.000.patch, HDFS-5537.001.patch, 
 HDFS-5537.002.patch, HDFS-5537.003.patch, HDFS-5537.003.patch, 
 HDFS-5537.004.patch, HDFS-5537.004.patch


 We use the FileWithSnapshot interface to define a set of methods shared by 
 INodeFileWithSnapshot and INodeFileUnderConstructionWithSnapshot. After using 
 the Under-Construction feature to replace the INodeFileUC and 
 INodeFileUCWithSnapshot, we no longer need this interface.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5578) [JDK8] Fix Javadoc errors caused by incorrect or illegal tags in doc comments

2013-11-27 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HDFS-5578:
-

Attachment: 5578-branch-2.patch
5578-trunk.patch

 [JDK8] Fix Javadoc errors caused by incorrect or illegal tags in doc comments
 -

 Key: HDFS-5578
 URL: https://issues.apache.org/jira/browse/HDFS-5578
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 2.3.0
Reporter: Andrew Purtell
Priority: Minor
 Attachments: 5578-branch-2.patch, 5578-trunk.patch


 Javadoc is more strict by default in JDK8 and will error out on malformed or 
 illegal tags found in doc comments. Although tagged as JDK8, all of the 
 required changes are generic Javadoc cleanups.
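
An example of the kind of cleanup involved (constructed for illustration, not taken from the patch): JDK8's doclint rejects unescaped angle brackets and malformed HTML that earlier JDKs tolerated, so doc comments must use entities or {{@code}} spans:

{code}
public class JavadocSketch {
  /**
   * Returns a value that is always &lt;= the given limit.
   * <p>
   * Under JDK7 this comment could have used a bare less-than sign and a
   * self-closed paragraph tag; JDK8's javadoc errors out on both, so
   * entities such as &amp;lt; and well-formed HTML are required instead.
   */
  public static int clamp(int value, int limit) {
    return value <= limit ? value : limit;
  }
}
{code}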



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5578) [JDK8] Fix Javadoc errors caused by incorrect or illegal tags in doc comments

2013-11-27 Thread Andrew Purtell (JIRA)
Andrew Purtell created HDFS-5578:


 Summary: [JDK8] Fix Javadoc errors caused by incorrect or illegal 
tags in doc comments
 Key: HDFS-5578
 URL: https://issues.apache.org/jira/browse/HDFS-5578
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 2.3.0
Reporter: Andrew Purtell
Priority: Minor
 Attachments: 5578-branch-2.patch, 5578-trunk.patch

Javadoc is more strict by default in JDK8 and will error out on malformed or 
illegal tags found in doc comments. Although tagged as JDK8, all of the 
required changes are generic Javadoc cleanups.




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5577) NFS user guide update

2013-11-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834170#comment-13834170
 ] 

Hadoop QA commented on HDFS-5577:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12616099/HDFS-5577.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5595//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5595//console

This message is automatically generated.

 NFS user guide update
 -

 Key: HDFS-5577
 URL: https://issues.apache.org/jira/browse/HDFS-5577
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Brandon Li
Assignee: Brandon Li
Priority: Trivial
 Attachments: HDFS-5577.patch


 dfs.access.time.precision is deprecated and the doc should use 
 dfs.namenode.accesstime.precision instead.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5536) Implement HTTP policy for Namenode and DataNode

2013-11-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834171#comment-13834171
 ] 

Hadoop QA commented on HDFS-5536:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12616092/HDFS-5536.005.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.namenode.TestValidateConfigurationSettings

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5594//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5594//console

This message is automatically generated.

 Implement HTTP policy for Namenode and DataNode
 ---

 Key: HDFS-5536
 URL: https://issues.apache.org/jira/browse/HDFS-5536
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5536.000.patch, HDFS-5536.001.patch, 
 HDFS-5536.002.patch, HDFS-5536.003.patch, HDFS-5536.004.patch, 
 HDFS-5536.005.patch


 This jira implements the HTTP and HTTPS policy in the NameNode and the 
 DataNode.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5536) Implement HTTP policy for Namenode and DataNode

2013-11-27 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5536:
-

Attachment: HDFS-5536.006.patch

 Implement HTTP policy for Namenode and DataNode
 ---

 Key: HDFS-5536
 URL: https://issues.apache.org/jira/browse/HDFS-5536
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5536.000.patch, HDFS-5536.001.patch, 
 HDFS-5536.002.patch, HDFS-5536.003.patch, HDFS-5536.004.patch, 
 HDFS-5536.005.patch, HDFS-5536.006.patch


 This jira implements the HTTP and HTTPS policy in the NameNode and the 
 DataNode.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5577) NFS user guide update

2013-11-27 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834220#comment-13834220
 ] 

Brandon Li commented on HDFS-5577:
--

Thank you, Jing. I've committed the patch.

 NFS user guide update
 -

 Key: HDFS-5577
 URL: https://issues.apache.org/jira/browse/HDFS-5577
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Brandon Li
Assignee: Brandon Li
Priority: Trivial
 Attachments: HDFS-5577.patch


 dfs.access.time.precision is deprecated and the doc should use 
 dfs.namenode.accesstime.precision instead.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5577) NFS user guide update

2013-11-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834233#comment-13834233
 ] 

Hudson commented on HDFS-5577:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4804 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4804/])
HDFS-5577. NFS user guide update. Contributed by Brandon Li (brandonli: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1546210)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/HdfsNfsGateway.apt.vm


 NFS user guide update
 -

 Key: HDFS-5577
 URL: https://issues.apache.org/jira/browse/HDFS-5577
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Brandon Li
Assignee: Brandon Li
Priority: Trivial
 Attachments: HDFS-5577.patch


 dfs.access.time.precision is deprecated and the doc should use 
 dfs.namenode.accesstime.precision instead.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5577) NFS user guide update

2013-11-27 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-5577:
-

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

 NFS user guide update
 -

 Key: HDFS-5577
 URL: https://issues.apache.org/jira/browse/HDFS-5577
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Brandon Li
Assignee: Brandon Li
Priority: Trivial
 Fix For: 2.2.1

 Attachments: HDFS-5577.patch


 dfs.access.time.precision is deprecated and the doc should use 
 dfs.namenode.accesstime.precision instead.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5577) NFS user guide update

2013-11-27 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-5577:
-

Fix Version/s: 2.2.1

 NFS user guide update
 -

 Key: HDFS-5577
 URL: https://issues.apache.org/jira/browse/HDFS-5577
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Brandon Li
Assignee: Brandon Li
Priority: Trivial
 Fix For: 2.2.1

 Attachments: HDFS-5577.patch


 dfs.access.time.precision is deprecated and the doc should use 
 dfs.namenode.accesstime.precision instead.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5563) NFS gateway should commit the buffered data when read request comes after write to the same file

2013-11-27 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-5563:
-

Attachment: HDFS-5563.003.patch

 NFS gateway should commit the buffered data when read request comes after 
 write to the same file
 

 Key: HDFS-5563
 URL: https://issues.apache.org/jira/browse/HDFS-5563
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-5563.001.patch, HDFS-5563.002.patch, 
 HDFS-5563.003.patch


 HDFS write is asynchronous and data may not be available to read immediately 
 after a write.
 One of the main reasons is that DFSClient doesn't flush data to the DN until 
 its local buffer is full.
 To work around this problem, when a read comes after a write to the same 
 file, the NFS gateway should sync the data so the read request can get the 
 latest content. The drawback is that frequent hsync() calls can slow down 
 data writes.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5563) NFS gateway should commit the buffered data when read request comes after write to the same file

2013-11-27 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834311#comment-13834311
 ] 

Jing Zhao commented on HDFS-5563:
-

bq. In the future, we can consider using fixed-size cache blocks so that it's 
easy to find the cached data for reads.

Sounds good. +1 pending Jenkins.

 NFS gateway should commit the buffered data when read request comes after 
 write to the same file
 

 Key: HDFS-5563
 URL: https://issues.apache.org/jira/browse/HDFS-5563
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-5563.001.patch, HDFS-5563.002.patch, 
 HDFS-5563.003.patch


 HDFS write is asynchronous and data may not be available to read immediately 
 after a write.
 One of the main reasons is that DFSClient doesn't flush data to the DN until 
 its local buffer is full.
 To work around this problem, when a read comes after a write to the same 
 file, the NFS gateway should sync the data so the read request can get the 
 latest content. The drawback is that frequent hsync() calls can slow down 
 data writes.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5563) NFS gateway should commit the buffered data when read request comes after write to the same file

2013-11-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834319#comment-13834319
 ] 

Hadoop QA commented on HDFS-5563:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12616150/HDFS-5563.003.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs-nfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5597//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5597//console

This message is automatically generated.

 NFS gateway should commit the buffered data when read request comes after 
 write to the same file
 

 Key: HDFS-5563
 URL: https://issues.apache.org/jira/browse/HDFS-5563
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-5563.001.patch, HDFS-5563.002.patch, 
 HDFS-5563.003.patch


 HDFS write is asynchronous, so data may not be available to read immediately 
 after a write.
 One of the main reasons is that the DFSClient doesn't flush data to the 
 DataNode until its local buffer is full.
 To work around this problem, when a read comes after a write to the same file, 
 the NFS gateway should sync the data so the read request can get the latest 
 content. The drawback is that frequent hsync() calls can slow down data 
 writes.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5563) NFS gateway should commit the buffered data when read request comes after write to the same file

2013-11-27 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834363#comment-13834363
 ] 

Brandon Li commented on HDFS-5563:
--

Thank you, Jing. I've committed the patch.

 NFS gateway should commit the buffered data when read request comes after 
 write to the same file
 

 Key: HDFS-5563
 URL: https://issues.apache.org/jira/browse/HDFS-5563
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-5563.001.patch, HDFS-5563.002.patch, 
 HDFS-5563.003.patch


 HDFS write is asynchronous, so data may not be available to read immediately 
 after a write.
 One of the main reasons is that the DFSClient doesn't flush data to the 
 DataNode until its local buffer is full.
 To work around this problem, when a read comes after a write to the same file, 
 the NFS gateway should sync the data so the read request can get the latest 
 content. The drawback is that frequent hsync() calls can slow down data 
 writes.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5563) NFS gateway should commit the buffered data when read request comes after write to the same file

2013-11-27 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-5563:
-

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

 NFS gateway should commit the buffered data when read request comes after 
 write to the same file
 

 Key: HDFS-5563
 URL: https://issues.apache.org/jira/browse/HDFS-5563
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-5563.001.patch, HDFS-5563.002.patch, 
 HDFS-5563.003.patch


 HDFS write is asynchronous, so data may not be available to read immediately 
 after a write.
 One of the main reasons is that the DFSClient doesn't flush data to the 
 DataNode until its local buffer is full.
 To work around this problem, when a read comes after a write to the same file, 
 the NFS gateway should sync the data so the read request can get the latest 
 content. The drawback is that frequent hsync() calls can slow down data 
 writes.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5563) NFS gateway should commit the buffered data when read request comes after write to the same file

2013-11-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834368#comment-13834368
 ] 

Hudson commented on HDFS-5563:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4805 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4805/])
HDFS-5563. NFS gateway should commit the buffered data when read request comes 
after write to the same file. Contributed by Brandon Li (brandonli: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1546233)
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/OpenFileCtx.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/WriteManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestWrites.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 NFS gateway should commit the buffered data when read request comes after 
 write to the same file
 

 Key: HDFS-5563
 URL: https://issues.apache.org/jira/browse/HDFS-5563
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-5563.001.patch, HDFS-5563.002.patch, 
 HDFS-5563.003.patch


 HDFS write is asynchronous, so data may not be available to read immediately 
 after a write.
 One of the main reasons is that the DFSClient doesn't flush data to the 
 DataNode until its local buffer is full.
 To work around this problem, when a read comes after a write to the same file, 
 the NFS gateway should sync the data so the read request can get the latest 
 content. The drawback is that frequent hsync() calls can slow down data 
 writes.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5563) NFS gateway should commit the buffered data when read request comes after write to the same file

2013-11-27 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-5563:
-

Fix Version/s: 2.2.1

 NFS gateway should commit the buffered data when read request comes after 
 write to the same file
 

 Key: HDFS-5563
 URL: https://issues.apache.org/jira/browse/HDFS-5563
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 2.2.1

 Attachments: HDFS-5563.001.patch, HDFS-5563.002.patch, 
 HDFS-5563.003.patch


 HDFS write is asynchronous, so data may not be available to read immediately 
 after a write.
 One of the main reasons is that the DFSClient doesn't flush data to the 
 DataNode until its local buffer is full.
 To work around this problem, when a read comes after a write to the same file, 
 the NFS gateway should sync the data so the read request can get the latest 
 content. The drawback is that frequent hsync() calls can slow down data 
 writes.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5430) Support TTL on CacheBasedPathDirectives

2013-11-27 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-5430:
--

Attachment: hdfs-5430-4.patch

Newly rebased patch, which also addresses Colin's comments. TestOEV will still 
fail on Jenkins, but it passes locally.

 Support TTL on CacheBasedPathDirectives
 ---

 Key: HDFS-5430
 URL: https://issues.apache.org/jira/browse/HDFS-5430
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Colin Patrick McCabe
Assignee: Andrew Wang
Priority: Minor
 Attachments: hdfs-5430-1.patch, hdfs-5430-2.patch, hdfs-5430-3.patch, 
 hdfs-5430-4.patch


 It would be nice if CacheBasedPathDirectives would support an expiration 
 time, after which they would be automatically removed by the NameNode.  This 
 time would probably be in wall-clock time for the convenience of system 
 administrators.
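
For illustration only, a toy version of the wall-clock expiry check (names here are invented, not the CacheDirective API):

{code}
// Toy TTL sketch: a directive stores an absolute wall-clock expiry and a
// periodic monitor drops it once the current time passes that expiry.
import java.util.Iterator;
import java.util.List;

class TtlExpirySketch {
  static final class Directive {
    final String path;
    final long expiryMillis; // absolute wall-clock time in ms

    Directive(String path, long expiryMillis) {
      this.path = path;
      this.expiryMillis = expiryMillis;
    }
  }

  /** Drop every directive whose expiry time has passed. */
  static void expire(List<Directive> directives) {
    long now = System.currentTimeMillis();
    for (Iterator<Directive> it = directives.iterator(); it.hasNext(); ) {
      if (it.next().expiryMillis <= now) {
        it.remove(); // the NameNode would also uncache the path here
      }
    }
  }
}
{code}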



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5562) TestCacheDirectives and TestFsDatasetCache should not depend on libhadoop.so

2013-11-27 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834404#comment-13834404
 ] 

Andrew Wang commented on HDFS-5562:
---

+1 LGTM, thanks Akira and Colin. Will commit shortly.

 TestCacheDirectives and TestFsDatasetCache should not depend on libhadoop.so
 

 Key: HDFS-5562
 URL: https://issues.apache.org/jira/browse/HDFS-5562
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Akira AJISAKA
Assignee: Colin Patrick McCabe
 Attachments: HDFS-5562.002.patch, HDFS-5562.3.patch, 
 HDFS-5562.v1.patch, 
 org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache-output.txt, 
 org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache.txt


 Some tests fail on trunk.
 {code}
 Tests in error:
   TestCacheDirectives.testWaitForCachedReplicas:710 » Runtime Cannot start 
 datan...
   TestCacheDirectives.testAddingCacheDirectiveInfosWhenCachingIsDisabled:767 
 » Runtime
   TestCacheDirectives.testWaitForCachedReplicasInDirectory:813 » Runtime 
 Cannot ...
   TestCacheDirectives.testReplicationFactor:897 » Runtime Cannot start 
 datanode ...
 Tests run: 9, Failures: 0, Errors: 4, Skipped: 0
 {code}
 For more details, see https://builds.apache.org/job/Hadoop-Hdfs-trunk/1592/
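
A hedged sketch of the stubbing approach, assuming the CacheManipulator hook this patch series adds to NativeIO; check the committed patch for the exact API:

{code}
// Sketch: swap the mlock-backed cache manipulator for a pure-Java no-op so
// the tests can run without libhadoop.so. Assumes NativeIO.POSIX exposes
// setCacheManipulator and a NoMlockCacheManipulator, per this patch.
import org.apache.hadoop.io.nativeio.NativeIO;

class StubMlockSketch {
  static void stubOutNativeMlock() {
    NativeIO.POSIX.setCacheManipulator(
        new NativeIO.POSIX.NoMlockCacheManipulator());
  }
}
{code}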



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5562) TestCacheDirectives and TestFsDatasetCache should stub out native mlock

2013-11-27 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-5562:
--

Summary: TestCacheDirectives and TestFsDatasetCache should stub out native 
mlock  (was: TestCacheDirectives and TestFsDatasetCache should not depend on 
libhadoop.so)

 TestCacheDirectives and TestFsDatasetCache should stub out native mlock
 ---

 Key: HDFS-5562
 URL: https://issues.apache.org/jira/browse/HDFS-5562
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Akira AJISAKA
Assignee: Colin Patrick McCabe
 Attachments: HDFS-5562.002.patch, HDFS-5562.3.patch, 
 HDFS-5562.v1.patch, 
 org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache-output.txt, 
 org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache.txt


 Some tests fail on trunk.
 {code}
 Tests in error:
   TestCacheDirectives.testWaitForCachedReplicas:710 » Runtime Cannot start 
 datan...
   TestCacheDirectives.testAddingCacheDirectiveInfosWhenCachingIsDisabled:767 
 » Runtime
   TestCacheDirectives.testWaitForCachedReplicasInDirectory:813 » Runtime 
 Cannot ...
   TestCacheDirectives.testReplicationFactor:897 » Runtime Cannot start 
 datanode ...
 Tests run: 9, Failures: 0, Errors: 4, Skipped: 0
 {code}
 For more details, see https://builds.apache.org/job/Hadoop-Hdfs-trunk/1592/



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5536) Implement HTTP policy for Namenode and DataNode

2013-11-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834409#comment-13834409
 ] 

Hadoop QA commented on HDFS-5536:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12616137/HDFS-5536.006.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.TestStorageRestore

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5596//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5596//console

This message is automatically generated.

 Implement HTTP policy for Namenode and DataNode
 ---

 Key: HDFS-5536
 URL: https://issues.apache.org/jira/browse/HDFS-5536
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5536.000.patch, HDFS-5536.001.patch, 
 HDFS-5536.002.patch, HDFS-5536.003.patch, HDFS-5536.004.patch, 
 HDFS-5536.005.patch, HDFS-5536.006.patch


 this jira implements the http and https policy in the namenode and the 
 datanode.
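
As a hedged illustration, assuming the patch introduces a dfs.http.policy key with HTTP_ONLY / HTTPS_ONLY / HTTP_AND_HTTPS values (verify against the committed code):

{code}
// Sketch: select the transport policy via configuration. The key name and
// the policy values are my reading of this patch series, not a verified API.
import org.apache.hadoop.conf.Configuration;

class HttpPolicySketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("dfs.http.policy", "HTTPS_ONLY"); // serve only the HTTPS endpoint

    String policy = conf.get("dfs.http.policy", "HTTP_ONLY");
    boolean httpsEnabled = !"HTTP_ONLY".equals(policy);
    System.out.println("HTTPS enabled: " + httpsEnabled);
  }
}
{code}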



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5562) TestCacheDirectives and TestFsDatasetCache should stub out native mlock

2013-11-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834421#comment-13834421
 ] 

Hudson commented on HDFS-5562:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4806 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4806/])
HDFS-5562. TestCacheDirectives and TestFsDatasetCache should stub out native 
mlock. Contributed by Colin Patrick McCabe and Akira Ajisaka. (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1546246)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/NativeIO.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestFsDatasetCache.java


 TestCacheDirectives and TestFsDatasetCache should stub out native mlock
 ---

 Key: HDFS-5562
 URL: https://issues.apache.org/jira/browse/HDFS-5562
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Akira AJISAKA
Assignee: Colin Patrick McCabe
 Attachments: HDFS-5562.002.patch, HDFS-5562.3.patch, 
 HDFS-5562.v1.patch, 
 org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache-output.txt, 
 org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache.txt


 Some tests fail on trunk.
 {code}
 Tests in error:
   TestCacheDirectives.testWaitForCachedReplicas:710 » Runtime Cannot start 
 datan...
   TestCacheDirectives.testAddingCacheDirectiveInfosWhenCachingIsDisabled:767 
 » Runtime
   TestCacheDirectives.testWaitForCachedReplicasInDirectory:813 » Runtime 
 Cannot ...
   TestCacheDirectives.testReplicationFactor:897 » Runtime Cannot start 
 datanode ...
 Tests run: 9, Failures: 0, Errors: 4, Skipped: 0
 {code}
 For more details, see https://builds.apache.org/job/Hadoop-Hdfs-trunk/1592/



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5430) Support TTL on CacheBasedPathDirectives

2013-11-27 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834471#comment-13834471
 ] 

Colin Patrick McCabe commented on HDFS-5430:


Looks good.  +1 pending Jenkins (and manual testing of TestOEV, since, as we 
know, the patch won't pick up the binary change).

 Support TTL on CacheBasedPathDirectives
 ---

 Key: HDFS-5430
 URL: https://issues.apache.org/jira/browse/HDFS-5430
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Colin Patrick McCabe
Assignee: Andrew Wang
Priority: Minor
 Attachments: hdfs-5430-1.patch, hdfs-5430-2.patch, hdfs-5430-3.patch, 
 hdfs-5430-4.patch


 It would be nice if CacheBasedPathDirectives would support an expiration 
 time, after which they would be automatically removed by the NameNode.  This 
 time would probably be in wall-clock time for the convenience of system 
 administrators.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5562) TestCacheDirectives and TestFsDatasetCache should stub out native mlock

2013-11-27 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834472#comment-13834472
 ] 

Colin Patrick McCabe commented on HDFS-5562:


Thanks Akira and Andrew.

 TestCacheDirectives and TestFsDatasetCache should stub out native mlock
 ---

 Key: HDFS-5562
 URL: https://issues.apache.org/jira/browse/HDFS-5562
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Akira AJISAKA
Assignee: Colin Patrick McCabe
 Attachments: HDFS-5562.002.patch, HDFS-5562.3.patch, 
 HDFS-5562.v1.patch, 
 org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache-output.txt, 
 org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache.txt


 Some tests fail on trunk.
 {code}
 Tests in error:
   TestCacheDirectives.testWaitForCachedReplicas:710 » Runtime Cannot start 
 datan...
   TestCacheDirectives.testAddingCacheDirectiveInfosWhenCachingIsDisabled:767 
 » Runtime
   TestCacheDirectives.testWaitForCachedReplicasInDirectory:813 » Runtime 
 Cannot ...
   TestCacheDirectives.testReplicationFactor:897 » Runtime Cannot start 
 datanode ...
 Tests run: 9, Failures: 0, Errors: 4, Skipped: 0
 {code}
 For more details, see https://builds.apache.org/job/Hadoop-Hdfs-trunk/1592/



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5430) Support TTL on CacheBasedPathDirectives

2013-11-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834488#comment-13834488
 ] 

Hadoop QA commented on HDFS-5430:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12616162/hdfs-5430-4.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
  org.apache.hadoop.hdfs.web.TestWebHDFS

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5598//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5598//console

This message is automatically generated.

 Support TTL on CacheBasedPathDirectives
 ---

 Key: HDFS-5430
 URL: https://issues.apache.org/jira/browse/HDFS-5430
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Colin Patrick McCabe
Assignee: Andrew Wang
Priority: Minor
 Attachments: hdfs-5430-1.patch, hdfs-5430-2.patch, hdfs-5430-3.patch, 
 hdfs-5430-4.patch


 It would be nice if CacheBasedPathDirectives would support an expiration 
 time, after which they would be automatically removed by the NameNode.  This 
 time would probably be in wall-clock time for the convenience of system 
 administrators.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5579) Under construction files make DataNode decommission take very long hours

2013-11-27 Thread zhaoyunjiong (JIRA)
zhaoyunjiong created HDFS-5579:
--

 Summary: Under construction files make DataNode decommission take 
very long hours
 Key: HDFS-5579
 URL: https://issues.apache.org/jira/browse/HDFS-5579
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.2.0, 1.2.0
Reporter: zhaoyunjiong
Assignee: zhaoyunjiong


We noticed that decommissioning DataNodes sometimes takes a very long time, 
even exceeding 100 hours.
After checking the code, I found that 
BlockManager.computeReplicationWorkForBlocks(List<List<Block>> 
blocksToReplicate) won't replicate blocks that belong to under-construction 
files; however, in 
BlockManager.isReplicationInProgress(DatanodeDescriptor srcNode), any block 
that still needs replication, whether it belongs to an under-construction file 
or not, keeps the decommission process running.
That's why the decommission sometimes takes so long.
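
To make the mismatch concrete, a toy model (all names invented, not the BlockManager API): the scheduler skips under-construction blocks while the progress check counts them, so the node never drains:

{code}
// Toy model of the bug: scheduling and progress-checking disagree on
// under-construction (UC) blocks, so decommission can stall indefinitely.
class DecommissionStallSketch {
  // Mirrors computeReplicationWorkForBlocks: UC blocks are never scheduled.
  static boolean willSchedule(boolean underConstruction) {
    return !underConstruction;
  }

  // Mirrors isReplicationInProgress: every under-replicated block counts.
  static boolean keepsDecommissionRunning(boolean underConstruction) {
    return true;
  }

  public static void main(String[] args) {
    boolean uc = true;
    // A UC block is never scheduled, yet it still blocks completion.
    System.out.println("scheduled=" + willSchedule(uc)
        + ", stillDecommissioning=" + keepsDecommissionRunning(uc));
  }
}
{code}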



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5579) Under construction files make DataNode decommission take very long hours

2013-11-27 Thread zhaoyunjiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhaoyunjiong updated HDFS-5579:
---

Attachment: HDFS-5579.patch
HDFS-5579-branch-1.2.patch

This patch lets the NameNode replicate blocks that belong to under-construction 
files, except their last blocks.
And if a decommissioning DataNode only has blocks that are the last blocks of 
under-construction files, each with more than one live replica left behind, the 
NameNode can set it to DECOMMISSIONED.
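
A sketch of that rule as described (hypothetical helper names, not the patch's code):

{code}
// Sketch of the fix: replicate UC-file blocks except the last one, and allow
// DECOMMISSIONED when only such last blocks remain with >1 live replica.
class DecommissionFixSketch {
  static boolean willSchedule(boolean ucFile, boolean lastBlock) {
    return !(ucFile && lastBlock); // UC blocks now replicate, except the last
  }

  static boolean canFinishDecommission(boolean onlyUcLastBlocksRemain,
      int minLiveReplicas) {
    return onlyUcLastBlocksRemain && minLiveReplicas > 1;
  }
}
{code}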

 Under construction files make DataNode decommission take very long hours
 

 Key: HDFS-5579
 URL: https://issues.apache.org/jira/browse/HDFS-5579
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 1.2.0, 2.2.0
Reporter: zhaoyunjiong
Assignee: zhaoyunjiong
 Attachments: HDFS-5579-branch-1.2.patch, HDFS-5579.patch


 We noticed that decommissioning DataNodes sometimes takes a very long time, 
 even exceeding 100 hours.
 After checking the code, I found that 
 BlockManager.computeReplicationWorkForBlocks(List<List<Block>> 
 blocksToReplicate) won't replicate blocks that belong to under-construction 
 files; however, in 
 BlockManager.isReplicationInProgress(DatanodeDescriptor srcNode), any block 
 that still needs replication, whether it belongs to an under-construction 
 file or not, keeps the decommission process running.
 That's why the decommission sometimes takes so long.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5430) Support TTL on CacheBasedPathDirectives

2013-11-27 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834551#comment-13834551
 ] 

Andrew Wang commented on HDFS-5430:
---

I ran the WebHDFS test locally a few times and it worked; it looks like a 
problem with the build host. We also know the OEV test will flake, since the 
patch changes a binary file.

Based on that, I will commit shortly on the strength of Colin's +1. Thanks for 
the review!

 Support TTL on CacheBasedPathDirectives
 ---

 Key: HDFS-5430
 URL: https://issues.apache.org/jira/browse/HDFS-5430
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Colin Patrick McCabe
Assignee: Andrew Wang
Priority: Minor
 Attachments: hdfs-5430-1.patch, hdfs-5430-2.patch, hdfs-5430-3.patch, 
 hdfs-5430-4.patch


 It would be nice if CacheBasedPathDirectives would support an expiration 
 time, after which they would be automatically removed by the NameNode.  This 
 time would probably be in wall-clock time for the convenience of system 
 administrators.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5430) Support TTL on CacheDirectives

2013-11-27 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-5430:
--

Summary: Support TTL on CacheDirectives  (was: Support TTL on 
CacheBasedPathDirectives)

 Support TTL on CacheDirectives
 --

 Key: HDFS-5430
 URL: https://issues.apache.org/jira/browse/HDFS-5430
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Colin Patrick McCabe
Assignee: Andrew Wang
Priority: Minor
 Attachments: hdfs-5430-1.patch, hdfs-5430-2.patch, hdfs-5430-3.patch, 
 hdfs-5430-4.patch


 It would be nice if CacheBasedPathDirectives would support an expiration 
 time, after which they would be automatically removed by the NameNode.  This 
 time would probably be in wall-clock time for the convenience of system 
 administrators.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5430) Support TTL on CacheDirectives

2013-11-27 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-5430:
--

   Resolution: Fixed
Fix Version/s: 3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to trunk. I made sure to run the OEV test and fix up the editsStored 
files.

 Support TTL on CacheDirectives
 --

 Key: HDFS-5430
 URL: https://issues.apache.org/jira/browse/HDFS-5430
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Colin Patrick McCabe
Assignee: Andrew Wang
Priority: Minor
 Fix For: 3.0.0

 Attachments: hdfs-5430-1.patch, hdfs-5430-2.patch, hdfs-5430-3.patch, 
 hdfs-5430-4.patch


 It would be nice if CacheBasedPathDirectives would support an expiration 
 time, after which they would be automatically removed by the NameNode.  This 
 time would probably be in wall-clock time for the convenience of system 
 administrators.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5430) Support TTL on CacheDirectives

2013-11-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834558#comment-13834558
 ] 

Hudson commented on HDFS-5430:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4807 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4807/])
HDFS-5430. Support TTL on CacheDirectives. Contributed by Andrew Wang. (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1546301)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirective.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveInfo.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveStats.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CacheReplicationMonitor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CacheManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOpCodes.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCacheDirectives.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testCacheAdminConf.xml


 Support TTL on CacheDirectives
 --

 Key: HDFS-5430
 URL: https://issues.apache.org/jira/browse/HDFS-5430
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Colin Patrick McCabe
Assignee: Andrew Wang
Priority: Minor
 Fix For: 3.0.0

 Attachments: hdfs-5430-1.patch, hdfs-5430-2.patch, hdfs-5430-3.patch, 
 hdfs-5430-4.patch


 It would be nice if CacheBasedPathDirectives would support an expiration 
 time, after which they would be automatically removed by the NameNode.  This 
 time would probably be in wall-clock time for the convenience of system 
 administrators.



--
This message was sent by Atlassian JIRA
(v6.1#6144)