[jira] [Commented] (HDFS-4533) start-dfs.sh ignored additional parameters besides -upgrade

2013-06-14 Thread Fengdong Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13683127#comment-13683127
 ] 

Fengdong Yu commented on HDFS-4533:
---

Hi Suresh,
I've sent an email to submit the ICLA. Thanks.

 start-dfs.sh ignored additional parameters besides -upgrade
 ---

 Key: HDFS-4533
 URL: https://issues.apache.org/jira/browse/HDFS-4533
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, namenode
Affects Versions: 2.0.3-alpha
Reporter: Fengdong Yu
  Labels: patch
 Fix For: 2.1.0-beta

 Attachments: HDFS-4533.patch


 start-dfs.sh only takes the -upgrade option and ignores all others. 
 So if you run the following command, the clusterId option is ignored:
 start-dfs.sh -upgrade -clusterId 1234
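The failure mode can be sketched generically (a hypothetical Java launcher for illustration, not the actual start-dfs.sh code): a parser that inspects only the first argument silently drops -clusterId, while forwarding the full argument vector preserves it.

```java
public class ArgPassthroughDemo {
    // Hypothetical sketch of the bug class described above: a launcher
    // that recognises only the first argument drops everything after it.
    static String buggyForward(String[] args) {
        // only args[0] is consulted; "-clusterId 1234" is silently lost
        return args.length > 0 && args[0].equals("-upgrade") ? "-upgrade" : "";
    }

    static String fixedForward(String[] args) {
        // forward every argument to the daemon command line
        return String.join(" ", args);
    }

    public static void main(String[] args) {
        String[] argv = {"-upgrade", "-clusterId", "1234"};
        System.out.println(buggyForward(argv)); // -upgrade
        System.out.println(fixedForward(argv)); // -upgrade -clusterId 1234
    }
}
```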

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4905) Add appendToFile command to hdfs dfs

2013-06-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13683128#comment-13683128
 ] 

Hadoop QA commented on HDFS-4905:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12587765/HDFS-4905.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4515//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4515//console

This message is automatically generated.

 Add appendToFile command to hdfs dfs
 --

 Key: HDFS-4905
 URL: https://issues.apache.org/jira/browse/HDFS-4905
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
Priority: Minor
 Attachments: HDFS-4905.patch


 An hdfs dfs -appendToFile... option would be quite useful for quick testing.



[jira] [Updated] (HDFS-4752) TestRBWBlockInvalidation fails on Windows due to file locking

2013-06-14 Thread Chuan Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chuan Liu updated HDFS-4752:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

Resolving this. The fix was already in both trunk and branch-2.

 TestRBWBlockInvalidation fails on Windows due to file locking
 -

 Key: HDFS-4752
 URL: https://issues.apache.org/jira/browse/HDFS-4752
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, test
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HDFS-4752.1.patch, HDFS-4752.2.patch


 The test attempts to invalidate a block by deleting its block file and meta 
 file.  This happens while a datanode thread holds the files open for write.  
 On Windows, this causes a locking conflict, and the test fails.



[jira] [Commented] (HDFS-3125) Add a service that enables JournalDaemon

2013-06-14 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13683176#comment-13683176
 ] 

Suresh Srinivas commented on HDFS-3125:
---

Not sure what the confusion is. On trunk, I can see the file:
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/journalservice/JournalService.java

 Add a service that enables JournalDaemon
 

 Key: HDFS-3125
 URL: https://issues.apache.org/jira/browse/HDFS-3125
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha, namenode
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HDFS-3125.patch, HDFS-3125.patch, HDFS-3125.patch, 
 HDFS-3125.patch


 In this subtask, I plan to add JournalService. It will provide the following 
 functionality:
 # Starts an RPC server with JournalProtocolService, or uses the provided RPC 
 server and adds the JournalProtocol service. 
 # Registers with the namenode.
 # Receives JournalProtocol-related requests and hands them over to a 
 listener.
  



[jira] [Commented] (HDFS-3125) Add a service that enables JournalDaemon

2013-06-14 Thread Fengdong Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13683202#comment-13683202
 ] 

Fengdong Yu commented on HDFS-3125:
---

bq. Not sure what the confusion is. On trunk, I can see the file:

Yes, it is there now. Maybe I missed something earlier.

 Add a service that enables JournalDaemon
 

 Key: HDFS-3125
 URL: https://issues.apache.org/jira/browse/HDFS-3125
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha, namenode
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HDFS-3125.patch, HDFS-3125.patch, HDFS-3125.patch, 
 HDFS-3125.patch


 In this subtask, I plan to add JournalService. It will provide the following 
 functionality:
 # Starts an RPC server with JournalProtocolService, or uses the provided RPC 
 server and adds the JournalProtocol service. 
 # Registers with the namenode.
 # Receives JournalProtocol-related requests and hands them over to a 
 listener.
  



[jira] [Commented] (HDFS-336) dfsadmin -report should report number of blocks from datanode

2013-06-14 Thread Lokesh Basu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13683205#comment-13683205
 ] 

Lokesh Basu commented on HDFS-336:
--

Sir, I'd like to work on this. I'm currently trying to figure out how to 
complete the task as you described in your first comment. I will let you know 
what I come up with.


 dfsadmin -report should report number of blocks from datanode
 -

 Key: HDFS-336
 URL: https://issues.apache.org/jira/browse/HDFS-336
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Lohit Vijayarenu
Priority: Minor
  Labels: newbie

 _hadoop dfsadmin -report_ seems to miss the number of blocks from a datanode. 
 The number of blocks hosted by a datanode is useful information that should be 
 included in the report. 



[jira] [Commented] (HDFS-4904) Remove JournalService

2013-06-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13683215#comment-13683215
 ] 

Hadoop QA commented on HDFS-4904:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12587769/HDFS-4904.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4516//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4516//console

This message is automatically generated.

 Remove JournalService
 -

 Key: HDFS-4904
 URL: https://issues.apache.org/jira/browse/HDFS-4904
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.0.3-alpha
Reporter: Suresh Srinivas
Assignee: Arpit Agarwal
 Attachments: HDFS-4904.patch


 JournalService class was added in HDFS-3099. Since it was not used in 
 HDFS-3077, which has JournalNodeRpcServer instead, I propose deleting this 
 dead code.



[jira] [Updated] (HDFS-4888) Refactor and fix FSNamesystem.getTurnOffTip to sanity

2013-06-14 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HDFS-4888:
---

Status: Open  (was: Patch Available)

Curiously, TestHASafeMode should break with this patch, but test-patch doesn't 
find that.

 Refactor and fix FSNamesystem.getTurnOffTip to sanity
 -

 Key: HDFS-4888
 URL: https://issues.apache.org/jira/browse/HDFS-4888
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.4-alpha, 3.0.0, 0.23.9
Reporter: Ravi Prakash
Assignee: Ravi Prakash
 Attachments: HDFS-4888.patch


 E.g., when resources are low, the command to leave safe mode is not printed.
 This method is unnecessarily complex.



[jira] [Commented] (HDFS-4845) FSEditLogLoader gets NPE while accessing INodeMap in TestEditLogRace

2013-06-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13683281#comment-13683281
 ] 

Hudson commented on HDFS-4845:
--

Integrated in Hadoop-Yarn-trunk #240 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/240/])
HDFS-4845. FSNamesystem.deleteInternal should acquire write-lock before 
changing the inode map.  Contributed by Arpit Agarwal (Revision 1492941)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1492941
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


 FSEditLogLoader gets NPE while accessing INodeMap in TestEditLogRace
 

 Key: HDFS-4845
 URL: https://issues.apache.org/jira/browse/HDFS-4845
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Kihwal Lee
Assignee: Arpit Agarwal
Priority: Critical
 Fix For: 2.1.0-beta

 Attachments: HDFS-4845.001.patch, HDFS-4845.002.patch, 
 HDFS-4845.003.patch, HDFS-4845.004.patch, HDFS-4845.005.patch


 TestEditLogRace fails occasionally because it gets NPE from manipulating 
 INodeMap while loading edits.



[jira] [Commented] (HDFS-4902) DFSClient.getSnapshotDiffReport should use string path rather than o.a.h.fs.Path

2013-06-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13683288#comment-13683288
 ] 

Hudson commented on HDFS-4902:
--

Integrated in Hadoop-Yarn-trunk #240 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/240/])
HDFS-4902. DFSClient#getSnapshotDiffReport should use string path rather 
than o.a.h.fs.Path. Contributed by Binglin Chang. (Revision 1492791)

 Result = SUCCESS
jing9 : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1492791
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDiffReport.java


 DFSClient.getSnapshotDiffReport should use string path rather than 
 o.a.h.fs.Path
 

 Key: HDFS-4902
 URL: https://issues.apache.org/jira/browse/HDFS-4902
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: snapshots
Affects Versions: 2.1.0-beta
Reporter: Binglin Chang
Assignee: Binglin Chang
 Fix For: 2.1.0-beta

 Attachments: HDFS-4902.001.patch, HDFS-4902.patch


 {code}
 org.apache.hadoop.ipc.RemoteException(java.lang.AssertionError): Absolute 
 path required
   at 
 org.apache.hadoop.hdfs.server.namenode.INode.getPathNames(INode.java:641)
   at 
 org.apache.hadoop.hdfs.server.namenode.INode.getPathComponents(INode.java:619)
   at 
 org.apache.hadoop.hdfs.server.namenode.INodeDirectory.getINodesInPath4Write(INodeDirectory.java:362)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirectory.getINodesInPath4Write(FSDirectory.java:1648)
   at 
 org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.diff(SnapshotManager.java:354)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getSnapshotDiffReport(FSNamesystem.java:6035)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getSnapshotDiffReport(NameNodeRpcServer.java:1172)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getSnapshotDiffReport(ClientNamenodeProtocolTranslatorPB.java:975)
   at 
 org.apache.hadoop.hdfs.DFSClient.getSnapshotDiffReport(DFSClient.java:2158)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getSnapshotDiffReport(DistributedFileSystem.java:990)
 {code}
 DistributedFileSystem.getSnapshotDiffReport uses a Path with a scheme, so 
 toString returns the path with the scheme, e.g. hdfs://:8020/abc/
 But FSNamesystem only accepts a simple path, not a whole URI.
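A minimal sketch of the mismatch (the host name "namenode" below is hypothetical, since the original report elides it; this is not the actual HDFS-4902 patch): stripping a fully qualified hdfs:// URI down to its path component yields the simple absolute path the server side accepts.

```java
import java.net.URI;

public class PathSchemeDemo {
    // A scheme-qualified URI string keeps "hdfs://host:port" in its
    // toString() form, which a server-side parser expecting a plain
    // absolute path would reject; the URI's path component is the
    // simple path it expects.
    static String toSimplePath(String qualified) {
        return URI.create(qualified).getPath();
    }

    public static void main(String[] args) {
        System.out.println(toSimplePath("hdfs://namenode:8020/abc/")); // /abc/
    }
}
```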



[jira] [Commented] (HDFS-4902) DFSClient.getSnapshotDiffReport should use string path rather than o.a.h.fs.Path

2013-06-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13683376#comment-13683376
 ] 

Hudson commented on HDFS-4902:
--

Integrated in Hadoop-Mapreduce-trunk #1457 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1457/])
HDFS-4902. DFSClient#getSnapshotDiffReport should use string path rather 
than o.a.h.fs.Path. Contributed by Binglin Chang. (Revision 1492791)

 Result = FAILURE
jing9 : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1492791
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDiffReport.java


 DFSClient.getSnapshotDiffReport should use string path rather than 
 o.a.h.fs.Path
 

 Key: HDFS-4902
 URL: https://issues.apache.org/jira/browse/HDFS-4902
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: snapshots
Affects Versions: 2.1.0-beta
Reporter: Binglin Chang
Assignee: Binglin Chang
 Fix For: 2.1.0-beta

 Attachments: HDFS-4902.001.patch, HDFS-4902.patch


 {code}
 org.apache.hadoop.ipc.RemoteException(java.lang.AssertionError): Absolute 
 path required
   at 
 org.apache.hadoop.hdfs.server.namenode.INode.getPathNames(INode.java:641)
   at 
 org.apache.hadoop.hdfs.server.namenode.INode.getPathComponents(INode.java:619)
   at 
 org.apache.hadoop.hdfs.server.namenode.INodeDirectory.getINodesInPath4Write(INodeDirectory.java:362)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirectory.getINodesInPath4Write(FSDirectory.java:1648)
   at 
 org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.diff(SnapshotManager.java:354)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getSnapshotDiffReport(FSNamesystem.java:6035)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getSnapshotDiffReport(NameNodeRpcServer.java:1172)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getSnapshotDiffReport(ClientNamenodeProtocolTranslatorPB.java:975)
   at 
 org.apache.hadoop.hdfs.DFSClient.getSnapshotDiffReport(DFSClient.java:2158)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getSnapshotDiffReport(DistributedFileSystem.java:990)
 {code}
 DistributedFileSystem.getSnapshotDiffReport uses a Path with a scheme, so 
 toString returns the path with the scheme, e.g. hdfs://:8020/abc/
 But FSNamesystem only accepts a simple path, not a whole URI.



[jira] [Commented] (HDFS-4845) FSEditLogLoader gets NPE while accessing INodeMap in TestEditLogRace

2013-06-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13683369#comment-13683369
 ] 

Hudson commented on HDFS-4845:
--

Integrated in Hadoop-Mapreduce-trunk #1457 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1457/])
HDFS-4845. FSNamesystem.deleteInternal should acquire write-lock before 
changing the inode map.  Contributed by Arpit Agarwal (Revision 1492941)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1492941
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


 FSEditLogLoader gets NPE while accessing INodeMap in TestEditLogRace
 

 Key: HDFS-4845
 URL: https://issues.apache.org/jira/browse/HDFS-4845
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Kihwal Lee
Assignee: Arpit Agarwal
Priority: Critical
 Fix For: 2.1.0-beta

 Attachments: HDFS-4845.001.patch, HDFS-4845.002.patch, 
 HDFS-4845.003.patch, HDFS-4845.004.patch, HDFS-4845.005.patch


 TestEditLogRace fails occasionally because it gets NPE from manipulating 
 INodeMap while loading edits.



[jira] [Commented] (HDFS-4902) DFSClient.getSnapshotDiffReport should use string path rather than o.a.h.fs.Path

2013-06-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13683390#comment-13683390
 ] 

Hudson commented on HDFS-4902:
--

Integrated in Hadoop-Hdfs-trunk #1430 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1430/])
HDFS-4902. DFSClient#getSnapshotDiffReport should use string path rather 
than o.a.h.fs.Path. Contributed by Binglin Chang. (Revision 1492791)

 Result = FAILURE
jing9 : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1492791
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDiffReport.java


 DFSClient.getSnapshotDiffReport should use string path rather than 
 o.a.h.fs.Path
 

 Key: HDFS-4902
 URL: https://issues.apache.org/jira/browse/HDFS-4902
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: snapshots
Affects Versions: 2.1.0-beta
Reporter: Binglin Chang
Assignee: Binglin Chang
 Fix For: 2.1.0-beta

 Attachments: HDFS-4902.001.patch, HDFS-4902.patch


 {code}
 org.apache.hadoop.ipc.RemoteException(java.lang.AssertionError): Absolute 
 path required
   at 
 org.apache.hadoop.hdfs.server.namenode.INode.getPathNames(INode.java:641)
   at 
 org.apache.hadoop.hdfs.server.namenode.INode.getPathComponents(INode.java:619)
   at 
 org.apache.hadoop.hdfs.server.namenode.INodeDirectory.getINodesInPath4Write(INodeDirectory.java:362)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSDirectory.getINodesInPath4Write(FSDirectory.java:1648)
   at 
 org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.diff(SnapshotManager.java:354)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getSnapshotDiffReport(FSNamesystem.java:6035)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getSnapshotDiffReport(NameNodeRpcServer.java:1172)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getSnapshotDiffReport(ClientNamenodeProtocolTranslatorPB.java:975)
   at 
 org.apache.hadoop.hdfs.DFSClient.getSnapshotDiffReport(DFSClient.java:2158)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getSnapshotDiffReport(DistributedFileSystem.java:990)
 {code}
 DistributedFileSystem.getSnapshotDiffReport uses a Path with a scheme, so 
 toString returns the path with the scheme, e.g. hdfs://:8020/abc/
 But FSNamesystem only accepts a simple path, not a whole URI.



[jira] [Commented] (HDFS-4845) FSEditLogLoader gets NPE while accessing INodeMap in TestEditLogRace

2013-06-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13683383#comment-13683383
 ] 

Hudson commented on HDFS-4845:
--

Integrated in Hadoop-Hdfs-trunk #1430 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1430/])
HDFS-4845. FSNamesystem.deleteInternal should acquire write-lock before 
changing the inode map.  Contributed by Arpit Agarwal (Revision 1492941)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1492941
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


 FSEditLogLoader gets NPE while accessing INodeMap in TestEditLogRace
 

 Key: HDFS-4845
 URL: https://issues.apache.org/jira/browse/HDFS-4845
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Kihwal Lee
Assignee: Arpit Agarwal
Priority: Critical
 Fix For: 2.1.0-beta

 Attachments: HDFS-4845.001.patch, HDFS-4845.002.patch, 
 HDFS-4845.003.patch, HDFS-4845.004.patch, HDFS-4845.005.patch


 TestEditLogRace fails occasionally because it gets NPE from manipulating 
 INodeMap while loading edits.



[jira] [Updated] (HDFS-4888) Refactor and fix FSNamesystem.getTurnOffTip to sanity

2013-06-14 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HDFS-4888:
---

Attachment: HDFS-4888.patch

Fixing TestHASafeMode

 Refactor and fix FSNamesystem.getTurnOffTip to sanity
 -

 Key: HDFS-4888
 URL: https://issues.apache.org/jira/browse/HDFS-4888
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.4-alpha, 0.23.9
Reporter: Ravi Prakash
Assignee: Ravi Prakash
 Attachments: HDFS-4888.patch, HDFS-4888.patch


 E.g., when resources are low, the command to leave safe mode is not printed.
 This method is unnecessarily complex.



[jira] [Updated] (HDFS-4888) Refactor and fix FSNamesystem.getTurnOffTip to sanity

2013-06-14 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HDFS-4888:
---

Status: Patch Available  (was: Open)

 Refactor and fix FSNamesystem.getTurnOffTip to sanity
 -

 Key: HDFS-4888
 URL: https://issues.apache.org/jira/browse/HDFS-4888
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.4-alpha, 3.0.0, 0.23.9
Reporter: Ravi Prakash
Assignee: Ravi Prakash
 Attachments: HDFS-4888.patch, HDFS-4888.patch


 E.g., when resources are low, the command to leave safe mode is not printed.
 This method is unnecessarily complex.



[jira] [Updated] (HDFS-4783) TestDelegationTokensWithHA#testHAUtilClonesDelegationTokens fails on Windows

2013-06-14 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-4783:


Hadoop Flags: Reviewed

 TestDelegationTokensWithHA#testHAUtilClonesDelegationTokens fails on Windows
 

 Key: HDFS-4783
 URL: https://issues.apache.org/jira/browse/HDFS-4783
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HDFS-4783.1.patch


 This test asserts that delegation tokens previously associated with a host via 
 a resolved IP address no longer match for selection when 
 hadoop.security.token.service.use_ip is set to false.  The test assumes that 
 127.0.0.1 resolves to the host name localhost.  On Windows, this is not the 
 case; instead it resolves to 127.0.0.1.



[jira] [Commented] (HDFS-4783) TestDelegationTokensWithHA#testHAUtilClonesDelegationTokens fails on Windows

2013-06-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13683520#comment-13683520
 ] 

Hudson commented on HDFS-4783:
--

Integrated in Hadoop-trunk-Commit #3922 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3922/])
HDFS-4783. TestDelegationTokensWithHA#testHAUtilClonesDelegationTokens 
fails on Windows. Contributed by Chris Nauroth. (Revision 1493149)

 Result = SUCCESS
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1493149
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDelegationTokensWithHA.java


 TestDelegationTokensWithHA#testHAUtilClonesDelegationTokens fails on Windows
 

 Key: HDFS-4783
 URL: https://issues.apache.org/jira/browse/HDFS-4783
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HDFS-4783.1.patch


 This test asserts that delegation tokens previously associated with a host via 
 a resolved IP address no longer match for selection when 
 hadoop.security.token.service.use_ip is set to false.  The test assumes that 
 127.0.0.1 resolves to the host name localhost.  On Windows, this is not the 
 case; instead it resolves to 127.0.0.1.



[jira] [Updated] (HDFS-4783) TestDelegationTokensWithHA#testHAUtilClonesDelegationTokens fails on Windows

2013-06-14 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-4783:


   Resolution: Fixed
Fix Version/s: 2.1.0-beta
   3.0.0
   Status: Resolved  (was: Patch Available)

I committed this to trunk, branch-2, and branch-2.1-beta.  Thanks to Chuan and 
Daryn for reviews.

[~daryn], let me take a stab at describing the underlying issue that you 
mentioned:

# {{NameNode#initialize}} starts initializing the namenode from the given 
{{Configuration}}.
# The {{NameNodeRpcServer}} constructor builds the RPC server, binding to the 
address specified in the {{Configuration}}.  The resulting address may differ 
from what is in the {{Configuration}} if using the wildcard address (0.0.0.0) or 
an ephemeral port.
# {{NameNodeRpcServer}} then sets the address back in the {{Configuration}} to 
the real resulting address.  As a side effect, the forward lookup and then 
reverse lookup may result in an unexpected hostname in the {{Configuration}} if 
there are multiple names for the host (e.g. CNAME records).

Does this accurately describe what you had in mind?  If so, let me know, and 
I'll paste it into a new issue.

I'm not yet sure what we could do about it, but it might help to start tracking 
it.  I recently saw this behavior cause some confusion for a deployment that 
was resolving the NN to an unexpected address.
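As a rough illustration of the bind-then-write-back issue, here is a minimal self-contained Java sketch (invented names, not the NameNode code): the address actually bound can differ from the configured wildcard/ephemeral one, and picking a concrete host name for the write-back involves a lookup whose result depends on the host's DNS entries.

```java
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class BindWriteBackDemo {
    public static void main(String[] args) throws Exception {
        // Configured address uses the wildcard host and an ephemeral port,
        // mirroring what an RPC server might receive from a Configuration.
        String configuredHost = "0.0.0.0";
        int configuredPort = 0;

        try (ServerSocket server = new ServerSocket()) {
            server.bind(new InetSocketAddress(configuredHost, configuredPort));

            // The real bound port differs from the configured ephemeral 0.
            int actualPort = server.getLocalPort();
            if (actualPort == configuredPort) {
                throw new AssertionError("expected a real ephemeral port");
            }

            // Writing the result back requires choosing a concrete host name.
            // A lookup on the local address may return any of the host's
            // names (e.g. a CNAME), not necessarily the configured one.
            InetAddress local = InetAddress.getLocalHost();
            String reverseName = local.getCanonicalHostName();
            System.out.println("configured=" + configuredHost + ":" + configuredPort
                + " actual=" + reverseName + ":" + actualPort);
        }
    }
}
```

The printed host name is environment-dependent, which is exactly why the write-back can surprise a deployment.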

 TestDelegationTokensWithHA#testHAUtilClonesDelegationTokens fails on Windows
 

 Key: HDFS-4783
 URL: https://issues.apache.org/jira/browse/HDFS-4783
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: HDFS-4783.1.patch


 This test asserts that delegation tokens previously associated with a host by 
 a resolved IP address no longer match for selection when 
 hadoop.security.token.service.use_ip is set to false.  The test assumes that 
 127.0.0.1 resolves to the host name localhost.  On Windows, this is not the 
 case; instead it resolves to 127.0.0.1.



[jira] [Updated] (HDFS-4875) Add a test for testing snapshot file length

2013-06-14 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-4875:


Affects Version/s: 2.1.0-beta
   3.0.0

 Add a test for testing snapshot file length
 ---

 Key: HDFS-4875
 URL: https://issues.apache.org/jira/browse/HDFS-4875
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: snapshots, test
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Arpit Agarwal
Priority: Minor
 Attachments: HDFS-4875.001.patch


 Here is a test idea from Suresh:
 - A file had length x at the time of snapshot (say s1) creation
 - Later y bytes get added to the file through append
 When a client gets block locations for the file from snapshot s1, the length it 
 knows is x. We should make sure it cannot read beyond length x.
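The invariant can be modeled without a cluster. The sketch below (illustrative names, not the HDFS API or the actual test) captures what the test should assert: a reader holding the snapshot-time length x never sees the y appended bytes.

```java
import java.io.ByteArrayOutputStream;
import java.util.Arrays;

// Minimal model of the snapshot-read invariant: a reader that obtained
// block locations from snapshot s1 knows length x and must never observe
// bytes appended after the snapshot was taken.
public class SnapshotLengthModel {
    private final ByteArrayOutputStream data = new ByteArrayOutputStream();

    void append(byte[] bytes) {
        data.write(bytes, 0, bytes.length);
    }

    /** Capture the file length at snapshot-creation time. */
    int createSnapshot() {
        return data.size();
    }

    /** Read through the snapshot: clamp to the snapshot length x. */
    byte[] readSnapshot(int snapshotLength) {
        return Arrays.copyOf(data.toByteArray(),
            Math.min(snapshotLength, data.size()));
    }

    public static void main(String[] args) {
        SnapshotLengthModel file = new SnapshotLengthModel();
        file.append(new byte[]{1, 2, 3});   // file has length x = 3
        int s1 = file.createSnapshot();     // snapshot records x
        file.append(new byte[]{4, 5});      // y = 2 bytes appended later
        byte[] seen = file.readSnapshot(s1);
        if (seen.length != 3) {
            throw new AssertionError("read past snapshot length");
        }
        System.out.println("snapshot read returned " + seen.length + " bytes");
    }
}
```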



[jira] [Updated] (HDFS-4875) Add a test for testing snapshot file length

2013-06-14 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-4875:


Attachment: HDFS-4875.001.patch

 Add a test for testing snapshot file length
 ---

 Key: HDFS-4875
 URL: https://issues.apache.org/jira/browse/HDFS-4875
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: snapshots, test
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Arpit Agarwal
Priority: Minor
 Attachments: HDFS-4875.001.patch


 Here is a test idea from Suresh:
 - A file had length x at the time of snapshot (say s1) creation
 - Later y bytes get added to the file through append
 When a client gets block locations for the file from snapshot s1, the length it 
 knows is x. We should make sure it cannot read beyond length x.



[jira] [Updated] (HDFS-4875) Add a test for testing snapshot file length

2013-06-14 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-4875:


Status: Patch Available  (was: Open)

 Add a test for testing snapshot file length
 ---

 Key: HDFS-4875
 URL: https://issues.apache.org/jira/browse/HDFS-4875
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: snapshots, test
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Arpit Agarwal
Priority: Minor
 Attachments: HDFS-4875.001.patch


 Here is a test idea from Suresh:
 - A file had length x at the time of snapshot (say s1) creation
 - Later y bytes get added to the file through append
 When a client gets block locations for the file from snapshot s1, the length it 
 knows is x. We should make sure it cannot read beyond length x.



[jira] [Commented] (HDFS-4888) Refactor and fix FSNamesystem.getTurnOffTip to sanity

2013-06-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13683620#comment-13683620
 ] 

Hadoop QA commented on HDFS-4888:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12587830/HDFS-4888.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4517//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4517//console

This message is automatically generated.

 Refactor and fix FSNamesystem.getTurnOffTip to sanity
 -

 Key: HDFS-4888
 URL: https://issues.apache.org/jira/browse/HDFS-4888
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.4-alpha, 0.23.9
Reporter: Ravi Prakash
Assignee: Ravi Prakash
 Attachments: HDFS-4888.patch, HDFS-4888.patch


 E.g., when resources are low, the command to leave safe mode is not printed.
 This method is unnecessarily complex.
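A hedged sketch of the idea behind the fix (method name mirrors getTurnOffTip, but the messages and structure here are illustrative, not the FSNamesystem code): always include the command for leaving safe mode, including in the low-resources case.

```java
// Illustrative reconstruction of a simpler safe-mode tip: every branch that
// requires manual intervention also prints the command to leave safe mode.
public class SafeModeTip {
    static String turnOffTip(boolean manual, boolean resourcesLow) {
        String leaveCmd =
            "Use \"hdfs dfsadmin -safemode leave\" to turn safe mode off.";
        if (resourcesLow) {
            // The bug described above: this branch used to omit leaveCmd.
            return "Resources are low on NN. Please add or free up more "
                 + "resources then turn off safe mode manually. " + leaveCmd;
        }
        if (manual) {
            return "It was turned on manually. " + leaveCmd;
        }
        return "Safe mode will be turned off automatically.";
    }

    public static void main(String[] args) {
        String tip = turnOffTip(false, true);
        if (!tip.contains("-safemode leave")) {
            throw new AssertionError("leave command missing from tip");
        }
        System.out.println(tip);
    }
}
```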



[jira] [Commented] (HDFS-4904) Remove JournalService

2013-06-14 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13683628#comment-13683628
 ] 

Chris Nauroth commented on HDFS-4904:
-

{quote}
Not sure about this reference to journalservice in 
hadoop-hdfs-project/hadoop-hdfs/pom.xml.
{quote}

This is pre-compiling the JSP files under 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/journal into class files for 
the distro/deployment.  Those JSP files don't appear to be doing anything, so I 
think the whole directory can be removed as a part of this change, and we can 
remove the execution from pom.xml.

 Remove JournalService
 -

 Key: HDFS-4904
 URL: https://issues.apache.org/jira/browse/HDFS-4904
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.0.3-alpha
Reporter: Suresh Srinivas
Assignee: Arpit Agarwal
 Attachments: HDFS-4904.patch


 JournalService class was added in HDFS-3099. Since it was not used in 
 HDFS-3077, which has JournalNodeRpcServer instead, I propose deleting this 
 dead code.



[jira] [Updated] (HDFS-4904) Remove JournalService

2013-06-14 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-4904:


Attachment: HDFS-4904.patch

Thanks for the confirmation Chris. Updated the patch.

 Remove JournalService
 -

 Key: HDFS-4904
 URL: https://issues.apache.org/jira/browse/HDFS-4904
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.0.3-alpha
Reporter: Suresh Srinivas
Assignee: Arpit Agarwal
 Attachments: HDFS-4904.patch, HDFS-4904.patch


 JournalService class was added in HDFS-3099. Since it was not used in 
 HDFS-3077, which has JournalNodeRpcServer instead, I propose deleting this 
 dead code.



[jira] [Commented] (HDFS-4904) Remove JournalService

2013-06-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13683644#comment-13683644
 ] 

Hadoop QA commented on HDFS-4904:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12587854/HDFS-4904.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4519//console


 Remove JournalService
 -

 Key: HDFS-4904
 URL: https://issues.apache.org/jira/browse/HDFS-4904
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.0.3-alpha
Reporter: Suresh Srinivas
Assignee: Arpit Agarwal
 Attachments: HDFS-4904.patch, HDFS-4904.patch


 JournalService class was added in HDFS-3099. Since it was not used in 
 HDFS-3077, which has JournalNodeRpcServer instead, I propose deleting this 
 dead code.



[jira] [Commented] (HDFS-4904) Remove JournalService

2013-06-14 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13683647#comment-13683647
 ] 

Arpit Agarwal commented on HDFS-4904:
-

No new tests should be needed since we are just removing unused code.

 Remove JournalService
 -

 Key: HDFS-4904
 URL: https://issues.apache.org/jira/browse/HDFS-4904
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.0.3-alpha
Reporter: Suresh Srinivas
Assignee: Arpit Agarwal
 Attachments: HDFS-4904.patch, HDFS-4904.patch


 JournalService class was added in HDFS-3099. Since it was not used in 
 HDFS-3077, which has JournalNodeRpcServer instead, I propose deleting this 
 dead code.



[jira] [Comment Edited] (HDFS-3125) Add a service that enables JournalDaemon

2013-06-14 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13683652#comment-13683652
 ] 

Suresh Srinivas edited comment on HDFS-3125 at 6/14/13 6:43 PM:


BTW I have filed HDFS-4904 to remove this code, which is no longer being used. 
So comment if you have any issues with the removal on that jira.

  was (Author: sureshms):
BTW I have filed, HDFS-4904 to remove this code, which is no longer being 
used. So comment if you have any issues with it on that jira.
  
 Add a service that enables JournalDaemon
 

 Key: HDFS-3125
 URL: https://issues.apache.org/jira/browse/HDFS-3125
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha, namenode
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HDFS-3125.patch, HDFS-3125.patch, HDFS-3125.patch, 
 HDFS-3125.patch


 In this subtask, I plan to add JournalService. It will provide the following 
 functionality:
 # Starts an RPC server with JournalProtocolService, or uses the RPC server 
 provided and adds the JournalProtocol service. 
 # Registers with the namenode.
 # Receives JournalProtocol-related requests and hands them over to a 
 listener.
  



[jira] [Commented] (HDFS-3125) Add a service that enables JournalDaemon

2013-06-14 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13683652#comment-13683652
 ] 

Suresh Srinivas commented on HDFS-3125:
---

BTW I have filed HDFS-4904 to remove this code, which is no longer being used. 
So comment if you have any issues with it on that jira.

 Add a service that enables JournalDaemon
 

 Key: HDFS-3125
 URL: https://issues.apache.org/jira/browse/HDFS-3125
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha, namenode
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HDFS-3125.patch, HDFS-3125.patch, HDFS-3125.patch, 
 HDFS-3125.patch


 In this subtask, I plan to add JournalService. It will provide the following 
 functionality:
 # Starts an RPC server with JournalProtocolService, or uses the RPC server 
 provided and adds the JournalProtocol service. 
 # Registers with the namenode.
 # Receives JournalProtocol-related requests and hands them over to a 
 listener.
  



[jira] [Commented] (HDFS-4875) Add a test for testing snapshot file length

2013-06-14 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13683659#comment-13683659
 ] 

Jing Zhao commented on HDFS-4875:
-

Thanks for adding the test, Arpit! The patch looks good to me. Only one minor 
nit: the RANDOM field can be removed because it has not been used.

 Add a test for testing snapshot file length
 ---

 Key: HDFS-4875
 URL: https://issues.apache.org/jira/browse/HDFS-4875
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: snapshots, test
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Arpit Agarwal
Priority: Minor
 Attachments: HDFS-4875.001.patch


 Here is a test idea from Suresh:
 - A file had length x at the time of snapshot (say s1) creation
 - Later y bytes get added to the file through append
 When a client gets block locations for the file from snapshot s1, the length it 
 knows is x. We should make sure it cannot read beyond length x.



[jira] [Updated] (HDFS-3934) duplicative dfs_hosts entries handled wrong

2013-06-14 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-3934:
---

Fix Version/s: 3.0.0

 duplicative dfs_hosts entries handled wrong
 ---

 Key: HDFS-3934
 URL: https://issues.apache.org/jira/browse/HDFS-3934
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.1-alpha
Reporter: Andy Isaacson
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-3934.001.patch, HDFS-3934.002.patch, 
 HDFS-3934.003.patch, HDFS-3934.004.patch, HDFS-3934.005.patch, 
 HDFS-3934.006.patch, HDFS-3934.007.patch, HDFS-3934.008.patch, 
 HDFS-3934.010.patch, HDFS-3934.011.patch, HDFS-3934.012.patch, 
 HDFS-3934.013.patch, HDFS-3934.014.patch, HDFS-3934.015.patch, 
 HDFS-3934.016.patch, HDFS-3934.017.patch


 A dead DN listed in dfs_hosts_allow.txt by IP and in dfs_hosts_exclude.txt by 
 hostname ends up being displayed twice in {{dfsnodelist.jsp?whatNodes=DEAD}} 
 after the NN restarts because {{getDatanodeListForReport}} does not handle 
 such a pseudo-duplicate correctly:
 # the "Remove any nodes we know about from the map" loop no longer has the 
 knowledge to remove the spurious entries
 # the "The remaining nodes are ones that are referenced by the hosts files" 
 loop does not do hostname lookups, so does not know that the IP and hostname 
 refer to the same host.
 Relatedly, such an IP-based dfs_hosts entry results in a cosmetic problem in 
 the JSP output: the *Node* column shows :50010 as the nodename, with HTML 
 markup {{<a 
 href="http://:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=%2F&nnaddr=172.29.97.196:8020"
  title="172.29.97.216:50010">:50010</a>}}.
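The missing hostname-lookup step can be sketched as follows: before comparing include/exclude entries, resolve each one so an IP entry and a hostname entry for the same machine are recognized as one node. This is an illustration of the idea only, not the actual getDatanodeListForReport code.

```java
import java.net.InetAddress;

// Sketch: two dfs_hosts entries describe the same node if any of their
// resolved addresses coincide. Without this normalization, "127.0.0.1" in
// the include file and "localhost" in the exclude file look like two nodes.
public class HostEntryDedup {
    static boolean sameHost(String a, String b) throws Exception {
        for (InetAddress x : InetAddress.getAllByName(a)) {
            for (InetAddress y : InetAddress.getAllByName(b)) {
                if (x.getHostAddress().equals(y.getHostAddress())) {
                    return true;  // entries resolve to a common address
                }
            }
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        // On a standard hosts file, "localhost" maps to the loopback address,
        // so these two entries describe the same node.
        boolean dup = sameHost("localhost", "127.0.0.1");
        if (!dup) {
            throw new AssertionError("expected entries to resolve to the same host");
        }
        System.out.println("localhost and 127.0.0.1 identified as the same node");
    }
}
```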



[jira] [Commented] (HDFS-4875) Add a test for testing snapshot file length

2013-06-14 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13683677#comment-13683677
 ] 

Arpit Agarwal commented on HDFS-4875:
-

Fixed, thanks for looking again!

 Add a test for testing snapshot file length
 ---

 Key: HDFS-4875
 URL: https://issues.apache.org/jira/browse/HDFS-4875
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: snapshots, test
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Arpit Agarwal
Priority: Minor
 Attachments: HDFS-4875.001.patch, HDFS-4875.002.patch


 Here is a test idea from Suresh:
 - A file had length x at the time of snapshot (say s1) creation
 - Later y bytes get added to the file through append
 When a client gets block locations for the file from snapshot s1, the length it 
 knows is x. We should make sure it cannot read beyond length x.



[jira] [Commented] (HDFS-4875) Add a test for testing snapshot file length

2013-06-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13683699#comment-13683699
 ] 

Hadoop QA commented on HDFS-4875:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12587848/HDFS-4875.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4518//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4518//console


 Add a test for testing snapshot file length
 ---

 Key: HDFS-4875
 URL: https://issues.apache.org/jira/browse/HDFS-4875
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: snapshots, test
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Arpit Agarwal
Priority: Minor
 Attachments: HDFS-4875.001.patch, HDFS-4875.002.patch


 Here is a test idea from Suresh:
 - A file had length x at the time of snapshot (say s1) creation
 - Later y bytes get added to the file through append
 When a client gets block locations for the file from snapshot s1, the length it 
 knows is x. We should make sure it cannot read beyond length x.



[jira] [Commented] (HDFS-4904) Remove JournalService

2013-06-14 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13683720#comment-13683720
 ] 

Chris Nauroth commented on HDFS-4904:
-

Sorry, Arpit.  I think I led us astray.  Those JSP pages are still in use.  
Running {{hdfs journalnode}} launches {{JournalNode}}, which creates 
{{JournalNodeHttpServer}}, which creates an {{HttpServer}} with the name 
"journal".  This causes the {{HttpServer}} to load the JSP pages from 
webapps/journal.

I said earlier that these JSP pages don't do much, but there is one thing:

{code}
<b><a href="/logs/">Logs</a></b>
{code}

If a user looking for the journal node logs points a browser at the root, then 
they'll get a friendly link to the correct URL for logs.  Therefore, I think 
the JSP pages need to stay in place.  This means that your first patch is good. 
 So sorry for the churn.

+1 for the first patch.  I'll commit that soon.


 Remove JournalService
 -

 Key: HDFS-4904
 URL: https://issues.apache.org/jira/browse/HDFS-4904
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.0.3-alpha
Reporter: Suresh Srinivas
Assignee: Arpit Agarwal
 Attachments: HDFS-4904.patch, HDFS-4904.patch


 JournalService class was added in HDFS-3099. Since it was not used in 
 HDFS-3077, which has JournalNodeRpcServer instead, I propose deleting this 
 dead code.



[jira] [Commented] (HDFS-4904) Remove JournalService

2013-06-14 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13683725#comment-13683725
 ] 

Arpit Agarwal commented on HDFS-4904:
-

Thanks, good to learn!

 Remove JournalService
 -

 Key: HDFS-4904
 URL: https://issues.apache.org/jira/browse/HDFS-4904
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.0.3-alpha
Reporter: Suresh Srinivas
Assignee: Arpit Agarwal
 Attachments: HDFS-4904.patch, HDFS-4904.patch


 JournalService class was added in HDFS-3099. Since it was not used in 
 HDFS-3077, which has JournalNodeRpcServer instead, I propose deleting this 
 dead code.



[jira] [Updated] (HDFS-4904) Remove JournalService

2013-06-14 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-4904:


 Target Version/s: 3.0.0
Affects Version/s: (was: 2.0.3-alpha)
   3.0.0

Setting Affects Version/s and Target Version/s to 3.0.0.  The unused code only 
exists in trunk.  It does not exist in branch-2 or branch-2.1-beta.  
(Presumably it was never merged from trunk.)

 Remove JournalService
 -

 Key: HDFS-4904
 URL: https://issues.apache.org/jira/browse/HDFS-4904
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Suresh Srinivas
Assignee: Arpit Agarwal
 Attachments: HDFS-4904.patch, HDFS-4904.patch


 JournalService class was added in HDFS-3099. Since it was not used in 
 HDFS-3077, which has JournalNodeRpcServer instead, I propose deleting this 
 dead code.



[jira] [Updated] (HDFS-3934) duplicative dfs_hosts entries handled wrong

2013-06-14 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-3934:
---

Fix Version/s: 2.3.0
   2.1.0-beta

 duplicative dfs_hosts entries handled wrong
 ---

 Key: HDFS-3934
 URL: https://issues.apache.org/jira/browse/HDFS-3934
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.1-alpha
Reporter: Andy Isaacson
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 3.0.0, 2.1.0-beta, 2.3.0

 Attachments: HDFS-3934.001.patch, HDFS-3934.002.patch, 
 HDFS-3934.003.patch, HDFS-3934.004.patch, HDFS-3934.005.patch, 
 HDFS-3934.006.patch, HDFS-3934.007.patch, HDFS-3934.008.patch, 
 HDFS-3934.010.patch, HDFS-3934.011.patch, HDFS-3934.012.patch, 
 HDFS-3934.013.patch, HDFS-3934.014.patch, HDFS-3934.015.patch, 
 HDFS-3934.016.patch, HDFS-3934.017.patch


 A dead DN listed in dfs_hosts_allow.txt by IP and in dfs_hosts_exclude.txt by 
 hostname ends up being displayed twice in {{dfsnodelist.jsp?whatNodes=DEAD}} 
 after the NN restarts because {{getDatanodeListForReport}} does not handle 
 such a pseudo-duplicate correctly:
 # the "Remove any nodes we know about from the map" loop no longer has the 
 knowledge to remove the spurious entries
 # the "The remaining nodes are ones that are referenced by the hosts files" 
 loop does not do hostname lookups, so does not know that the IP and hostname 
 refer to the same host.
 Relatedly, such an IP-based dfs_hosts entry results in a cosmetic problem in 
 the JSP output: the *Node* column shows :50010 as the nodename, with HTML 
 markup {{<a 
 href="http://:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=%2F&nnaddr=172.29.97.196:8020"
  title="172.29.97.216:50010">:50010</a>}}.



[jira] [Updated] (HDFS-3934) duplicative dfs_hosts entries handled wrong

2013-06-14 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-3934:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

committed to branch-2.1-beta and branch-2

 duplicative dfs_hosts entries handled wrong
 ---

 Key: HDFS-3934
 URL: https://issues.apache.org/jira/browse/HDFS-3934
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.1-alpha
Reporter: Andy Isaacson
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 3.0.0, 2.1.0-beta, 2.3.0

 Attachments: HDFS-3934.001.patch, HDFS-3934.002.patch, 
 HDFS-3934.003.patch, HDFS-3934.004.patch, HDFS-3934.005.patch, 
 HDFS-3934.006.patch, HDFS-3934.007.patch, HDFS-3934.008.patch, 
 HDFS-3934.010.patch, HDFS-3934.011.patch, HDFS-3934.012.patch, 
 HDFS-3934.013.patch, HDFS-3934.014.patch, HDFS-3934.015.patch, 
 HDFS-3934.016.patch, HDFS-3934.017.patch


 A dead DN listed in dfs_hosts_allow.txt by IP and in dfs_hosts_exclude.txt by 
 hostname ends up being displayed twice in {{dfsnodelist.jsp?whatNodes=DEAD}} 
 after the NN restarts because {{getDatanodeListForReport}} does not handle 
 such a pseudo-duplicate correctly:
 # the "Remove any nodes we know about from the map" loop no longer has the 
 knowledge to remove the spurious entries
 # the "The remaining nodes are ones that are referenced by the hosts files" 
 loop does not do hostname lookups, so does not know that the IP and hostname 
 refer to the same host.
 Relatedly, such an IP-based dfs_hosts entry results in a cosmetic problem in 
 the JSP output: the *Node* column shows :50010 as the nodename, with HTML 
 markup {{<a 
 href="http://:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=%2F&nnaddr=172.29.97.196:8020"
  title="172.29.97.216:50010">:50010</a>}}.



[jira] [Updated] (HDFS-4904) Remove JournalService

2013-06-14 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-4904:


   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

I committed this to trunk.  Arpit, thank you for the patch!

 Remove JournalService
 -

 Key: HDFS-4904
 URL: https://issues.apache.org/jira/browse/HDFS-4904
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Suresh Srinivas
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-4904.patch, HDFS-4904.patch


 JournalService class was added in HDFS-3099. Since it was not used in 
 HDFS-3077, which has JournalNodeRpcServer instead, I propose deleting this 
 dead code.



[jira] [Updated] (HDFS-4904) Remove JournalService

2013-06-14 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-4904:


Hadoop Flags: Reviewed

 Remove JournalService
 -

 Key: HDFS-4904
 URL: https://issues.apache.org/jira/browse/HDFS-4904
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Suresh Srinivas
Assignee: Arpit Agarwal
 Attachments: HDFS-4904.patch, HDFS-4904.patch


 JournalService class was added in HDFS-3099. Since it was not used in 
 HDFS-3077, which has JournalNodeRpcServer instead, I propose deleting this 
 dead code.



[jira] [Commented] (HDFS-4904) Remove JournalService

2013-06-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13683753#comment-13683753
 ] 

Hudson commented on HDFS-4904:
--

Integrated in Hadoop-trunk-Commit #3926 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3926/])
HDFS-4904. Remove JournalService. Contributed by Arpit Agarwal. (Revision 
1493235)

 Result = SUCCESS
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1493235
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/journalservice
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/journalservice/TestJournalService.java


 Remove JournalService
 -

 Key: HDFS-4904
 URL: https://issues.apache.org/jira/browse/HDFS-4904
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Suresh Srinivas
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-4904.patch, HDFS-4904.patch


 JournalService class was added in HDFS-3099. Since it was not used in 
 HDFS-3077, which has JournalNodeRpcServer instead, I propose deleting this 
 dead code.



[jira] [Commented] (HDFS-4849) Idempotent create and append operations.

2013-06-14 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13683763#comment-13683763
 ] 

Konstantin Shvachko commented on HDFS-4849:
---

The first formatting change is in code adjacent to the lines I modified. 
Couldn't resist aligning it correctly. The patch goes to branch-2 as well, so 
it will not lead to divergence in formatting.
The second is formally a replacement of a line with two spaces with a new 
method surrounded by empty lines, not just a whitespace change.
If you feel strongly I can revert the alignment, but would rather commit as is.

Test failures were because of:
TestBlocksWithNotEnoughRacks - HDFS-3538
TestQuorumJournalManager - HDFS-4899


 Idempotent create and append operations.
 

 Key: HDFS-4849
 URL: https://issues.apache.org/jira/browse/HDFS-4849
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.0.4-alpha
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Attachments: idempotentCreate.patch, idempotentCreate.patch


 create, append and delete operations can be made idempotent. This will reduce 
 chances for a job or other app failures when NN fails over.
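 For illustration only, the retry-safety being proposed can be sketched as a 
 toy model. The names below are hypothetical and this is not Hadoop's actual 
 code: the point is that a create retried by the same client (e.g. after a 
 NameNode failover) succeeds as a no-op instead of failing with "already 
 exists", while a create by a different client still fails.

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Toy model of an idempotent create -- illustrative names only, not the
// actual HDFS-4849 patch.
class IdempotentCreateSketch {
  private final Map<String, String> leaseHolder = new HashMap<>();

  void create(String path, String clientId) throws IOException {
    String holder = leaseHolder.get(path);
    if (holder == null) {
      leaseHolder.put(path, clientId);   // normal create path
    } else if (!holder.equals(clientId)) {
      throw new IOException(path + " already created by " + holder);
    }
    // holder == clientId: a retried create is treated as a no-op success
  }

  public static void main(String[] args) throws IOException {
    IdempotentCreateSketch nn = new IdempotentCreateSketch();
    nn.create("/tmp/job.out", "client-1");
    nn.create("/tmp/job.out", "client-1");  // retry after failover: succeeds
    System.out.println("retried create succeeded");
  }
}
```

 With this behavior a client retrying across a failover no longer turns a 
 transient RPC timeout into a job failure.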



[jira] [Commented] (HDFS-4849) Idempotent create and append operations.

2013-06-14 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13683766#comment-13683766
 ] 

Konstantin Boudnik commented on HDFS-4849:
--

Nope, I am not opinionated. Go for it - it makes sense.

 Idempotent create and append operations.
 

 Key: HDFS-4849
 URL: https://issues.apache.org/jira/browse/HDFS-4849
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.0.4-alpha
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Attachments: idempotentCreate.patch, idempotentCreate.patch


 create, append and delete operations can be made idempotent. This will reduce 
 chances for a job or other app failures when NN fails over.



[jira] [Commented] (HDFS-4849) Idempotent create and append operations.

2013-06-14 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13683793#comment-13683793
 ] 

Suresh Srinivas commented on HDFS-4849:
---

Konstantin, I am still reviewing it. Some early comments:
# This will throw LeaseExpiredException... it should be moved into the try 
block, above the line that calls checkLease
# The LOG.info indicating a retry of create or append could just be LOG.debug?
# Please add { } after if conditions, per the coding convention
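The logging and brace points can be illustrated with a small sketch. All 
names here are hypothetical, not the actual patch code:

```java
// Illustrative sketch of the review points above -- hypothetical names,
// not the actual HDFS-4849 patch.
class RetryLoggingSketch {
  interface Log {
    boolean isDebugEnabled();
    void debug(String msg);
  }

  static void logRetry(Log LOG, boolean isRetry, String src) {
    if (isRetry) {                       // { } even for short bodies,
      if (LOG.isDebugEnabled()) {        // per the coding convention
        LOG.debug("Retrying create of " + src);  // debug, not info
      }
    }
  }

  public static void main(String[] args) {
    StringBuilder out = new StringBuilder();
    Log log = new Log() {
      public boolean isDebugEnabled() { return true; }
      public void debug(String msg) { out.append(msg); }
    };
    logRetry(log, true, "/tmp/f");
    System.out.println(out);
  }
}
```

Guarding the message construction behind isDebugEnabled also avoids building 
the string at all when debug logging is off.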


 Idempotent create and append operations.
 

 Key: HDFS-4849
 URL: https://issues.apache.org/jira/browse/HDFS-4849
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.0.4-alpha
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Attachments: idempotentCreate.patch, idempotentCreate.patch


 create, append and delete operations can be made idempotent. This will reduce 
 chances for a job or other app failures when NN fails over.



[jira] [Commented] (HDFS-4888) Refactor and fix FSNamesystem.getTurnOffTip to sanity

2013-06-14 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13683796#comment-13683796
 ] 

Kousuke Saruta commented on HDFS-4888:
--

Ravi, I agree with you. I think it is important to print messages properly 
for inspection and troubleshooting.

 Refactor and fix FSNamesystem.getTurnOffTip to sanity
 -

 Key: HDFS-4888
 URL: https://issues.apache.org/jira/browse/HDFS-4888
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.4-alpha, 0.23.9
Reporter: Ravi Prakash
Assignee: Ravi Prakash
 Attachments: HDFS-4888.patch, HDFS-4888.patch


 e.g. When resources are low, the command to leave safe mode is not printed.
 This method is unnecessarily complex



[jira] [Commented] (HDFS-4849) Idempotent create and append operations.

2013-06-14 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13683807#comment-13683807
 ] 

Suresh Srinivas commented on HDFS-4849:
---

I plan on completing the review in a day or two. Can you please hold the commit?

 Idempotent create and append operations.
 

 Key: HDFS-4849
 URL: https://issues.apache.org/jira/browse/HDFS-4849
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.0.4-alpha
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Attachments: idempotentCreate.patch, idempotentCreate.patch


 create, append and delete operations can be made idempotent. This will reduce 
 chances for a job or other app failures when NN fails over.



[jira] [Updated] (HDFS-4866) Protocol buffer support cannot compile under C

2013-06-14 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-4866:


Attachment: HDFS-4866.002.patch

Rebasing patch.

 Protocol buffer support cannot compile under C
 --

 Key: HDFS-4866
 URL: https://issues.apache.org/jira/browse/HDFS-4866
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Ralph Castain
Assignee: Arpit Agarwal
Priority: Blocker
 Attachments: HDFS-4866.002.patch, HDFS-4866.branch-2.001.patch, 
 HDFS-4866.trunk.001.patch, NamenodeProtocol.pb-c.c, NamenodeProtocol.pb-c.h, 
 pcreate.pl


 When compiling Hadoop's .proto descriptions for use in C, an error occurs 
 because one of the RPCs in NamenodeProtocol.proto is named "register". This 
 name is a reserved word in languages such as C. When using the Java and C++ 
 languages, the name is hidden inside a class and therefore doesn't cause an 
 error. Unfortunately, that is not the case in non-class languages such as C.
 Note: generating the C translation of the .proto files requires installing 
 the protobuf-c package from Google:
 http://code.google.com/p/protobuf-c/



[jira] [Commented] (HDFS-4849) Idempotent create and append operations.

2013-06-14 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13683826#comment-13683826
 ] 

Konstantin Shvachko commented on HDFS-4849:
---

Sure.

 Idempotent create and append operations.
 

 Key: HDFS-4849
 URL: https://issues.apache.org/jira/browse/HDFS-4849
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.0.4-alpha
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Attachments: idempotentCreate.patch, idempotentCreate.patch


 create, append and delete operations can be made idempotent. This will reduce 
 chances for a job or other app failures when NN fails over.



[jira] [Updated] (HDFS-4626) ClientProtocol#getLinkTarget should throw an exception for non-existent paths

2013-06-14 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-4626:
--

Attachment: hdfs-4626-2.patch

Thanks for the review, Colin. Attaching new patch with unit tests for both of 
the exception cases.

 ClientProtocol#getLinkTarget should throw an exception for non-existent paths
 -

 Key: HDFS-4626
 URL: https://issues.apache.org/jira/browse/HDFS-4626
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor
 Attachments: hadoop-9415-1.patch, hadoop-9415-2.patch, 
 hadoop-9415-3.patch, hdfs-4626-1.patch, hdfs-4626-2.patch


 {{HdfsFileStatus#getLinkTarget}} can throw a NPE in {{DFSUtil#bytes2String}} 
 if {{symlink}} is null. Better to instead return null and propagate this to 
 the client.
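 The proposed behavior can be sketched as follows. This is a hypothetical 
 illustration with simplified signatures, not the actual ClientProtocol code, 
 assuming bytes2String is a thin byte-to-String conversion:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of the HDFS-4626 behavior change: when the stored
// symlink bytes are null (the path is not a symlink), raise a descriptive
// IOException instead of letting bytes2String throw a NullPointerException.
class LinkTargetSketch {
  static String bytes2String(byte[] bytes) {
    return new String(bytes, StandardCharsets.UTF_8);  // NPEs on null input
  }

  static String getLinkTarget(String path, byte[] symlink) throws IOException {
    if (symlink == null) {
      throw new IOException("Path " + path + " is not a symbolic link");
    }
    return bytes2String(symlink);
  }

  public static void main(String[] args) throws IOException {
    System.out.println(getLinkTarget("/link",
        "/target".getBytes(StandardCharsets.UTF_8)));  // prints /target
  }
}
```

 A checked exception with the offending path is far easier for callers to 
 handle than an NPE surfacing from deep inside a utility method.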



[jira] [Updated] (HDFS-4626) ClientProtocol#getLinkTarget should throw an exception for non-symlink and non-existent paths

2013-06-14 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-4626:
--

Description: {{HdfsFileStatus#getLinkTarget}} can throw a NPE in 
{{DFSUtil#bytes2String}} if {{symlink}} is null.  (was: 
{{HdfsFileStatus#getLinkTarget}} can throw a NPE in {{DFSUtil#bytes2String}} if 
{{symlink}} is null. Better to instead return null and propagate this to the 
client.)
Summary: ClientProtocol#getLinkTarget should throw an exception for 
non-symlink and non-existent paths  (was: ClientProtocol#getLinkTarget should 
throw an exception for non-existent paths)

 ClientProtocol#getLinkTarget should throw an exception for non-symlink and 
 non-existent paths
 -

 Key: HDFS-4626
 URL: https://issues.apache.org/jira/browse/HDFS-4626
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor
 Attachments: hadoop-9415-1.patch, hadoop-9415-2.patch, 
 hadoop-9415-3.patch, hdfs-4626-1.patch, hdfs-4626-2.patch


 {{HdfsFileStatus#getLinkTarget}} can throw a NPE in {{DFSUtil#bytes2String}} 
 if {{symlink}} is null.



[jira] [Resolved] (HDFS-4871) Skip failing commons tests on Windows

2013-06-14 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDFS-4871.
-

Resolution: Not A Problem

I am resolving this as most of the commons issues have now been fixed with the 
exception of HADOOP-9527.

 Skip failing commons tests on Windows
 -

 Key: HDFS-4871
 URL: https://issues.apache.org/jira/browse/HDFS-4871
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 2.1.0-beta


 This is a temporary fix proposed to get CI working. We will skip the 
 following failing tests on Windows:
 # -TestChRootedFs- - HADOOP-8957-
 # -TestFSMainOperationsLocalFileSystem- - HADOOP-8957
 # -TestFcCreateMkdirLocalFs- - HADOOP-8957
 # -TestFcMainOperationsLocalFs- - HADOOP-8957
 # -TestFcPermissionsLocalFs- - HADOOP-8957
 # TestLocalFSFileContextSymlink - HADOOP-9527
 # -TestLocalFileSystem- - HADOOP-9131
 # -TestShellCommandFencer- - HADOOP-9526
 # -TestSocketIOWithTimeout- - HADOOP-8982
 # -TestViewFsLocalFs- - HADOOP-8957 and HADOOP-8958
 # -TestViewFsTrash- - HADOOP-8957 and HADOOP-8958
 # -TestViewFsWithAuthorityLocalFs- - HADOOP-8957 and HADOOP-8958
 The tests will be re-enabled as we fix each. JIRAs for remaining failing 
 tests to follow soon.



[jira] [Commented] (HDFS-4866) Protocol buffer support cannot compile under C

2013-06-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13683890#comment-13683890
 ] 

Hadoop QA commented on HDFS-4866:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12587900/HDFS-4866.002.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4521//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4521//console

This message is automatically generated.

 Protocol buffer support cannot compile under C
 --

 Key: HDFS-4866
 URL: https://issues.apache.org/jira/browse/HDFS-4866
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Ralph Castain
Assignee: Arpit Agarwal
Priority: Blocker
 Attachments: HDFS-4866.002.patch, HDFS-4866.branch-2.001.patch, 
 HDFS-4866.trunk.001.patch, NamenodeProtocol.pb-c.c, NamenodeProtocol.pb-c.h, 
 pcreate.pl


 When compiling Hadoop's .proto descriptions for use in C, an error occurs 
 because one of the RPCs in NamenodeProtocol.proto is named "register". This 
 name is a reserved word in languages such as C. When using the Java and C++ 
 languages, the name is hidden inside a class and therefore doesn't cause an 
 error. Unfortunately, that is not the case in non-class languages such as C.
 Note: generating the C translation of the .proto files requires installing 
 the protobuf-c package from Google:
 http://code.google.com/p/protobuf-c/



[jira] [Commented] (HDFS-4626) ClientProtocol#getLinkTarget should throw an exception for non-symlink and non-existent paths

2013-06-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13683949#comment-13683949
 ] 

Hadoop QA commented on HDFS-4626:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12587911/hdfs-4626-2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4522//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4522//console

This message is automatically generated.

 ClientProtocol#getLinkTarget should throw an exception for non-symlink and 
 non-existent paths
 -

 Key: HDFS-4626
 URL: https://issues.apache.org/jira/browse/HDFS-4626
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor
 Attachments: hadoop-9415-1.patch, hadoop-9415-2.patch, 
 hadoop-9415-3.patch, hdfs-4626-1.patch, hdfs-4626-2.patch


 {{HdfsFileStatus#getLinkTarget}} can throw a NPE in {{DFSUtil#bytes2String}} 
 if {{symlink}} is null.



[jira] [Updated] (HDFS-4849) Idempotent create and append operations.

2013-06-14 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-4849:
--

Priority: Blocker  (was: Major)

 Idempotent create and append operations.
 

 Key: HDFS-4849
 URL: https://issues.apache.org/jira/browse/HDFS-4849
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.0.4-alpha
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
Priority: Blocker
 Attachments: idempotentCreate.patch, idempotentCreate.patch


 create, append and delete operations can be made idempotent. This will reduce 
 chances for a job or other app failures when NN fails over.



[jira] [Updated] (HDFS-4521) invalid network toploogies should not be cached

2013-06-14 Thread Chuan Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chuan Liu updated HDFS-4521:


Affects Version/s: 1.3.0

 invalid network toploogies should not be cached
 ---

 Key: HDFS-4521
 URL: https://issues.apache.org/jira/browse/HDFS-4521
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.1.0-beta, 1.3.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.1.0-beta

 Attachments: HDFS-4521.001.patch, HDFS-4521.002.patch, 
 HDFS-4521.005.patch, HDFS-4521.006.patch, HDFS-4521.008.patch


 When the network topology is invalid, the DataNode refuses to start with a 
 message such as this:
 {quote}
 org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol.registerDatanode from 
 172.29.122.23:55886: error:
 org.apache.hadoop.net.NetworkTopology$InvalidTopologyException: Invalid 
 network topology. You cannot have a rack and a non-rack node at the same 
 level of the network topology.
 {quote}
 This is expected if you specify a topology file or script which puts leaf 
 nodes at two different depths.  However, one problem we have now is that this 
 incorrect topology is cached forever.  Once the NameNode sees it, this 
 DataNode can never be added to the cluster, since this exception will be 
 rethrown each time.  The NameNode will not check to see if the topology file 
 or script has changed.  We should clear the topology mappings when there is 
 an InvalidTopologyException, to prevent this problem.
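 The fix idea can be sketched as a toy model. This is hypothetical code, not 
 the actual NetworkTopology implementation: when registration uncovers an 
 invalid topology, the cached node-to-rack mappings are cleared before the 
 exception propagates, so a corrected topology file or script can take effect 
 on the next attempt.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the HDFS-4521 fix idea, not the actual
// NetworkTopology code.
class TopologyCacheSketch {
  static class InvalidTopologyException extends RuntimeException {
    InvalidTopologyException(String msg) { super(msg); }
  }

  private final Map<String, String> nodeToRack = new HashMap<>();

  void register(String node, String rack, boolean topologyValid) {
    nodeToRack.put(node, rack);
    if (!topologyValid) {
      nodeToRack.clear();  // forget the invalid topology before rethrowing
      throw new InvalidTopologyException(
          "rack and non-rack node at the same level");
    }
  }

  int cachedMappings() { return nodeToRack.size(); }

  public static void main(String[] args) {
    TopologyCacheSketch topo = new TopologyCacheSketch();
    try {
      topo.register("dn1", "/default-rack", false);
    } catch (InvalidTopologyException e) {
      System.out.println("cached after failure: " + topo.cachedMappings());
    }
  }
}
```

 Without the clear(), the bad mapping would be remembered and the same 
 exception rethrown on every subsequent registration of that DataNode.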



[jira] [Reopened] (HDFS-4521) invalid network toploogies should not be cached

2013-06-14 Thread Chuan Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chuan Liu reopened HDFS-4521:
-


Reopening this issue. We also have a similar problem in Hadoop 1.0. More 
information can be found in HADOOP-9633. The code in 1.0 is quite different 
from 2.0, so a new patch may be needed to address the problem in branch-1. 
Thanks!

 invalid network toploogies should not be cached
 ---

 Key: HDFS-4521
 URL: https://issues.apache.org/jira/browse/HDFS-4521
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.1.0-beta

 Attachments: HDFS-4521.001.patch, HDFS-4521.002.patch, 
 HDFS-4521.005.patch, HDFS-4521.006.patch, HDFS-4521.008.patch


 When the network topology is invalid, the DataNode refuses to start with a 
 message such as this:
 {quote}
 org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol.registerDatanode from 
 172.29.122.23:55886: error:
 org.apache.hadoop.net.NetworkTopology$InvalidTopologyException: Invalid 
 network topology. You cannot have a rack and a non-rack node at the same 
 level of the network topology.
 {quote}
 This is expected if you specify a topology file or script which puts leaf 
 nodes at two different depths.  However, one problem we have now is that this 
 incorrect topology is cached forever.  Once the NameNode sees it, this 
 DataNode can never be added to the cluster, since this exception will be 
 rethrown each time.  The NameNode will not check to see if the topology file 
 or script has changed.  We should clear the topology mappings when there is 
 an InvalidTopologyException, to prevent this problem.



[jira] [Created] (HDFS-4906) HDFS Output streams should not accept writes after being closed

2013-06-14 Thread Aaron T. Myers (JIRA)
Aaron T. Myers created HDFS-4906:


 Summary: HDFS Output streams should not accept writes after being 
closed
 Key: HDFS-4906
 URL: https://issues.apache.org/jira/browse/HDFS-4906
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.0.5-alpha
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers


Currently if one closes an OutputStream obtained from FileSystem#create and 
then calls write(...) on that closed stream, the write will appear to succeed 
without error though no data will be written to HDFS. A subsequent call to 
close will also silently appear to succeed. We should make it so that attempts 
to write to closed streams fail fast.
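The desired fail-fast behavior can be sketched with a toy stream. This is 
illustrative only, not DFSOutputStream or the actual patch: after close(), 
any further write() throws immediately instead of silently appearing to 
succeed.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Toy stream, not DFSOutputStream: once close() runs, write() throws
// instead of silently accepting bytes that will never reach storage.
class FailFastStream extends OutputStream {
  private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
  private boolean closed = false;

  private void checkClosed() throws IOException {
    if (closed) {
      throw new IOException("Stream is closed");
    }
  }

  @Override
  public void write(int b) throws IOException {
    checkClosed();        // guard every write before accepting the byte
    buffer.write(b);
  }

  @Override
  public void close() {
    closed = true;        // later writes now fail fast
  }

  int bytesWritten() { return buffer.size(); }
}
```

Failing fast here surfaces the application bug (writing after close) at the 
point where it happens, rather than as silently missing data in HDFS.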



[jira] [Updated] (HDFS-4906) HDFS Output streams should not accept writes after being closed

2013-06-14 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-4906:
-

Status: Patch Available  (was: Open)

 HDFS Output streams should not accept writes after being closed
 ---

 Key: HDFS-4906
 URL: https://issues.apache.org/jira/browse/HDFS-4906
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.0.5-alpha
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Attachments: HDFS-4906.patch


 Currently if one closes an OutputStream obtained from FileSystem#create and 
 then calls write(...) on that closed stream, the write will appear to succeed 
 without error though no data will be written to HDFS. A subsequent call to 
 close will also silently appear to succeed. We should make it so that 
 attempts to write to closed streams fail fast.



[jira] [Updated] (HDFS-4906) HDFS Output streams should not accept writes after being closed

2013-06-14 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-4906:
-

Attachment: HDFS-4906.patch

Here's a patch which addresses the issue by changing FSOutputSummer to check if 
the implementing stream is closed first before accepting any writes.

 HDFS Output streams should not accept writes after being closed
 ---

 Key: HDFS-4906
 URL: https://issues.apache.org/jira/browse/HDFS-4906
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.0.5-alpha
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Attachments: HDFS-4906.patch


 Currently if one closes an OutputStream obtained from FileSystem#create and 
 then calls write(...) on that closed stream, the write will appear to succeed 
 without error though no data will be written to HDFS. A subsequent call to 
 close will also silently appear to succeed. We should make it so that 
 attempts to write to closed streams fail fast.



[jira] [Commented] (HDFS-4906) HDFS Output streams should not accept writes after being closed

2013-06-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684006#comment-13684006
 ] 

Hadoop QA commented on HDFS-4906:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12587936/HDFS-4906.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4523//console

This message is automatically generated.

 HDFS Output streams should not accept writes after being closed
 ---

 Key: HDFS-4906
 URL: https://issues.apache.org/jira/browse/HDFS-4906
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.0.5-alpha
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Attachments: HDFS-4906.patch


 Currently if one closes an OutputStream obtained from FileSystem#create and 
 then calls write(...) on that closed stream, the write will appear to succeed 
 without error though no data will be written to HDFS. A subsequent call to 
 close will also silently appear to succeed. We should make it so that 
 attempts to write to closed streams fail fast.



[jira] [Updated] (HDFS-4906) HDFS Output streams should not accept writes after being closed

2013-06-14 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-4906:
-

Attachment: HDFS-4906.patch

Whoops! Missed adding an implementation of the new abstract method to one of 
its subclasses. New patch should be good to go.

 HDFS Output streams should not accept writes after being closed
 ---

 Key: HDFS-4906
 URL: https://issues.apache.org/jira/browse/HDFS-4906
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.0.5-alpha
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Attachments: HDFS-4906.patch, HDFS-4906.patch


 Currently if one closes an OutputStream obtained from FileSystem#create and 
 then calls write(...) on that closed stream, the write will appear to succeed 
 without error though no data will be written to HDFS. A subsequent call to 
 close will also silently appear to succeed. We should make it so that 
 attempts to write to closed streams fail fast.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4818) several HDFS tests that attempt to make directories unusable do not work correctly on Windows

2013-06-14 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-4818:
-

Priority: Minor  (was: Major)
Hadoop Flags: Reviewed

+1 patch looks good.

 several HDFS tests that attempt to make directories unusable do not work 
 correctly on Windows
 -

 Key: HDFS-4818
 URL: https://issues.apache.org/jira/browse/HDFS-4818
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode, test
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Minor
 Attachments: HDFS-4818.1.patch


 Several tests set restrictive permissions on the name directories to simulate 
 disk failure and verify that the namenode can still function with one of 
 multiple name directories out of service.  These permissions do not restrict 
 access as expected on Windows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4906) HDFS Output streams should not accept writes after being closed

2013-06-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684049#comment-13684049
 ] 

Hadoop QA commented on HDFS-4906:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12587940/HDFS-4906.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4524//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4524//console

This message is automatically generated.

 HDFS Output streams should not accept writes after being closed
 ---

 Key: HDFS-4906
 URL: https://issues.apache.org/jira/browse/HDFS-4906
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.0.5-alpha
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Attachments: HDFS-4906.patch, HDFS-4906.patch


 Currently if one closes an OutputStream obtained from FileSystem#create and 
 then calls write(...) on that closed stream, the write will appear to succeed 
 without error, though no data will be written to HDFS. A subsequent call to 
 close will also silently appear to succeed. We should make it so that 
 attempts to write to closed streams fail fast.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4866) Protocol buffer support cannot compile under C

2013-06-14 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684050#comment-13684050
 ] 

Chris Nauroth commented on HDFS-4866:
-

+1 for the rebased patch.  I'm going to commit this.

 Protocol buffer support cannot compile under C
 --

 Key: HDFS-4866
 URL: https://issues.apache.org/jira/browse/HDFS-4866
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Ralph Castain
Assignee: Arpit Agarwal
Priority: Blocker
 Attachments: HDFS-4866.002.patch, HDFS-4866.branch-2.001.patch, 
 HDFS-4866.trunk.001.patch, NamenodeProtocol.pb-c.c, NamenodeProtocol.pb-c.h, 
 pcreate.pl


 When compiling Hadoop's .proto descriptions for use in C, an error occurs 
 because one of the RPCs in NamenodeProtocol.proto is named register. This 
 name is a reserved word in languages such as C. In Java and C++, the name 
 is hidden inside a class and therefore doesn't cause an error. 
 Unfortunately, that is not the case in non-class languages such as C.
 Note: generating the C translation of the .proto files requires installing 
 the protobuf-c package from Google:
 http://code.google.com/p/protobuf-c/

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4866) Protocol buffer support cannot compile under C

2013-06-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13684055#comment-13684055
 ] 

Hudson commented on HDFS-4866:
--

Integrated in Hadoop-trunk-Commit #3931 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3931/])
HDFS-4866. Protocol buffer support cannot compile under C. Contributed by 
Arpit Agarwal. (Revision 1493300)

 Result = SUCCESS
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1493300
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/NamenodeProtocolServerSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/NamenodeProtocolTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/BackupNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/NamenodeProtocol.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/NamenodeProtocol.proto


 Protocol buffer support cannot compile under C
 --

 Key: HDFS-4866
 URL: https://issues.apache.org/jira/browse/HDFS-4866
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Ralph Castain
Assignee: Arpit Agarwal
Priority: Blocker
 Attachments: HDFS-4866.002.patch, HDFS-4866.branch-2.001.patch, 
 HDFS-4866.trunk.001.patch, NamenodeProtocol.pb-c.c, NamenodeProtocol.pb-c.h, 
 pcreate.pl


 When compiling Hadoop's .proto descriptions for use in C, an error occurs 
 because one of the RPCs in NamenodeProtocol.proto is named register. This 
 name is a reserved word in languages such as C. In Java and C++, the name 
 is hidden inside a class and therefore doesn't cause an error. 
 Unfortunately, that is not the case in non-class languages such as C.
 Note: generating the C translation of the .proto files requires installing 
 the protobuf-c package from Google:
 http://code.google.com/p/protobuf-c/

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4866) Protocol buffer support cannot compile under C

2013-06-14 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-4866:


Release Note: The Protocol Buffers definition of the inter-namenode 
protocol required a change for compatibility with compiled C clients.  This is 
a backwards-incompatible change.  A namenode prior to this change will not be 
able to communicate with a namenode after this change.
Hadoop Flags: Incompatible change,Reviewed

 Protocol buffer support cannot compile under C
 --

 Key: HDFS-4866
 URL: https://issues.apache.org/jira/browse/HDFS-4866
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Ralph Castain
Assignee: Arpit Agarwal
Priority: Blocker
 Attachments: HDFS-4866.002.patch, HDFS-4866.branch-2.001.patch, 
 HDFS-4866.trunk.001.patch, NamenodeProtocol.pb-c.c, NamenodeProtocol.pb-c.h, 
 pcreate.pl


 When compiling Hadoop's .proto descriptions for use in C, an error occurs 
 because one of the RPCs in NamenodeProtocol.proto is named register. This 
 name is a reserved word in languages such as C. In Java and C++, the name 
 is hidden inside a class and therefore doesn't cause an error. 
 Unfortunately, that is not the case in non-class languages such as C.
 Note: generating the C translation of the .proto files requires installing 
 the protobuf-c package from Google:
 http://code.google.com/p/protobuf-c/

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4866) Protocol buffer support cannot compile under C

2013-06-14 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-4866:


   Resolution: Fixed
Fix Version/s: 2.1.0-beta
   3.0.0
   Status: Resolved  (was: Patch Available)

I committed this to trunk, branch-2, and branch-2.1-beta.  Thanks to Arpit for 
coding the fix, and thanks to the numerous contributors who participated in 
the discussion.

 Protocol buffer support cannot compile under C
 --

 Key: HDFS-4866
 URL: https://issues.apache.org/jira/browse/HDFS-4866
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Ralph Castain
Assignee: Arpit Agarwal
Priority: Blocker
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: HDFS-4866.002.patch, HDFS-4866.branch-2.001.patch, 
 HDFS-4866.trunk.001.patch, NamenodeProtocol.pb-c.c, NamenodeProtocol.pb-c.h, 
 pcreate.pl


 When compiling Hadoop's .proto descriptions for use in C, an error occurs 
 because one of the RPCs in NamenodeProtocol.proto is named register. This 
 name is a reserved word in languages such as C. In Java and C++, the name 
 is hidden inside a class and therefore doesn't cause an error. 
 Unfortunately, that is not the case in non-class languages such as C.
 Note: generating the C translation of the .proto files requires installing 
 the protobuf-c package from Google:
 http://code.google.com/p/protobuf-c/

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4866) Protocol buffer support cannot compile under C

2013-06-14 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-4866:


Component/s: namenode

 Protocol buffer support cannot compile under C
 --

 Key: HDFS-4866
 URL: https://issues.apache.org/jira/browse/HDFS-4866
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Ralph Castain
Assignee: Arpit Agarwal
Priority: Blocker
 Attachments: HDFS-4866.002.patch, HDFS-4866.branch-2.001.patch, 
 HDFS-4866.trunk.001.patch, NamenodeProtocol.pb-c.c, NamenodeProtocol.pb-c.h, 
 pcreate.pl


 When compiling Hadoop's .proto descriptions for use in C, an error occurs 
 because one of the RPCs in NamenodeProtocol.proto is named register. This 
 name is a reserved word in languages such as C. In Java and C++, the name 
 is hidden inside a class and therefore doesn't cause an error. 
 Unfortunately, that is not the case in non-class languages such as C.
 Note: generating the C translation of the .proto files requires installing 
 the protobuf-c package from Google:
 http://code.google.com/p/protobuf-c/

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira