[jira] [Commented] (HDFS-3286) When the threshold value for balancer is 0 (zero), unexpected output is displayed

2012-04-27 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13263404#comment-13263404
 ] 

Uma Maheswara Rao G commented on HDFS-3286:
---

Hi Ashish, Patch looks pretty good. One comment:

{code}
  try {
+   Balancer.Cli.parse(parameters);
+   fail(reason);
+ }
+ catch (IllegalArgumentException e) {
+   assertEquals("Number out of range: threshold = 0.0",
+       e.getMessage());
+ }
+ parameters = new String[]{"-threshold", "101"};
+ try {
+   Balancer.Cli.parse(parameters);
+   fail(reason);
+ }
{code}

The patch contains tabs; could you please format it properly?
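
For reference, a hedged sketch of how the first check might look once the tabs 
are replaced with spaces (assuming {{reason}} and {{parameters}} are defined 
earlier in the test, as in the patch):

{code}
try {
  Balancer.Cli.parse(parameters);
  fail(reason);
} catch (IllegalArgumentException e) {
  assertEquals("Number out of range: threshold = 0.0", e.getMessage());
}
{code}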

 When the threshold value for balancer is 0 (zero), unexpected output is 
 displayed
 

 Key: HDFS-3286
 URL: https://issues.apache.org/jira/browse/HDFS-3286
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer
Affects Versions: 0.23.0
Reporter: J.Andreina
Assignee: Ashish Singhi
 Fix For: 0.24.0

 Attachments: HDFS-3286.patch, HDFS-3286.patch


 Replication factor = 1
 Step 1: Start NN, DN1. Write 4 GB of data.
 Step 2: Start DN2.
 Step 3: Issue the balancer command (./hdfs balancer -threshold 0).
 The threshold parameter is a fraction in the range of (0%, 100%) with a 
 default value of 10%.
 When the above scenario is executed, the source DN and target DN are chosen 
 and the number of bytes to be moved from source to target DN is also 
 calculated.
 Then the balancer exits with the message "No block can be moved. 
 Exiting..." which is not expected.
 {noformat}
 HOST-xx-xx-xx-xx:/home/Andreina/APril10/install/hadoop/namenode/bin # ./hdfs 
 balancer -threshold 0
 12/04/16 16:22:07 INFO balancer.Balancer: Using a threshold of 0.0
 12/04/16 16:22:07 INFO balancer.Balancer: namenodes = 
 [hdfs://HOST-xx-xx-xx-xx:9000]
 12/04/16 16:22:07 INFO balancer.Balancer: p = 
 Balancer.Parameters[BalancingPolicy.Node, threshold=0.0]
 Time Stamp   Iteration#  Bytes Already Moved  Bytes Left To Move  
 Bytes Being Moved
 12/04/16 16:22:10 INFO net.NetworkTopology: Adding a new node: 
 /default-rack/yy.yy.yy.yy:50176
 12/04/16 16:22:10 INFO net.NetworkTopology: Adding a new node: 
 /default-rack/xx.xx.xx.xx:50010
 12/04/16 16:22:10 INFO balancer.Balancer: 1 over-utilized: 
 [Source[xx.xx.xx.xx:50010, utilization=7.212458091389678]]
 12/04/16 16:22:10 INFO balancer.Balancer: 1 underutilized: 
 [BalancerDatanode[yy.yy.yy.yy:50176, utilization=4.650670324367203E-5]]
 12/04/16 16:22:10 INFO balancer.Balancer: Need to move 1.77 GB to make the 
 cluster balanced.
 No block can be moved. Exiting...
 Balancing took 5.142 seconds
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (HDFS-3157) Error in deleting block keeps on coming from DN even after the block report and directory scanning have happened

2012-04-27 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G reassigned HDFS-3157:
-

Assignee: Ashish Singhi

 Error in deleting block keeps on coming from DN even after the block report 
 and directory scanning have happened
 -

 Key: HDFS-3157
 URL: https://issues.apache.org/jira/browse/HDFS-3157
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.23.0, 0.24.0
Reporter: J.Andreina
Assignee: Ashish Singhi
 Fix For: 0.24.0

 Attachments: HDFS-3157.patch


 Cluster setup:
 1 NN, three DNs (DN1, DN2, DN3), replication factor 2, 
 dfs.blockreport.intervalMsec = 300, dfs.datanode.directoryscan.interval = 1
 Step 1: Write one file a.txt with sync (not closed).
 Step 2: Delete the blocks in one of the datanodes, say DN1 (from rbw), to 
 which replication happened.
 Step 3: Close the file.
 Since the replication factor is 2, the blocks are replicated to the other 
 datanode.
 Then at the NN side the following command is issued to the DN from which the 
 block was deleted
 -
 {noformat}
 2012-03-19 13:41:36,905 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
 NameSystem.addToCorruptReplicasMap: duplicate requested for 
 blk_2903555284838653156 to add as corrupt on XX.XX.XX.XX by /XX.XX.XX.XX 
 because reported RBW replica with genstamp 1002 does not match COMPLETE 
 block's genstamp in block map 1003
 2012-03-19 13:41:39,588 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
 Removing block blk_2903555284838653156_1003 from neededReplications as it has 
 enough replicas.
 {noformat}
 From the datanode side on which the block was deleted, the following 
 exception occurred
 {noformat}
 2012-02-29 13:54:13,126 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
 Unexpected error trying to delete block blk_2903555284838653156_1003. 
 BlockInfo not found in volumeMap.
 2012-02-29 13:54:13,126 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
 Error processing datanode Command
 java.io.IOException: Error in deleting blocks.
   at 
 org.apache.hadoop.hdfs.server.datanode.FSDataset.invalidate(FSDataset.java:2061)
   at 
 org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActive(BPOfferService.java:581)
   at 
 org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActor(BPOfferService.java:545)
   at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.processCommand(BPServiceActor.java:690)
   at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:522)
   at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:662)
   at java.lang.Thread.run(Thread.java:619)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-404) Why does the open method in class DFSClient compare old LocatedBlocks and new LocatedBlocks?

2012-04-27 Thread Li Junjun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13263460#comment-13263460
 ] 

Li Junjun commented on HDFS-404:


I think there are two situations.
1. The file has been swapped with another file; we check the blockId, and 
it's correct to throw an exception!
2. But if the file has not been swapped but is being appended, we should just 
check the blockId and should not care about the block's generation stamp, 
because in fact we got the right and updated block list, since a file in HDFS 
can't be truncated.

So how about we do it like this?

{code}
if (oldIter.next().getBlock().getBlockId() != 
    newIter.next().getBlock().getBlockId()) {
  throw new IOException("Blocklist for " + src + " has changed!");
}
{code}




 Why does the open method in class DFSClient compare old LocatedBlocks and new 
 LocatedBlocks?
 -

 Key: HDFS-404
 URL: https://issues.apache.org/jira/browse/HDFS-404
 Project: Hadoop HDFS
  Issue Type: Wish
Reporter: qianyu
Assignee: Todd Lipcon
   Original Estimate: 168h
  Remaining Estimate: 168h

 This is in the package org.apache.hadoop.hdfs, in DFSClient.openInfo():
 {code}
 if (locatedBlocks != null) {
   Iterator<LocatedBlock> oldIter = 
       locatedBlocks.getLocatedBlocks().iterator();
   Iterator<LocatedBlock> newIter = 
       newInfo.getLocatedBlocks().iterator();
   while (oldIter.hasNext() && newIter.hasNext()) {
     if (!oldIter.next().getBlock().equals(newIter.next().getBlock())) {
       throw new IOException("Blocklist for " + src + " has changed!");
     }
   }
 }
 {code}
 Why do we need to compare the old LocatedBlocks and the new LocatedBlocks, 
 and in what case does it happen?
 Why not this.locatedBlocks = newInfo directly?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-404) Why does the open method in class DFSClient compare old LocatedBlocks and new LocatedBlocks?

2012-04-27 Thread Li Junjun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13263482#comment-13263482
 ] 

Li Junjun commented on HDFS-404:


Sorry, in 2 it should be "but has been appended".


I think there are two situations.
1. The file has been swapped with another file; we check the blockId, and 
it's correct to throw an exception!
2. But if the file has not been swapped but has been appended, we should just 
check the blockId and should not care about the block's generation stamp, 
because in fact we got the right and updated block list, since a file in HDFS 
can't be truncated.

So how about we do it like this?

{code}
if (oldIter.next().getBlock().getBlockId() != 
    newIter.next().getBlock().getBlockId()) {
  throw new IOException("Blocklist for " + src + " has changed!");
}
{code}

After all, between two calls to openInfo() the file can be swapped and then 
appended, so we should not ignore the under-construction file.
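
A hedged sketch of that blockId-only comparison inside the existing loop 
(an illustration only, not a committed patch):

{code}
while (oldIter.hasNext() && newIter.hasNext()) {
  // Compare only the block IDs: an append bumps the generation stamp
  // but keeps the ID, so a stamp mismatch alone is not a swapped file.
  if (oldIter.next().getBlock().getBlockId()
      != newIter.next().getBlock().getBlockId()) {
    throw new IOException("Blocklist for " + src + " has changed!");
  }
}
{code}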


 Why does the open method in class DFSClient compare old LocatedBlocks and new 
 LocatedBlocks?
 -

 Key: HDFS-404
 URL: https://issues.apache.org/jira/browse/HDFS-404
 Project: Hadoop HDFS
  Issue Type: Wish
Reporter: qianyu
Assignee: Todd Lipcon
   Original Estimate: 168h
  Remaining Estimate: 168h

 This is in the package org.apache.hadoop.hdfs, in DFSClient.openInfo():
 {code}
 if (locatedBlocks != null) {
   Iterator<LocatedBlock> oldIter = 
       locatedBlocks.getLocatedBlocks().iterator();
   Iterator<LocatedBlock> newIter = 
       newInfo.getLocatedBlocks().iterator();
   while (oldIter.hasNext() && newIter.hasNext()) {
     if (!oldIter.next().getBlock().equals(newIter.next().getBlock())) {
       throw new IOException("Blocklist for " + src + " has changed!");
     }
   }
 }
 {code}
 Why do we need to compare the old LocatedBlocks and the new LocatedBlocks, 
 and in what case does it happen?
 Why not this.locatedBlocks = newInfo directly?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3322) Update file context to use HdfsDataInputStream and HdfsDataOutputStream

2012-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13263628#comment-13263628
 ] 

Hudson commented on HDFS-3322:
--

Integrated in Hadoop-Hdfs-trunk #1027 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1027/])
HDFS-3322. Use HdfsDataInputStream and HdfsDataOutputStream in Hdfs. 
(Revision 1331114)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1331114
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/resources/DatanodeWebHdfsMethods.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReadWhileWriting.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestShortCircuitLocalRead.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestWriteRead.java


 Update file context to use HdfsDataInputStream and HdfsDataOutputStream
 ---

 Key: HDFS-3322
 URL: https://issues.apache.org/jira/browse/HDFS-3322
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs client
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Fix For: 2.0.0

 Attachments: h3322_20120425.patch, h3322_20120426.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3181) testHardLeaseRecoveryAfterNameNodeRestart fails when length before restart is 1 byte less than CRC chunk size

2012-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13263633#comment-13263633
 ] 

Hudson commented on HDFS-3181:
--

Integrated in Hadoop-Hdfs-trunk #1027 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1027/])
HDFS-3181. Fix a test case in TestLeaseRecovery2. (Revision 1331138)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1331138
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery2.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NameNodeAdapter.java


 testHardLeaseRecoveryAfterNameNodeRestart fails when length before restart is 
 1 byte less than CRC chunk size
 -

 Key: HDFS-3181
 URL: https://issues.apache.org/jira/browse/HDFS-3181
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0
Reporter: Colin Patrick McCabe
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor
 Fix For: 2.0.0

 Attachments: TestLeaseRecovery2with1535.patch, h3181_20120425.patch, 
 h3181_20120426.patch, repro.txt, testOut.txt


 org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart
  seems to be failing intermittently on jenkins.
 {code}
 org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart
 Failing for the past 1 build (Since Failed#2163 )
 Took 8.4 sec.
 Error Message
 Lease mismatch on /hardLeaseRecovery owned by HDFS_NameNode but is accessed 
 by DFSClient_NONMAPREDUCE_1147689755_1  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2076)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2051)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:1983)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:492)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolServerSideTranslatorPB.java:311)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:42604)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:417)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:891)  at 
 org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1661)  at 
 org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1657)  at 
 java.security.AccessController.doPrivileged(Native Method)  at 
 javax.security.auth.Subject.doAs(Subject.java:396)  at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1205)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1655) 
 Stacktrace
 org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: Lease mismatch 
 on /hardLeaseRecovery owned by HDFS_NameNode but is accessed by 
 DFSClient_NONMAPREDUCE_1147689755_1
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2076)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2051)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:1983)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:492)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolServerSideTranslatorPB.java:311)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:42604)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:417)
 ...
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
   at $Proxy15.getAdditionalDatanode(Unknown Source)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolTranslatorPB.java:317)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:828)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:930)
   at 
 

[jira] [Commented] (HDFS-3222) DFSInputStream#openInfo should not silently get the length as 0 when locations length is zero for last partial block.

2012-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13263636#comment-13263636
 ] 

Hudson commented on HDFS-3222:
--

Integrated in Hadoop-Hdfs-trunk #1027 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1027/])
HDFS-3222. DFSInputStream#openInfo should not silently get the length as 0 
when locations length is zero for last partial block. Contributed by Uma 
Maheswara Rao G. (Revision 1331061)

 Result = FAILURE
umamahesh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1331061
Files : 
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileLengthOnClusterRestart.java


 DFSInputStream#openInfo should not silently get the length as 0 when 
 locations length is zero for last partial block.
 -

 Key: HDFS-3222
 URL: https://issues.apache.org/jira/browse/HDFS-3222
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Affects Versions: 1.0.3, 2.0.0, 3.0.0
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G
 Fix For: 2.0.0, 3.0.0

 Attachments: HDFS-3222-Test.patch, HDFS-3222.patch, HDFS-3222.patch


 I have seen one situation with an HBase cluster.
 The scenario is as follows:
 1) 1.5 blocks have been written and synced.
 2) Suddenly the cluster has been restarted.
 A reader opened the file and tried to get the length. By this time the DNs 
 containing the partial block had not reported to the NN, so the locations 
 for this partial block would be 0. In this case, DFSInputStream assumes 
 1 block size as the final size.
 The reader also assumes that 1 block size is the final length and sets its 
 end marker accordingly. Finally the reader ends up reading only partial 
 data. Due to this, the HMaster could not replay the complete edits.
 Actually this happened with the 0.20 version. Looking at the code, the same 
 should be present in trunk as well.
 {code}
 int replicaNotFoundCount = locatedblock.getLocations().length;
 
 for(DatanodeInfo datanode : locatedblock.getLocations()) {
 ..
 ..
  // Namenode told us about these locations, but none know about the replica
 // means that we hit the race between pipeline creation start and end.
 // we require all 3 because some other exception could have happened
 // on a DN that has it.  we want to report that error
 if (replicaNotFoundCount == 0) {
   return 0;
 }
 {code}
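 To make the failure mode concrete, a hedged standalone sketch (hypothetical 
 names, not the DFSInputStream internals) of how treating a last block with 
 no reported locations as length 0 truncates the visible file length:
 {code}
 // completeBlocksLen: total length of all COMPLETE blocks.
 // lastBlockLen: length of the partial block, or -1 when none of its
 // DNs have reported yet (the "0 locations" case described above).
 static long visibleLength(long completeBlocksLen, long lastBlockLen) {
   if (lastBlockLen < 0) {
     // Silently dropping the partial block makes the reader set its end
     // marker one partial block short, which is the bug described above.
     return completeBlocksLen;
   }
   return completeBlocksLen + lastBlockLen;
 }
 {code}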

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-3334) ByteRangeInputStream leaks streams

2012-04-27 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-3334:
-

 Summary: ByteRangeInputStream leaks streams
 Key: HDFS-3334
 URL: https://issues.apache.org/jira/browse/HDFS-3334
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Affects Versions: 0.24.0, 0.23.3, 2.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp


{{HftpFileSystem.ByteRangeInputStream}} does not implement {{close}} so it 
leaks the underlying stream(s).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3334) ByteRangeInputStream leaks streams

2012-04-27 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-3334:
--

Status: Patch Available  (was: Open)

 ByteRangeInputStream leaks streams
 --

 Key: HDFS-3334
 URL: https://issues.apache.org/jira/browse/HDFS-3334
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Affects Versions: 0.24.0, 0.23.3, 2.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HDFS-3334.patch


 {{HftpFileSystem.ByteRangeInputStream}} does not implement {{close}} so it 
 leaks the underlying stream(s).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3334) ByteRangeInputStream leaks streams

2012-04-27 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-3334:
--

Attachment: HDFS-3334.patch

Implement {{close}} and add a {{CLOSED}} state for the stream.  
{{getInputStream}} was broken into two methods to facilitate mockable testing.  
The resulting reduction in indentation makes the patch look much bigger than it 
really is.
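
A minimal sketch of the shape of that change (hypothetical field and enum 
names; the actual patch may differ):

{code}
enum StreamStatus { NORMAL, SEEK, CLOSED }

@Override
public void close() throws IOException {
  if (in != null) {
    in.close();   // release the underlying HTTP stream
    in = null;
  }
  status = StreamStatus.CLOSED;  // further reads should now fail
}
{code}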

 ByteRangeInputStream leaks streams
 --

 Key: HDFS-3334
 URL: https://issues.apache.org/jira/browse/HDFS-3334
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Affects Versions: 0.24.0, 0.23.3, 2.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HDFS-3334.patch


 {{HftpFileSystem.ByteRangeInputStream}} does not implement {{close}} so it 
 leaks the underlying stream(s).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3309) HttpFS (Hoop) chmod not supporting octal and sticky bit permissions

2012-04-27 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HDFS-3309:
-

Attachment: HDFS-3309.patch

Re-uploading the patch to see if Jenkins decides to run.

 HttpFS (Hoop) chmod not supporting octal and sticky bit permissions
 ---

 Key: HDFS-3309
 URL: https://issues.apache.org/jira/browse/HDFS-3309
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Romain Rigaux
Assignee: Alejandro Abdelnur
 Fix For: 2.0.0

 Attachments: HDFS-3309.patch, HDFS-3309.patch


 HttpFs supports only the permissions: [0-7][0-7][0-7]
 In order to be compatible with webhdfs it needs to understand octal and 
 sticky bit permissions (e.g. 0777, 01777...)
 Example of error:
 curl -L -X PUT 
 "http://localhost:14000/webhdfs/v1/user/romain/test?permission=01777&op=SETPERMISSION&user.name=romain"
 {"RemoteException":{"message":"java.lang.IllegalArgumentException: Parameter 
 [permission], invalid value [01777], value must be 
 [default|(-[-r][-w][-x][-r][-w][-x][-r][-w][-x])|[0-7][0-7][0-7]]","exception":"QueryParamException","javaClassName":"com.sun.jersey.api.ParamException$QueryParamException"}}
 Works with WebHdfs:
 curl -L -X PUT 
 "http://localhost:50070/webhdfs/v1/user/romain/test?permission=01777&op=SETPERMISSION&user.name=romain"
 echo $?
 0
 curl -L -X PUT 
 "http://localhost:14000/webhdfs/v1/user/romain/test?permission=99&op=SETPERMISSION&user.name=romain"
 {"RemoteException":{"message":"java.lang.IllegalArgumentException: Parameter 
 [permission], invalid value [99], value must be 
 [default|(-[-r][-w][-x][-r][-w][-x][-r][-w][-x])|[0-7][0-7][0-7]]","exception":"QueryParamException","javaClassName":"com.sun.jersey.api.ParamException$QueryParamException"}}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3322) Update file context to use HdfsDataInputStream and HdfsDataOutputStream

2012-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13263666#comment-13263666
 ] 

Hudson commented on HDFS-3322:
--

Integrated in Hadoop-Mapreduce-trunk #1062 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1062/])
HDFS-3322. Use HdfsDataInputStream and HdfsDataOutputStream in Hdfs. 
(Revision 1331114)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1331114
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/resources/DatanodeWebHdfsMethods.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReadWhileWriting.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestShortCircuitLocalRead.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestWriteRead.java


 Update file context to use HdfsDataInputStream and HdfsDataOutputStream
 ---

 Key: HDFS-3322
 URL: https://issues.apache.org/jira/browse/HDFS-3322
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs client
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Fix For: 2.0.0

 Attachments: h3322_20120425.patch, h3322_20120426.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3181) testHardLeaseRecoveryAfterNameNodeRestart fails when length before restart is 1 byte less than CRC chunk size

2012-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13263670#comment-13263670
 ] 

Hudson commented on HDFS-3181:
--

Integrated in Hadoop-Mapreduce-trunk #1062 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1062/])
HDFS-3181. Fix a test case in TestLeaseRecovery2. (Revision 1331138)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1331138
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery2.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NameNodeAdapter.java


 testHardLeaseRecoveryAfterNameNodeRestart fails when length before restart is 
 1 byte less than CRC chunk size
 -

 Key: HDFS-3181
 URL: https://issues.apache.org/jira/browse/HDFS-3181
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0
Reporter: Colin Patrick McCabe
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor
 Fix For: 2.0.0

 Attachments: TestLeaseRecovery2with1535.patch, h3181_20120425.patch, 
 h3181_20120426.patch, repro.txt, testOut.txt


 org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart
  seems to be failing intermittently on jenkins.
 {code}
 org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart
 Failing for the past 1 build (Since Failed#2163 )
 Took 8.4 sec.
 Error Message
 Lease mismatch on /hardLeaseRecovery owned by HDFS_NameNode but is accessed 
 by DFSClient_NONMAPREDUCE_1147689755_1  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2076)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2051)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:1983)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:492)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolServerSideTranslatorPB.java:311)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:42604)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:417)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:891)  at 
 org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1661)  at 
 org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1657)  at 
 java.security.AccessController.doPrivileged(Native Method)  at 
 javax.security.auth.Subject.doAs(Subject.java:396)  at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1205)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1655) 
 Stacktrace
 org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: Lease mismatch 
 on /hardLeaseRecovery owned by HDFS_NameNode but is accessed by 
 DFSClient_NONMAPREDUCE_1147689755_1
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2076)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2051)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:1983)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:492)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolServerSideTranslatorPB.java:311)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:42604)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:417)
 ...
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
   at $Proxy15.getAdditionalDatanode(Unknown Source)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolTranslatorPB.java:317)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:828)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:930)
   at 
 

[jira] [Commented] (HDFS-3222) DFSInputStream#openInfo should not silently get the length as 0 when locations length is zero for last partial block.

2012-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13263673#comment-13263673
 ] 

Hudson commented on HDFS-3222:
--

Integrated in Hadoop-Mapreduce-trunk #1062 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1062/])
HDFS-3222. DFSInputStream#openInfo should not silently get the length as 0 
when locations length is zero for last partial block. Contributed by Uma 
Maheswara Rao G. (Revision 1331061)

 Result = FAILURE
umamahesh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1331061
Files : 
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileLengthOnClusterRestart.java


 DFSInputStream#openInfo should not silently get the length as 0 when 
 locations length is zero for last partial block.
 -

 Key: HDFS-3222
 URL: https://issues.apache.org/jira/browse/HDFS-3222
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Affects Versions: 1.0.3, 2.0.0, 3.0.0
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G
 Fix For: 2.0.0, 3.0.0

 Attachments: HDFS-3222-Test.patch, HDFS-3222.patch, HDFS-3222.patch


 I have seen one situation with an HBase cluster.
 The scenario is as follows:
 1) 1.5 blocks have been written and synced.
 2) Suddenly the cluster has been restarted.
 A reader opened the file and tried to get the length. By this time the DNs 
 containing the partial block had not reported to the NN, so the locations 
 for this partial block would be 0. In this case, DFSInputStream assumes 
 1 block size as the final size.
 The reader also assumes that 1 block size is the final length and sets its 
 end marker accordingly. Finally the reader ends up reading only partial 
 data. Due to this, the HMaster could not replay the complete edits.
 Actually this happened with the 0.20 version. Looking at the code, the same 
 should be present in trunk as well.
 {code}
 int replicaNotFoundCount = locatedblock.getLocations().length;
 
 for(DatanodeInfo datanode : locatedblock.getLocations()) {
 ..
 ..
  // Namenode told us about these locations, but none know about the replica
 // means that we hit the race between pipeline creation start and end.
 // we require all 3 because some other exception could have happened
 // on a DN that has it.  we want to report that error
 if (replicaNotFoundCount == 0) {
   return 0;
 }
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3334) ByteRangeInputStream leaks streams

2012-04-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13263709#comment-13263709
 ] 

Hadoop QA commented on HDFS-3334:
-

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12524856/HDFS-3334.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 2 new or modified test 
files.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2342//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2342//console

This message is automatically generated.

 ByteRangeInputStream leaks streams
 --

 Key: HDFS-3334
 URL: https://issues.apache.org/jira/browse/HDFS-3334
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Affects Versions: 0.24.0, 0.23.3, 2.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HDFS-3334.patch


 {{HftpFileSystem.ByteRangeInputStream}} does not implement {{close}} so it 
 leaks the underlying stream(s).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3309) HttpFS (Hoop) chmod not supporting octal and sticky bit permissions

2012-04-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13263722#comment-13263722
 ] 

Hadoop QA commented on HDFS-3309:
-

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12524859/HDFS-3309.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2343//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2343//console

This message is automatically generated.

 HttpFS (Hoop) chmod not supporting octal and sticky bit permissions
 ---

 Key: HDFS-3309
 URL: https://issues.apache.org/jira/browse/HDFS-3309
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Romain Rigaux
Assignee: Alejandro Abdelnur
 Fix For: 2.0.0

 Attachments: HDFS-3309.patch, HDFS-3309.patch


 HttpFs supports only the permissions: [0-7][0-7][0-7]
 In order to be compatible with webhdfs it needs to understand octal and 
 sticky bit permissions (e.g. 0777, 01777...)
 Example of error:
 curl -L -X PUT 
 "http://localhost:14000/webhdfs/v1/user/romain/test?permission=01777&op=SETPERMISSION&user.name=romain"
 {"RemoteException":{"message":"java.lang.IllegalArgumentException: Parameter 
 [permission], invalid value [01777], value must be 
 [default|(-[-r][-w][-x][-r][-w][-x][-r][-w][-x])|[0-7][0-7][0-7]]","exception":"QueryParamException","javaClassName":"com.sun.jersey.api.ParamException$QueryParamException"}}
 Works with WebHdfs:
 curl -L -X PUT 
 "http://localhost:50070/webhdfs/v1/user/romain/test?permission=01777&op=SETPERMISSION&user.name=romain"
 echo $?
 0
 curl -L -X PUT 
 "http://localhost:14000/webhdfs/v1/user/romain/test?permission=99&op=SETPERMISSION&user.name=romain"
 {"RemoteException":{"message":"java.lang.IllegalArgumentException: Parameter 
 [permission], invalid value [99], value must be 
 [default|(-[-r][-w][-x][-r][-w][-x][-r][-w][-x])|[0-7][0-7][0-7]]","exception":"QueryParamException","javaClassName":"com.sun.jersey.api.ParamException$QueryParamException"}}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2617) Replaced Kerberized SSL for image transfer and fsck with SPNEGO-based solution

2012-04-27 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HDFS-2617:
-

Attachment: HDFS-2617-trunk.patch

I've ported Jakob's patch to trunk. All test cases are passing. Still, I have 
not tested it in a real deployment with security on.

 Replaced Kerberized SSL for image transfer and fsck with SPNEGO-based solution
 --

 Key: HDFS-2617
 URL: https://issues.apache.org/jira/browse/HDFS-2617
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jakob Homan
Assignee: Jakob Homan
 Attachments: HDFS-2617-a.patch, HDFS-2617-b.patch, 
 HDFS-2617-trunk.patch


 The current approach to secure and authenticate nn web services is based on 
 Kerberized SSL and was developed when a SPNEGO solution wasn't available. Now 
 that we have one, we can get rid of the non-standard KSSL and use SPNEGO 
 throughout.  This will simplify setup and configuration.  Also, Kerberized 
 SSL is a non-standard approach with its own quirks and dark corners 
 (HDFS-2386).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3309) HttpFS (Hoop) chmod not supporting octal and sticky bit permissions

2012-04-27 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HDFS-3309:
-

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

committed to trunk and branch-2

 HttpFS (Hoop) chmod not supporting octal and sticky bit permissions
 ---

 Key: HDFS-3309
 URL: https://issues.apache.org/jira/browse/HDFS-3309
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Romain Rigaux
Assignee: Alejandro Abdelnur
 Fix For: 2.0.0

 Attachments: HDFS-3309.patch, HDFS-3309.patch


 HttpFs supports only the permissions: [0-7][0-7][0-7]
 In order to be compatible with webhdfs it needs to understand octal and 
 sticky bit permissions (e.g. 0777, 01777...)
 Example of error:
 curl -L -X PUT 
 "http://localhost:14000/webhdfs/v1/user/romain/test?permission=01777&op=SETPERMISSION&user.name=romain"
 {"RemoteException":{"message":"java.lang.IllegalArgumentException: Parameter 
 [permission], invalid value [01777], value must be 
 [default|(-[-r][-w][-x][-r][-w][-x][-r][-w][-x])|[0-7][0-7][0-7]]","exception":"QueryParamException","javaClassName":"com.sun.jersey.api.ParamException$QueryParamException"}}
 Works with WebHdfs:
 curl -L -X PUT 
 "http://localhost:50070/webhdfs/v1/user/romain/test?permission=01777&op=SETPERMISSION&user.name=romain"
 echo $?
 0
 curl -L -X PUT 
 "http://localhost:14000/webhdfs/v1/user/romain/test?permission=99&op=SETPERMISSION&user.name=romain"
 {"RemoteException":{"message":"java.lang.IllegalArgumentException: Parameter 
 [permission], invalid value [99], value must be 
 [default|(-[-r][-w][-x][-r][-w][-x][-r][-w][-x])|[0-7][0-7][0-7]]","exception":"QueryParamException","javaClassName":"com.sun.jersey.api.ParamException$QueryParamException"}}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3309) HttpFS (Hoop) chmod not supporting octal and sticky bit permissions

2012-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13263779#comment-13263779
 ] 

Hudson commented on HDFS-3309:
--

Integrated in Hadoop-Hdfs-trunk-Commit #2219 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2219/])
HDFS-3309. HttpFS (Hoop) chmod not supporting octal and sticky bit 
permissions. (tucu) (Revision 1331493)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1331493
Files : 
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParams.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/TestHttpFSFileSystem.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 HttpFS (Hoop) chmod not supporting octal and sticky bit permissions
 ---

 Key: HDFS-3309
 URL: https://issues.apache.org/jira/browse/HDFS-3309
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Romain Rigaux
Assignee: Alejandro Abdelnur
 Fix For: 2.0.0

 Attachments: HDFS-3309.patch, HDFS-3309.patch


 HttpFs supports only the permissions: [0-7][0-7][0-7]
 In order to be compatible with webhdfs it needs to understand octal and 
 sticky bit permissions (e.g. 0777, 01777...)
 Example of error:
 curl -L -X PUT 
 "http://localhost:14000/webhdfs/v1/user/romain/test?permission=01777&op=SETPERMISSION&user.name=romain"
 {"RemoteException":{"message":"java.lang.IllegalArgumentException: Parameter 
 [permission], invalid value [01777], value must be 
 [default|(-[-r][-w][-x][-r][-w][-x][-r][-w][-x])|[0-7][0-7][0-7]]","exception":"QueryParamException","javaClassName":"com.sun.jersey.api.ParamException$QueryParamException"}}
 Works with WebHdfs:
 curl -L -X PUT 
 "http://localhost:50070/webhdfs/v1/user/romain/test?permission=01777&op=SETPERMISSION&user.name=romain"
 echo $?
 0
 curl -L -X PUT 
 "http://localhost:14000/webhdfs/v1/user/romain/test?permission=99&op=SETPERMISSION&user.name=romain"
 {"RemoteException":{"message":"java.lang.IllegalArgumentException: Parameter 
 [permission], invalid value [99], value must be 
 [default|(-[-r][-w][-x][-r][-w][-x][-r][-w][-x])|[0-7][0-7][0-7]]","exception":"QueryParamException","javaClassName":"com.sun.jersey.api.ParamException$QueryParamException"}}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3309) HttpFS (Hoop) chmod not supporting octal and sticky bit permissions

2012-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13263781#comment-13263781
 ] 

Hudson commented on HDFS-3309:
--

Integrated in Hadoop-Common-trunk-Commit #2145 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2145/])
HDFS-3309. HttpFS (Hoop) chmod not supporting octal and sticky bit 
permissions. (tucu) (Revision 1331493)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1331493
Files : 
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParams.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/TestHttpFSFileSystem.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 HttpFS (Hoop) chmod not supporting octal and sticky bit permissions
 ---

 Key: HDFS-3309
 URL: https://issues.apache.org/jira/browse/HDFS-3309
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Romain Rigaux
Assignee: Alejandro Abdelnur
 Fix For: 2.0.0

 Attachments: HDFS-3309.patch, HDFS-3309.patch


 HttpFs supports only the permissions: [0-7][0-7][0-7]
 In order to be compatible with webhdfs it needs to understand octal and 
 sticky bit permissions (e.g. 0777, 01777...)
 Example of error:
 curl -L -X PUT 
 "http://localhost:14000/webhdfs/v1/user/romain/test?permission=01777&op=SETPERMISSION&user.name=romain"
 {"RemoteException":{"message":"java.lang.IllegalArgumentException: Parameter 
 [permission], invalid value [01777], value must be 
 [default|(-[-r][-w][-x][-r][-w][-x][-r][-w][-x])|[0-7][0-7][0-7]]","exception":"QueryParamException","javaClassName":"com.sun.jersey.api.ParamException$QueryParamException"}}
 Works with WebHdfs:
 curl -L -X PUT 
 "http://localhost:50070/webhdfs/v1/user/romain/test?permission=01777&op=SETPERMISSION&user.name=romain"
 echo $?
 0
 curl -L -X PUT 
 "http://localhost:14000/webhdfs/v1/user/romain/test?permission=99&op=SETPERMISSION&user.name=romain"
 {"RemoteException":{"message":"java.lang.IllegalArgumentException: Parameter 
 [permission], invalid value [99], value must be 
 [default|(-[-r][-w][-x][-r][-w][-x][-r][-w][-x])|[0-7][0-7][0-7]]","exception":"QueryParamException","javaClassName":"com.sun.jersey.api.ParamException$QueryParamException"}}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3309) HttpFS (Hoop) chmod not supporting octal and sticky bit permissions

2012-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13263810#comment-13263810
 ] 

Hudson commented on HDFS-3309:
--

Integrated in Hadoop-Mapreduce-trunk-Commit #2162 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2162/])
HDFS-3309. HttpFS (Hoop) chmod not supporting octal and sticky bit 
permissions. (tucu) (Revision 1331493)

 Result = ABORTED
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1331493
Files : 
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParams.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/TestHttpFSFileSystem.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 HttpFS (Hoop) chmod not supporting octal and sticky bit permissions
 ---

 Key: HDFS-3309
 URL: https://issues.apache.org/jira/browse/HDFS-3309
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Romain Rigaux
Assignee: Alejandro Abdelnur
 Fix For: 2.0.0

 Attachments: HDFS-3309.patch, HDFS-3309.patch


 HttpFs supports only the permissions: [0-7][0-7][0-7]
 In order to be compatible with webhdfs it needs to understand octal and 
 sticky bit permissions (e.g. 0777, 01777...)
 Example of error:
 curl -L -X PUT 
 "http://localhost:14000/webhdfs/v1/user/romain/test?permission=01777&op=SETPERMISSION&user.name=romain"
 {"RemoteException":{"message":"java.lang.IllegalArgumentException: Parameter 
 [permission], invalid value [01777], value must be 
 [default|(-[-r][-w][-x][-r][-w][-x][-r][-w][-x])|[0-7][0-7][0-7]]","exception":"QueryParamException","javaClassName":"com.sun.jersey.api.ParamException$QueryParamException"}}
 Works with WebHdfs:
 curl -L -X PUT 
 "http://localhost:50070/webhdfs/v1/user/romain/test?permission=01777&op=SETPERMISSION&user.name=romain"
 echo $?
 0
 curl -L -X PUT 
 "http://localhost:14000/webhdfs/v1/user/romain/test?permission=99&op=SETPERMISSION&user.name=romain"
 {"RemoteException":{"message":"java.lang.IllegalArgumentException: Parameter 
 [permission], invalid value [99], value must be 
 [default|(-[-r][-w][-x][-r][-w][-x][-r][-w][-x])|[0-7][0-7][0-7]]","exception":"QueryParamException","javaClassName":"com.sun.jersey.api.ParamException$QueryParamException"}}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (HDFS-3269) End-to-end test for making a non-HA HDFS cluster HA-enabled

2012-04-27 Thread Mingjie Lai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingjie Lai reassigned HDFS-3269:
-

Assignee: (was: Mingjie Lai)

Sorry, I cannot work on this in the short term. I will come back to finish it 
if no one else takes it.

 End-to-end test for making a non-HA HDFS cluster HA-enabled
 ---

 Key: HDFS-3269
 URL: https://issues.apache.org/jira/browse/HDFS-3269
 Project: Hadoop HDFS
  Issue Type: Test
  Components: ha, name-node
Affects Versions: 2.0.0
Reporter: Aaron T. Myers
Priority: Minor

 Per Eli on HDFS-3259, it would be great if we had a test that did the 
 following:
 # Starts w/ non HA NN1
 # Shutdown, enable HA on NN1, add SBN NN2
 # Run initializeSharedEdits
 # Start and transition to active NN1
 # Run bootstrapStandby
 # Confirm NN1 and NN2 are up and HA
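 A hedged sketch of those steps as commands (assuming 2.0.0-era CLI names and 
 nn1/nn2 as the configured NameNode IDs; the test would drive the equivalent 
 APIs):
 {noformat}
 # on NN1, after shutting down and adding the HA config for nn1/nn2
 hdfs namenode -initializeSharedEdits
 # start NN1, then make it active
 hdfs haadmin -transitionToActive nn1
 # on NN2, copy over the namespace, then start NN2
 hdfs namenode -bootstrapStandby
 {noformat}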

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3286) When the threshold value for balancer is 0 (zero), unexpected output is displayed

2012-04-27 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13263856#comment-13263856
 ] 

Aaron T. Myers commented on HDFS-3286:
--

Minor nit: Please also put catch clauses on the same line as the closing brace 
of the try block.
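
That is, in this style (a formatting illustration only):

{code}
} catch (IllegalArgumentException e) {
  assertEquals("Number out of range: threshold = 0.0", e.getMessage());
}
{code}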

 When the threshold value for balancer is 0 (zero), unexpected output is 
 displayed
 

 Key: HDFS-3286
 URL: https://issues.apache.org/jira/browse/HDFS-3286
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer
Affects Versions: 0.23.0
Reporter: J.Andreina
Assignee: Ashish Singhi
 Fix For: 0.24.0

 Attachments: HDFS-3286.patch, HDFS-3286.patch


 Replication factor = 1
 Step 1: Start NN, DN1. Write 4 GB of data.
 Step 2: Start DN2.
 Step 3: Issue the balancer command (./hdfs balancer -threshold 0).
 The threshold parameter is a fraction in the range of (0%, 100%) with a 
 default value of 10%.
 When the above scenario is executed, the source DN and target DN are chosen 
 and the number of bytes to be moved from source to target DN is also 
 calculated.
 Then the balancer exits with the message "No block can be moved. 
 Exiting..." which is not expected.
 {noformat}
 HOST-xx-xx-xx-xx:/home/Andreina/APril10/install/hadoop/namenode/bin # ./hdfs 
 balancer -threshold 0
 12/04/16 16:22:07 INFO balancer.Balancer: Using a threshold of 0.0
 12/04/16 16:22:07 INFO balancer.Balancer: namenodes = 
 [hdfs://HOST-xx-xx-xx-xx:9000]
 12/04/16 16:22:07 INFO balancer.Balancer: p = 
 Balancer.Parameters[BalancingPolicy.Node, threshold=0.0]
 Time Stamp   Iteration#  Bytes Already Moved  Bytes Left To Move  
 Bytes Being Moved
 12/04/16 16:22:10 INFO net.NetworkTopology: Adding a new node: 
 /default-rack/yy.yy.yy.yy:50176
 12/04/16 16:22:10 INFO net.NetworkTopology: Adding a new node: 
 /default-rack/xx.xx.xx.xx:50010
 12/04/16 16:22:10 INFO balancer.Balancer: 1 over-utilized: 
 [Source[xx.xx.xx.xx:50010, utilization=7.212458091389678]]
 12/04/16 16:22:10 INFO balancer.Balancer: 1 underutilized: 
 [BalancerDatanode[yy.yy.yy.yy:50176, utilization=4.650670324367203E-5]]
 12/04/16 16:22:10 INFO balancer.Balancer: Need to move 1.77 GB to make the 
 cluster balanced.
 No block can be moved. Exiting...
 Balancing took 5.142 seconds
 {noformat}
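
 (For reference, a minimal sketch of the kind of range check a fix could add; 
 the method name and message below are illustrative, not the actual patch.)

 {code}
 // Reject thresholds outside the documented (0%, 100%) range up front,
 // so "-threshold 0" fails fast instead of producing confusing output.
 static double checkedThreshold(String value) {
   final double threshold = Double.parseDouble(value);
   if (threshold <= 0 || threshold > 100) {
     throw new IllegalArgumentException("threshold out of range: " + threshold);
   }
   return threshold;
 }
 {code}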

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3331) Some admin methods in NN do not checkSuperuserPrivilege

2012-04-27 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13263873#comment-13263873
 ] 

Daryn Sharp commented on HDFS-3331:
---

A few general, and probably trivial, questions:  Should 
{{checkSuperuserPrivilege}} be called before {{checkOperation}}? It seems more 
logical that a non-admin should be rejected immediately instead of sometimes 
seeing an error that an admin would see.

{{checkOperation}} for {{finalizeUpgrade}} is moved from {{NameNodeRpcServer}} 
into {{FSNamesystem}}.  On the surface, there doesn't appear to be consistency 
between where (rpc server or namesystem) the operation is checked.  Is it 
intentional for {{finalizeUpgrade}} to be different than the other methods 
changed in this patch?

Also, regarding {{FSNamesystem.finalizeUpgrade}}, is there a reason why the 
operation and admin checks are performed within the write lock?

 Some admin methods in NN do not checkSuperuserPrivilege
 ---

 Key: HDFS-3331
 URL: https://issues.apache.org/jira/browse/HDFS-3331
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h3331_20120426.patch


 - setBalancerBandwidth and refreshNodes should checkSuperuserPrivilege
 - finalizeUpgrade should acquire the write lock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3334) ByteRangeInputStream leaks streams

2012-04-27 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13263876#comment-13263876
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-3334:
--

{code}
  case SEEK:
    if (in != null) {
      in.close();  // release the stale stream before reopening
    }
    in = openInputStream();  // reopen at the new offset
    status = StreamStatus.NORMAL;
    break;
{code}
I think the SEEK case can be simplified as above.  Otherwise, there are 
unnecessary status transitions: SEEK -> CLOSE -> SEEK -> NORMAL.

 ByteRangeInputStream leaks streams
 --

 Key: HDFS-3334
 URL: https://issues.apache.org/jira/browse/HDFS-3334
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Affects Versions: 0.24.0, 0.23.3, 2.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HDFS-3334.patch


 {{HftpFileSystem.ByteRangeInputStream}} does not implement {{close}} so it 
 leaks the underlying stream(s).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3331) Some admin methods in NN do not checkSuperuserPrivilege

2012-04-27 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13263905#comment-13263905
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-3331:
--

I agree that checkSuperuserPrivilege should be called before checkOperation, 
but the current convention is the other way around.  I recall that one reason 
to put checkSuperuserPrivilege after the safemode check is that 
checkSuperuserPrivilege is more expensive.

In the current code, it is inconsistent whether checkOperation is invoked in 
NameNodeRpcServer or FSNamesystem.  checkSuperuserPrivilege is usually invoked 
in FSNamesystem but an exception is DatanodeManager.refreshNode(Configuration). 
 It seems that FSNamesystem is the logical place to make the call since the 
checkXxx methods are declared in FSNamesystem.

Other operations in FSNamesystem such as setQuota and listCorruptFileBlocks 
call checkXxx within the lock.

I will post another patch following the current convention as closely as 
possible.
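
(For reference, a self-contained toy of the convention described above; the 
names only mirror the real FSNamesystem methods, whose checks are more 
involved.)

{code}
// Toy sketch: cheap state check first, costlier privilege check second.
class AdminOp {
  private final boolean standby;
  private final boolean superuser;

  AdminOp(boolean standby, boolean superuser) {
    this.standby = standby;
    this.superuser = superuser;
  }

  private void checkOperation() {
    if (standby) {
      throw new IllegalStateException("Operation not supported in standby state");
    }
  }

  private void checkSuperuserPrivilege() {
    if (!superuser) {
      throw new SecurityException("Superuser privilege is required");
    }
  }

  void setBalancerBandwidth(long bandwidth) {
    checkOperation();           // current convention: state check first
    checkSuperuserPrivilege();  // then the more expensive admin check
  }
}
{code}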

 Some admin methods in NN do not checkSuperuserPrivilege
 ---

 Key: HDFS-3331
 URL: https://issues.apache.org/jira/browse/HDFS-3331
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h3331_20120426.patch


 - setBalancerBandwidth and refreshNodes should checkSuperuserPrivilege
 - finalizeUpgrade should acquire the write lock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3331) Some admin methods in NN do not checkSuperuserPrivilege

2012-04-27 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-3331:
-

Attachment: h3331_20120427.patch

h3331_20120427.patch: moves the checks to FSNamesystem.

 Some admin methods in NN do not checkSuperuserPrivilege
 ---

 Key: HDFS-3331
 URL: https://issues.apache.org/jira/browse/HDFS-3331
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h3331_20120426.patch, h3331_20120427.patch


 - setBalancerBandwidth and refreshNodes should checkSuperuserPrivilege
 - finalizeUpgrade should acquire the write lock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3331) setBalancerBandwidth do not checkSuperuserPrivilege

2012-04-27 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-3331:
-

Description: 
- setBalancerBandwidth should checkSuperuserPrivilege
- finalizeUpgrade should acquire the write lock.

  was:
- setBalancerBandwidth and refreshNodes should checkSuperuserPrivilege
- finalizeUpgrade should acquire the write lock.

Summary: setBalancerBandwidth do not checkSuperuserPrivilege  (was: 
Some admin methods in NN do not checkSuperuserPrivilege)

 setBalancerBandwidth do not checkSuperuserPrivilege
 ---

 Key: HDFS-3331
 URL: https://issues.apache.org/jira/browse/HDFS-3331
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h3331_20120426.patch, h3331_20120427.patch


 - setBalancerBandwidth should checkSuperuserPrivilege
 - finalizeUpgrade should acquire the write lock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3332) NullPointerException in DN when directoryscanner is trying to report bad blocks

2012-04-27 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13263911#comment-13263911
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-3332:
--

This is probably related to the recent DatanodeID changes: HDFS-3164, HDFS-3138, 
HDFS-3171, HDFS-3144, HDFS-3216.

 NullPointerException in DN when directoryscanner is trying to report bad 
 blocks
 ---

 Key: HDFS-3332
 URL: https://issues.apache.org/jira/browse/HDFS-3332
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Affects Versions: 3.0.0
 Environment: HDFS
Reporter: amith
Assignee: amith
 Fix For: 3.0.0


 There is 1 NN and 1 DN (NN is started with HA conf)
 I corrupted 1 block and found 
 {code}
 2012-04-27 09:59:01,214 INFO  datanode.DataNode 
 (BPServiceActor.java:blockReport(401)) - BlockReport of 2 blocks took 0 msec 
 to generate and 5 msecs for RPC and NN processing
 2012-04-27 09:59:01,214 INFO  datanode.DataNode 
 (BPServiceActor.java:blockReport(420)) - sent block report, processed 
 command:org.apache.hadoop.hdfs.server.protocol.FinalizeCommand@3b756db3
 2012-04-27 09:59:01,726 INFO  datanode.DirectoryScanner 
 (DirectoryScanner.java:scan(390)) - BlockPool 
 BP-2087868617-10.18.40.95-1335500488012 Total blocks: 2, missing metadata 
 files:0, missing block files:0, missing blocks in memory:0, mismatched 
 blocks:1
 2012-04-27 09:59:01,727 WARN  impl.FsDatasetImpl 
 (FsDatasetImpl.java:checkAndUpdate(1366)) - Updating size of block 
 -4466699320171028643 from 1024 to 1034
 2012-04-27 09:59:01,727 WARN  impl.FsDatasetImpl 
 (FsDatasetImpl.java:checkAndUpdate(1374)) - Reporting the block 
 blk_-4466699320171028643_1004 as corrupt due to length mismatch
 2012-04-27 09:59:01,728 DEBUG ipc.Client (Client.java:sendParam(807)) - IPC 
 Client (1957050620) connection to /10.18.40.95:8020 from root sending #257
 2012-04-27 09:59:01,730 DEBUG ipc.Client (Client.java:receiveResponse(848)) - 
 IPC Client (1957050620) connection to /10.18.40.95:8020 from root got value 
 #257
 2012-04-27 09:59:01,730 DEBUG ipc.ProtobufRpcEngine 
 (ProtobufRpcEngine.java:invoke(193)) - Call: reportBadBlocks 2
 2012-04-27 09:59:01,731 ERROR datanode.DirectoryScanner 
 (DirectoryScanner.java:run(288)) - Exception during DirectoryScanner 
 execution - will continue next cycle
 java.lang.NullPointerException
   at org.apache.hadoop.hdfs.protocol.DatanodeID.<init>(DatanodeID.java:66)
   at 
 org.apache.hadoop.hdfs.protocol.DatanodeInfo.<init>(DatanodeInfo.java:87)
   at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.reportBadBlocks(BPServiceActor.java:238)
   at 
 org.apache.hadoop.hdfs.server.datanode.BPOfferService.reportBadBlocks(BPOfferService.java:187)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.reportBadBlocks(DataNode.java:559)
   at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.checkAndUpdate(FsDatasetImpl.java:1377)
   at 
 org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile(DirectoryScanner.java:318)
   at 
 org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.run(DirectoryScanner.java:284)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at 
 java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:619)
 {code}
 Here, when the DirectoryScanner is trying to report the bad block, we get an NPE.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3334) ByteRangeInputStream leaks streams

2012-04-27 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-3334:
--

Attachment: HDFS-3334-1.patch

Sure, I simplified it.  I had originally done it that way, but thought it would 
be better to not directly manipulate the underlying stream outside of the 
method intended to do the delegation.

 ByteRangeInputStream leaks streams
 --

 Key: HDFS-3334
 URL: https://issues.apache.org/jira/browse/HDFS-3334
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Affects Versions: 0.24.0, 0.23.3, 2.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HDFS-3334-1.patch, HDFS-3334.patch


 {{HftpFileSystem.ByteRangeInputStream}} does not implement {{close}} so it 
 leaks the underlying stream(s).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3331) setBalancerBandwidth do not checkSuperuserPrivilege

2012-04-27 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13263932#comment-13263932
 ] 

Daryn Sharp commented on HDFS-3331:
---

+1, assuming you meant to remove {{getFSImage().finalizeUpgrade()}} from 
{{FSNamesystem#refreshNodes()}}.

 setBalancerBandwidth do not checkSuperuserPrivilege
 ---

 Key: HDFS-3331
 URL: https://issues.apache.org/jira/browse/HDFS-3331
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h3331_20120426.patch, h3331_20120427.patch


 - setBalancerBandwidth should checkSuperuserPrivilege
 - finalizeUpgrade should acquire the write lock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3334) ByteRangeInputStream leaks streams

2012-04-27 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13263933#comment-13263933
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-3334:
--

+1 patch looks good.  Since the change is minor, I will run some tests and then 
commit the patch without waiting for Jenkins again.

 ByteRangeInputStream leaks streams
 --

 Key: HDFS-3334
 URL: https://issues.apache.org/jira/browse/HDFS-3334
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Affects Versions: 0.24.0, 0.23.3, 2.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HDFS-3334-1.patch, HDFS-3334.patch


 {{HftpFileSystem.ByteRangeInputStream}} does not implement {{close}} so it 
 leaks the underlying stream(s).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3331) setBalancerBandwidth do not checkSuperuserPrivilege

2012-04-27 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13263936#comment-13263936
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-3331:
--

The getFSImage().finalizeUpgrade() was originally from 
FSNamesystem.finalizeUpgrade().  FSNamesystem.refreshNodes() is a new method.  
svn diff mixes the lines up.

 setBalancerBandwidth do not checkSuperuserPrivilege
 ---

 Key: HDFS-3331
 URL: https://issues.apache.org/jira/browse/HDFS-3331
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h3331_20120426.patch, h3331_20120427.patch


 - setBalancerBandwidth should checkSuperuserPrivilege
 - finalizeUpgrade should acquire the write lock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-3335) check for edit log corruption at the end of the log

2012-04-27 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-3335:
--

 Summary: check for edit log corruption at the end of the log
 Key: HDFS-3335
 URL: https://issues.apache.org/jira/browse/HDFS-3335
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 0.23.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


Even after encountering an OP_INVALID, we should check the end of the edit log 
to make sure that it contains no more edits.

This will catch things like rare race conditions or log corruptions that would 
otherwise remain undetected.  They will go from being silent data-loss 
scenarios to being cases that we can detect and fix.

Using recovery mode, we can choose to ignore the end of the log if necessary.
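
(For illustration, a hedged sketch of the idea; the helper below is 
hypothetical and assumes the tail of a preallocated edit log consists of 
0x00/0xFF padding bytes, which may not match the actual on-disk format.)

{code}
import java.io.IOException;
import java.io.InputStream;

final class EditLogTailCheck {
  /** Returns true if every remaining byte looks like padding (0x00 or 0xFF). */
  static boolean restIsPadding(InputStream in) throws IOException {
    int b;
    while ((b = in.read()) != -1) {
      if (b != 0x00 && b != 0xFF) {
        return false;  // unexpected byte after OP_INVALID: possible lost edits
      }
    }
    return true;
  }
}
{code}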

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3334) ByteRangeInputStream leaks streams

2012-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13263941#comment-13263941
 ] 

Hudson commented on HDFS-3334:
--

Integrated in Hadoop-Hdfs-trunk-Commit #2220 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2220/])
HDFS-3334. Fix ByteRangeInputStream stream leakage.  Contributed by Daryn 
Sharp (Revision 1331570)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1331570
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/ByteRangeInputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestByteRangeInputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHftpFileSystem.java


 ByteRangeInputStream leaks streams
 --

 Key: HDFS-3334
 URL: https://issues.apache.org/jira/browse/HDFS-3334
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Affects Versions: 0.24.0, 0.23.3, 2.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 0.23.3

 Attachments: HDFS-3334-1.patch, HDFS-3334.patch


 {{HftpFileSystem.ByteRangeInputStream}} does not implement {{close}} so it 
 leaks the underlying stream(s).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3334) ByteRangeInputStream leaks streams

2012-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13263943#comment-13263943
 ] 

Hudson commented on HDFS-3334:
--

Integrated in Hadoop-Common-trunk-Commit #2146 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2146/])
HDFS-3334. Fix ByteRangeInputStream stream leakage.  Contributed by Daryn 
Sharp (Revision 1331570)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1331570
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/ByteRangeInputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestByteRangeInputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHftpFileSystem.java


 ByteRangeInputStream leaks streams
 --

 Key: HDFS-3334
 URL: https://issues.apache.org/jira/browse/HDFS-3334
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Affects Versions: 0.24.0, 0.23.3, 2.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 0.23.3

 Attachments: HDFS-3334-1.patch, HDFS-3334.patch


 {{HftpFileSystem.ByteRangeInputStream}} does not implement {{close}} so it 
 leaks the underlying stream(s).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3331) setBalancerBandwidth do not checkSuperuserPrivilege

2012-04-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13263958#comment-13263958
 ] 

Hadoop QA commented on HDFS-3331:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12524906/h3331_20120427.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2344//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2344//console

This message is automatically generated.

 setBalancerBandwidth do not checkSuperuserPrivilege
 ---

 Key: HDFS-3331
 URL: https://issues.apache.org/jira/browse/HDFS-3331
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h3331_20120426.patch, h3331_20120427.patch


 - setBalancerBandwidth should checkSuperuserPrivilege
 - finalizeUpgrade should acquire the write lock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3334) ByteRangeInputStream leaks streams

2012-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13263962#comment-13263962
 ] 

Hudson commented on HDFS-3334:
--

Integrated in Hadoop-Mapreduce-trunk-Commit #2163 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2163/])
HDFS-3334. Fix ByteRangeInputStream stream leakage.  Contributed by Daryn 
Sharp (Revision 1331570)

 Result = ABORTED
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1331570
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/ByteRangeInputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestByteRangeInputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHftpFileSystem.java


 ByteRangeInputStream leaks streams
 --

 Key: HDFS-3334
 URL: https://issues.apache.org/jira/browse/HDFS-3334
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Affects Versions: 0.24.0, 0.23.3, 2.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 0.23.3

 Attachments: HDFS-3334-1.patch, HDFS-3334.patch


 {{HftpFileSystem.ByteRangeInputStream}} does not implement {{close}} so it 
 leaks the underlying stream(s).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3331) setBalancerBandwidth do not checkSuperuserPrivilege

2012-04-27 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-3331:
-

Attachment: h3331_20120427_0.23.patch

@Robert, thanks for taking a look at the patch.  This is for 0.23.

h3331_20120427_0.23.patch 

 setBalancerBandwidth do not checkSuperuserPrivilege
 ---

 Key: HDFS-3331
 URL: https://issues.apache.org/jira/browse/HDFS-3331
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h3331_20120426.patch, h3331_20120427.patch, 
 h3331_20120427_0.23.patch


 - setBalancerBandwidth should checkSuperuserPrivilege
 - finalizeUpgrade should acquire the write lock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3334) ByteRangeInputStream leaks streams

2012-04-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13263967#comment-13263967
 ] 

Hadoop QA commented on HDFS-3334:
-

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12524909/HDFS-3334-1.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 2 new or modified test 
files.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2345//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2345//console

This message is automatically generated.

 ByteRangeInputStream leaks streams
 --

 Key: HDFS-3334
 URL: https://issues.apache.org/jira/browse/HDFS-3334
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Affects Versions: 0.24.0, 0.23.3, 2.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 0.23.3

 Attachments: HDFS-3334-1.patch, HDFS-3334.patch


 {{HftpFileSystem.ByteRangeInputStream}} does not implement {{close}} so it 
 leaks the underlying stream(s).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3331) setBalancerBandwidth do not checkSuperuserPrivilege

2012-04-27 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-3331:
-

   Resolution: Fixed
Fix Version/s: 0.23.3
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I have committed this.

 setBalancerBandwidth do not checkSuperuserPrivilege
 ---

 Key: HDFS-3331
 URL: https://issues.apache.org/jira/browse/HDFS-3331
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Fix For: 0.23.3

 Attachments: h3331_20120426.patch, h3331_20120427.patch, 
 h3331_20120427_0.23.patch


 - setBalancerBandwidth should checkSuperuserPrivilege
 - finalizeUpgrade should acquire the write lock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3293) Implement equals for storageinfo and journainfo class.

2012-04-27 Thread Hari Mankude (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Mankude updated HDFS-3293:
---

Target Version/s: 0.24.0
  Status: Patch Available  (was: Open)

 Implement equals for storageinfo and journainfo class. 
 ---

 Key: HDFS-3293
 URL: https://issues.apache.org/jira/browse/HDFS-3293
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Reporter: Hari Mankude
Assignee: Hari Mankude
Priority: Minor
 Attachments: hdfs-3293.patch


 Implement equals for storageinfo and journalinfo class. Also journalinfo 
 class needs a toString() method.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3293) Implement equals for storageinfo and journainfo class.

2012-04-27 Thread Hari Mankude (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Mankude updated HDFS-3293:
---

Attachment: hdfs-3293.patch

 Implement equals for storageinfo and journainfo class. 
 ---

 Key: HDFS-3293
 URL: https://issues.apache.org/jira/browse/HDFS-3293
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Reporter: Hari Mankude
Assignee: Hari Mankude
Priority: Minor
 Attachments: hdfs-3293.patch


 Implement equals for storageinfo and journalinfo class. Also journalinfo 
 class needs a toString() method.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3331) setBalancerBandwidth do not checkSuperuserPrivilege

2012-04-27 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13263986#comment-13263986
 ] 

Robert Joseph Evans commented on HDFS-3331:
---

Thanks Nicholas!

 setBalancerBandwidth do not checkSuperuserPrivilege
 ---

 Key: HDFS-3331
 URL: https://issues.apache.org/jira/browse/HDFS-3331
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Fix For: 0.23.3

 Attachments: h3331_20120426.patch, h3331_20120427.patch, 
 h3331_20120427_0.23.patch


 - setBalancerBandwidth should checkSuperuserPrivilege
 - finalizeUpgrade should acquire the write lock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3331) setBalancerBandwidth do not checkSuperuserPrivilege

2012-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13263985#comment-13263985
 ] 

Hudson commented on HDFS-3331:
--

Integrated in Hadoop-Hdfs-trunk-Commit #2221 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2221/])
HDFS-3331. In namenode, check superuser privilege for setBalancerBandwidth 
and acquire the write lock for finalizeUpgrade. (Revision 1331598)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1331598
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java


 setBalancerBandwidth do not checkSuperuserPrivilege
 ---

 Key: HDFS-3331
 URL: https://issues.apache.org/jira/browse/HDFS-3331
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Fix For: 0.23.3

 Attachments: h3331_20120426.patch, h3331_20120427.patch, 
 h3331_20120427_0.23.patch


 - setBalancerBandwidth should checkSuperuserPrivilege
 - finalizeUpgrade should acquire the write lock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3293) Implement equals for storageinfo and journainfo class.

2012-04-27 Thread Hari Mankude (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13263987#comment-13263987
 ] 

Hari Mankude commented on HDFS-3293:


The changes are trivial, so a test is not included.

 Implement equals for storageinfo and journainfo class. 
 ---

 Key: HDFS-3293
 URL: https://issues.apache.org/jira/browse/HDFS-3293
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Reporter: Hari Mankude
Assignee: Hari Mankude
Priority: Minor
 Attachments: hdfs-3293.patch


 Implement equals for storageinfo and journalinfo class. Also journalinfo 
 class needs a toString() method.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3331) setBalancerBandwidth do not checkSuperuserPrivilege

2012-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13263988#comment-13263988
 ] 

Hudson commented on HDFS-3331:
--

Integrated in Hadoop-Common-trunk-Commit #2147 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2147/])
HDFS-3331. In namenode, check superuser privilege for setBalancerBandwidth 
and acquire the write lock for finalizeUpgrade. (Revision 1331598)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1331598
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java


 setBalancerBandwidth do not checkSuperuserPrivilege
 ---

 Key: HDFS-3331
 URL: https://issues.apache.org/jira/browse/HDFS-3331
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Fix For: 0.23.3

 Attachments: h3331_20120426.patch, h3331_20120427.patch, 
 h3331_20120427_0.23.patch


 - setBalancerBandwidth should checkSuperuserPrivilege
 - finalizeUpgrade should acquire the write lock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HDFS-3205) testHANameNodesWithFederation is failing in trunk

2012-04-27 Thread Hari Mankude (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Mankude resolved HDFS-3205.


Resolution: Duplicate

This is a duplicate of HDFS-2960.

 testHANameNodesWithFederation is failing in trunk
 -

 Key: HDFS-3205
 URL: https://issues.apache.org/jira/browse/HDFS-3205
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha, name-node
Reporter: Hari Mankude
Assignee: Hari Mankude
Priority: Minor

 The test is failing with the error
 org.junit.ComparisonFailure: expected:<ns1-nn1.example.com[]:8020> but 
 was:<ns1-nn1.example.com[/50.28.50.93]:8020>

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3293) Implement equals for storageinfo and journainfo class.

2012-04-27 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13264000#comment-13264000
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-3293:
--

- Since clusterID is a String, we need to call clusterID.equals(..).

- Add @Override for JournalInfo.toString().

- We need to also override hashCode().  Otherwise, there will be findbugs 
warnings.
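
(For reference, a hedged sketch of the requested pattern; the fields below are 
hypothetical and do not match the real StorageInfo/JournalInfo members.)

{code}
class Info {
  final int layoutVersion;
  final String clusterID;

  Info(int layoutVersion, String clusterID) {
    this.layoutVersion = layoutVersion;
    this.clusterID = clusterID;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof Info)) return false;
    Info that = (Info) o;
    // String fields are compared with equals(..), not ==
    return layoutVersion == that.layoutVersion
        && clusterID.equals(that.clusterID);
  }

  @Override
  public int hashCode() {
    // Must stay consistent with equals() to avoid findbugs warnings
    return 31 * layoutVersion + clusterID.hashCode();
  }

  @Override
  public String toString() {
    return "Info[layoutVersion=" + layoutVersion
        + ", clusterID=" + clusterID + "]";
  }
}
{code}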

 Implement equals for storageinfo and journainfo class. 
 ---

 Key: HDFS-3293
 URL: https://issues.apache.org/jira/browse/HDFS-3293
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Reporter: Hari Mankude
Assignee: Hari Mankude
Priority: Minor
 Attachments: hdfs-3293.patch


 Implement equals for storageinfo and journalinfo class. Also journalinfo 
 class needs a toString() method.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3293) Implement equals for storageinfo and journainfo class.

2012-04-27 Thread Hari Mankude (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Mankude updated HDFS-3293:
---

Attachment: hdfs-3293-1.patch

 Implement equals for storageinfo and journainfo class. 
 ---

 Key: HDFS-3293
 URL: https://issues.apache.org/jira/browse/HDFS-3293
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Reporter: Hari Mankude
Assignee: Hari Mankude
Priority: Minor
 Attachments: hdfs-3293-1.patch, hdfs-3293.patch


 Implement equals for storageinfo and journalinfo class. Also journalinfo 
 class needs a toString() method.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3293) Implement equals for storageinfo and journainfo class.

2012-04-27 Thread Hari Mankude (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13264022#comment-13264022
 ] 

Hari Mankude commented on HDFS-3293:


Fixed all the issues mentioned by Nicholas.

 Implement equals for storageinfo and journainfo class. 
 ---

 Key: HDFS-3293
 URL: https://issues.apache.org/jira/browse/HDFS-3293
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Reporter: Hari Mankude
Assignee: Hari Mankude
Priority: Minor
 Attachments: hdfs-3293-1.patch, hdfs-3293.patch


 Implement equals for storageinfo and journalinfo class. Also journalinfo 
 class needs a toString() method.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3293) Implement equals for storageinfo and journainfo class.

2012-04-27 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-3293:
-

Hadoop Flags: Reviewed

+1 patch looks good.

 Implement equals for storageinfo and journainfo class. 
 ---

 Key: HDFS-3293
 URL: https://issues.apache.org/jira/browse/HDFS-3293
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Reporter: Hari Mankude
Assignee: Hari Mankude
Priority: Minor
 Attachments: hdfs-3293-1.patch, hdfs-3293.patch


 Implement equals for storageinfo and journalinfo class. Also journalinfo 
 class needs a toString() method.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3331) setBalancerBandwidth do not checkSuperuserPrivilege

2012-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13264030#comment-13264030
 ] 

Hudson commented on HDFS-3331:
--

Integrated in Hadoop-Mapreduce-trunk-Commit #2164 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2164/])
HDFS-3331. In namenode, check superuser privilege for setBalancerBandwidth 
and acquire the write lock for finalizeUpgrade. (Revision 1331598)

 Result = ABORTED
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1331598
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java


 setBalancerBandwidth do not checkSuperuserPrivilege
 ---

 Key: HDFS-3331
 URL: https://issues.apache.org/jira/browse/HDFS-3331
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Fix For: 0.23.3

 Attachments: h3331_20120426.patch, h3331_20120427.patch, 
 h3331_20120427_0.23.patch


 - setBalancerBandwidth should checkSuperuserPrivilege
 - finalizeUpgrade should acquire the write lock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (HDFS-3326) Even when dfs.support.append is set to true log message displays that the append is disabled

2012-04-27 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins reassigned HDFS-3326:
-

Assignee: Matthew Jacobs

 Even when dfs.support.append is set to true log message displays that the 
 append is disabled
 

 Key: HDFS-3326
 URL: https://issues.apache.org/jira/browse/HDFS-3326
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 2.0.0
Reporter: J.Andreina
Assignee: Matthew Jacobs
Priority: Trivial
  Labels: newbie

 dfs.support.append is set to true
 and the NN is started in non-HA mode.
 The NN log then says that append is disabled.
 This is because the log statement prints the HA-enabled value instead of the 
 append-enabled value; since the NN was started in non-HA mode, the logged 
 value is false.
 Code:
 =
 {noformat}
 this.supportAppends = conf.getBoolean(DFS_SUPPORT_APPEND_KEY, 
 DFS_SUPPORT_APPEND_DEFAULT);
   LOG.info("Append Enabled: " + haEnabled);{noformat}
 NN logs
 
 {noformat}
 2012-04-25 21:11:09,693 INFO 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
 2012-04-25 21:11:09,702 INFO 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: 
 false{noformat}
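
 (Presumably the fix is a one-line change that logs the append flag itself; 
 the field name below is taken from the snippet above.)

 {code}
 LOG.info("Append Enabled: " + supportAppends);  // was: haEnabled
 {code}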

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3293) Implement equals for storageinfo and journainfo class.

2012-04-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13264063#comment-13264063
 ] 

Hadoop QA commented on HDFS-3293:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12524928/hdfs-3293.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

-1 findbugs.  The patch appears to introduce 9 new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2346//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2346//artifact/trunk/hadoop-hdfs-project/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2346//console

This message is automatically generated.

 Implement equals for storageinfo and journainfo class. 
 ---

 Key: HDFS-3293
 URL: https://issues.apache.org/jira/browse/HDFS-3293
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Reporter: Hari Mankude
Assignee: Hari Mankude
Priority: Minor
 Attachments: hdfs-3293-1.patch, hdfs-3293.patch


 Implement equals for storageinfo and journalinfo class. Also journalinfo 
 class needs a toString() method.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3326) Even when dfs.support.append is set to true log message displays that the append is disabled

2012-04-27 Thread Matthew Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthew Jacobs updated HDFS-3326:
-

Status: Patch Available  (was: Open)

 Even when dfs.support.append is set to true log message displays that the 
 append is disabled
 

 Key: HDFS-3326
 URL: https://issues.apache.org/jira/browse/HDFS-3326
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 2.0.0
Reporter: J.Andreina
Assignee: Matthew Jacobs
Priority: Trivial
  Labels: newbie
 Attachments: hdfs-3326.txt


 dfs.support.append is set to true
 and the NN is started in non-HA mode.
 The NN log then says that append is disabled.
 This is because the log statement prints the HA-enabled value instead of the 
 append-enabled value; since the NN was started in non-HA mode, the logged 
 value is false.
 Code:
 =
 {noformat}
 this.supportAppends = conf.getBoolean(DFS_SUPPORT_APPEND_KEY, 
 DFS_SUPPORT_APPEND_DEFAULT);
   LOG.info("Append Enabled: " + haEnabled);{noformat}
 NN logs
 
 {noformat}
 2012-04-25 21:11:09,693 INFO 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
 2012-04-25 21:11:09,702 INFO 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: 
 false{noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3326) Even when dfs.support.append is set to true log message displays that the append is disabled

2012-04-27 Thread Matthew Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthew Jacobs updated HDFS-3326:
-

Attachment: hdfs-3326.txt

Patch attached

 Even when dfs.support.append is set to true log message displays that the 
 append is disabled
 

 Key: HDFS-3326
 URL: https://issues.apache.org/jira/browse/HDFS-3326
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 2.0.0
Reporter: J.Andreina
Assignee: Matthew Jacobs
Priority: Trivial
  Labels: newbie
 Attachments: hdfs-3326.txt


 dfs.support.append is set to true
 and the NN is started in non-HA mode.
 The NN log then says that append is disabled.
 This is because the log statement prints the HA-enabled value instead of the 
 append-enabled value; since the NN was started in non-HA mode, the logged 
 value is false.
 Code:
 =
 {noformat}
 this.supportAppends = conf.getBoolean(DFS_SUPPORT_APPEND_KEY, 
 DFS_SUPPORT_APPEND_DEFAULT);
   LOG.info("Append Enabled: " + haEnabled);{noformat}
 NN logs
 
 {noformat}
 2012-04-25 21:11:09,693 INFO 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
 2012-04-25 21:11:09,702 INFO 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: 
 false{noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3326) Append enabled log message uses the wrong variable

2012-04-27 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3326:
--

Summary: Append enabled log message uses the wrong variable  (was: Even 
when dfs.support.append is set to true log message displays that the append is 
disabled)

 Append enabled log message uses the wrong variable
 --

 Key: HDFS-3326
 URL: https://issues.apache.org/jira/browse/HDFS-3326
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 2.0.0
Reporter: J.Andreina
Assignee: Matthew Jacobs
Priority: Trivial
  Labels: newbie
 Attachments: hdfs-3326.txt


 dfs.support.append is set to true
 and the NN is started in non-HA mode.
 The NN log then says that append is disabled.
 This is because the log statement prints the HA-enabled value instead of the 
 append-enabled value; since the NN was started in non-HA mode, the logged 
 value is false.
 Code:
 =
 {noformat}
 this.supportAppends = conf.getBoolean(DFS_SUPPORT_APPEND_KEY, 
 DFS_SUPPORT_APPEND_DEFAULT);
   LOG.info("Append Enabled: " + haEnabled);{noformat}
 NN logs
 
 {noformat}
 2012-04-25 21:11:09,693 INFO 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
 2012-04-25 21:11:09,702 INFO 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: 
 false{noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3326) Append enabled log message uses the wrong variable

2012-04-27 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13264070#comment-13264070
 ] 

Eli Collins commented on HDFS-3326:
---

+1 looks good. I'll commit this without Jenkins since it's a trivial change.

 Append enabled log message uses the wrong variable
 --

 Key: HDFS-3326
 URL: https://issues.apache.org/jira/browse/HDFS-3326
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 2.0.0
Reporter: J.Andreina
Assignee: Matthew Jacobs
Priority: Trivial
  Labels: newbie
 Attachments: hdfs-3326.txt


 dfs.support.append is set to true
 and the NN is started in non-HA mode.
 The NN log then says that append is disabled.
 This is because the log statement prints the HA-enabled value instead of the 
 append-enabled value; since the NN was started in non-HA mode, the logged 
 value is false.
 Code:
 =
 {noformat}
 this.supportAppends = conf.getBoolean(DFS_SUPPORT_APPEND_KEY, 
 DFS_SUPPORT_APPEND_DEFAULT);
   LOG.info("Append Enabled: " + haEnabled);{noformat}
 NN logs
 
 {noformat}
 2012-04-25 21:11:09,693 INFO 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
 2012-04-25 21:11:09,702 INFO 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: 
 false{noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3326) Append enabled log message uses the wrong variable

2012-04-27 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3326:
--

  Resolution: Fixed
   Fix Version/s: 2.0.0
Target Version/s:   (was: 2.0.0)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I've committed this and merged to branch-2, thanks Matt!

 Append enabled log message uses the wrong variable
 --

 Key: HDFS-3326
 URL: https://issues.apache.org/jira/browse/HDFS-3326
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 2.0.0
Reporter: J.Andreina
Assignee: Matthew Jacobs
Priority: Trivial
  Labels: newbie
 Fix For: 2.0.0

 Attachments: hdfs-3326.txt


 dfs.support.append is set to true
 started NN in non-HA mode
 In the NN log, append enabled is reported as false.
 This is because the log statement prints the HA enabled value instead of the 
 append setting; since the NN was started in non-HA mode, the logged value is false.
 Code:
 =
 {noformat}
 this.supportAppends = conf.getBoolean(DFS_SUPPORT_APPEND_KEY, 
 DFS_SUPPORT_APPEND_DEFAULT);
   LOG.info("Append Enabled: " + haEnabled);{noformat}
 NN logs
 
 {noformat}
 2012-04-25 21:11:09,693 INFO 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
 2012-04-25 21:11:09,702 INFO 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: 
 false{noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3326) Append enabled log message uses the wrong variable

2012-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13264087#comment-13264087
 ] 

Hudson commented on HDFS-3326:
--

Integrated in Hadoop-Common-trunk-Commit #2148 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2148/])
HDFS-3326. Append enabled log message uses the wrong variable. Contributed 
by Matthew Jacobs (Revision 1331626)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1331626
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


 Append enabled log message uses the wrong variable
 --

 Key: HDFS-3326
 URL: https://issues.apache.org/jira/browse/HDFS-3326
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 2.0.0
Reporter: J.Andreina
Assignee: Matthew Jacobs
Priority: Trivial
  Labels: newbie
 Fix For: 2.0.0

 Attachments: hdfs-3326.txt


 dfs.support.append is set to true
 started NN in non-HA mode
 In the NN log, append enabled is reported as false.
 This is because the log statement prints the HA enabled value instead of the 
 append setting; since the NN was started in non-HA mode, the logged value is false.
 Code:
 =
 {noformat}
 this.supportAppends = conf.getBoolean(DFS_SUPPORT_APPEND_KEY, 
 DFS_SUPPORT_APPEND_DEFAULT);
   LOG.info("Append Enabled: " + haEnabled);{noformat}
 NN logs
 
 {noformat}
 2012-04-25 21:11:09,693 INFO 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
 2012-04-25 21:11:09,702 INFO 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: 
 false{noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3326) Append enabled log message uses the wrong variable

2012-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13264100#comment-13264100
 ] 

Hudson commented on HDFS-3326:
--

Integrated in Hadoop-Hdfs-trunk-Commit # (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit//])
HDFS-3326. Append enabled log message uses the wrong variable. Contributed 
by Matthew Jacobs (Revision 1331626)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1331626
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


 Append enabled log message uses the wrong variable
 --

 Key: HDFS-3326
 URL: https://issues.apache.org/jira/browse/HDFS-3326
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 2.0.0
Reporter: J.Andreina
Assignee: Matthew Jacobs
Priority: Trivial
  Labels: newbie
 Fix For: 2.0.0

 Attachments: hdfs-3326.txt


 dfs.support.append is set to true
 started NN in non-HA mode
 In the NN log, append enabled is reported as false.
 This is because the log statement prints the HA enabled value instead of the 
 append setting; since the NN was started in non-HA mode, the logged value is false.
 Code:
 =
 {noformat}
 this.supportAppends = conf.getBoolean(DFS_SUPPORT_APPEND_KEY, 
 DFS_SUPPORT_APPEND_DEFAULT);
   LOG.info("Append Enabled: " + haEnabled);{noformat}
 NN logs
 
 {noformat}
 2012-04-25 21:11:09,693 INFO 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
 2012-04-25 21:11:09,702 INFO 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: 
 false{noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3326) Append enabled log message uses the wrong variable

2012-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13264130#comment-13264130
 ] 

Hudson commented on HDFS-3326:
--

Integrated in Hadoop-Mapreduce-trunk-Commit #2165 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2165/])
HDFS-3326. Append enabled log message uses the wrong variable. Contributed 
by Matthew Jacobs (Revision 1331626)

 Result = ABORTED
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1331626
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


 Append enabled log message uses the wrong variable
 --

 Key: HDFS-3326
 URL: https://issues.apache.org/jira/browse/HDFS-3326
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 2.0.0
Reporter: J.Andreina
Assignee: Matthew Jacobs
Priority: Trivial
  Labels: newbie
 Fix For: 2.0.0

 Attachments: hdfs-3326.txt


 dfs.support.append is set to true
 started NN in non-HA mode
 At the NN side log the append enable is set to false.
 This is because in code append enabled is set to HA enabled value.Since 
 Started NN in non-HA mode the value for append is false
 Code:
 =
 {noformat}
 this.supportAppends = conf.getBoolean(DFS_SUPPORT_APPEND_KEY, 
 DFS_SUPPORT_APPEND_DEFAULT);
   LOG.info(Append Enabled:  + haEnabled);{noformat}
 NN logs
 
 {noformat}
 2012-04-25 21:11:09,693 INFO 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
 2012-04-25 21:11:09,702 INFO 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: 
 false{noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3335) check for edit log corruption at the end of the log

2012-04-27 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-3335:
---

Attachment: HDFS-3335-b1.001.patch

 check for edit log corruption at the end of the log
 ---

 Key: HDFS-3335
 URL: https://issues.apache.org/jira/browse/HDFS-3335
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 0.23.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-3335-b1.001.patch


 Even after encountering an OP_INVALID, we should check the end of the edit 
 log to make sure that it contains no more edits.
 This will catch things like rare race conditions or log corruptions that 
 would otherwise remain undetected.  They will go from being silent data loss 
 scenarios to being cases that we can detect and fix.
 Using recovery mode, we can choose to ignore the end of the log if necessary.
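
A hedged sketch of what such an end-of-log check could look like: after the 
OP_INVALID marker, a cleanly finalized log should contain only zero padding, 
so any other byte is suspicious. The class and method names below are 
illustrative assumptions, not the attached patch:

{code}
import java.io.IOException;
import java.io.InputStream;

class EditLogTailCheck {
  /**
   * Scan the stream remaining after the last valid op. Zero bytes are
   * treated as padding; anything else indicates possible trailing edits
   * or corruption that would otherwise go undetected.
   */
  static void checkForTrailingGarbage(InputStream in) throws IOException {
    long offset = 0;
    int b;
    while ((b = in.read()) != -1) {
      if (b != 0) {
        throw new IOException("Unexpected byte 0x" + Integer.toHexString(b)
            + " found " + offset + " bytes past the last valid edit");
      }
      offset++;
    }
  }
}
{code}

In recovery mode the exception would presumably be downgraded to a warning, 
matching the "choose to ignore the end of the log" behavior described above.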

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3335) check for edit log corruption at the end of the log

2012-04-27 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-3335:
---

Status: Patch Available  (was: Open)

 check for edit log corruption at the end of the log
 ---

 Key: HDFS-3335
 URL: https://issues.apache.org/jira/browse/HDFS-3335
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 0.23.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-3335-b1.001.patch


 Even after encountering an OP_INVALID, we should check the end of the edit 
 log to make sure that it contains no more edits.
 This will catch things like rare race conditions or log corruptions that 
 would otherwise remain undetected.  They will go from being silent data loss 
 scenarios to being cases that we can detect and fix.
 Using recovery mode, we can choose to ignore the end of the log if necessary.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3293) Implement equals for storageinfo and journalinfo class.

2012-04-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13264153#comment-13264153
 ] 

Hadoop QA commented on HDFS-3293:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12524935/hdfs-3293-1.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

-1 findbugs.  The patch appears to introduce 6 new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests:
  
org.apache.hadoop.hdfs.server.namenode.TestListCorruptFileBlocks

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2347//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2347//artifact/trunk/hadoop-hdfs-project/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2347//console

This message is automatically generated.

 Implement equals for storageinfo and journalinfo class. 
 ---

 Key: HDFS-3293
 URL: https://issues.apache.org/jira/browse/HDFS-3293
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Reporter: Hari Mankude
Assignee: Hari Mankude
Priority: Minor
 Attachments: hdfs-3293-1.patch, hdfs-3293.patch


 Implement equals for storageinfo and journalinfo class. Also journalinfo 
 class needs a toString() method.
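
A minimal sketch of the requested methods. The class and field names below 
are illustrative assumptions (the actual StorageInfo/JournalInfo fields may 
differ), and hashCode is included because it must be overridden together 
with equals:

{code}
/** Illustrative stand-in; not the actual StorageInfo/JournalInfo classes. */
class InfoSketch {
  int layoutVersion;
  int namespaceID;
  long cTime;

  @Override
  public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof InfoSketch)) return false;
    InfoSketch that = (InfoSketch) o;
    return layoutVersion == that.layoutVersion
        && namespaceID == that.namespaceID
        && cTime == that.cTime;
  }

  @Override
  public int hashCode() {
    // Keep consistent with equals; plain arithmetic stays Java 6 compatible.
    int result = layoutVersion;
    result = 31 * result + namespaceID;
    result = 31 * result + (int) (cTime ^ (cTime >>> 32));
    return result;
  }

  @Override
  public String toString() {
    return "lv=" + layoutVersion + ";nsid=" + namespaceID + ";cTime=" + cTime;
  }
}
{code}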

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3335) check for edit log corruption at the end of the log

2012-04-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13264173#comment-13264173
 ] 

Hadoop QA commented on HDFS-3335:
-

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12524954/HDFS-3335-b1.001.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

-1 patch.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2348//console

This message is automatically generated.

 check for edit log corruption at the end of the log
 ---

 Key: HDFS-3335
 URL: https://issues.apache.org/jira/browse/HDFS-3335
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 0.23.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-3335-b1.001.patch


 Even after encountering an OP_INVALID, we should check the end of the edit 
 log to make sure that it contains no more edits.
 This will catch things like rare race conditions or log corruptions that 
 would otherwise remain undetected.  They will go from being silent data loss 
 scenarios to being cases that we can detect and fix.
 Using recovery mode, we can choose to ignore the end of the log if necessary.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira