[jira] [Created] (HDFS-3161) 20 Append: Excluded DN replica from recovery should be removed from DN.

2012-03-29 Thread suja s (Created) (JIRA)
20 Append: Excluded DN replica from recovery should be removed from DN.
---

 Key: HDFS-3161
 URL: https://issues.apache.org/jira/browse/HDFS-3161
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: suja s
Priority: Critical
 Fix For: 1.0.3


1) DN1-DN2-DN3 are in the pipeline.
2) The client is killed abruptly.
3) One DN restarts, say DN3.
4) In DN3, info.wasRecoveredOnStartup() will be true.
5) NN recovery is triggered; DN3 is skipped from recovery due to the above check.
6) Now DN1 and DN2 have blocks with generation stamp 2, while DN3 has an older 
generation stamp, say 1, and DN3 still has this block entry in ongoingCreates.
7) As part of recovery the file is closed, having got only two live replicas (from 
DN1 and DN2).
8) So the NN issues the replication command, and now DN3 also has the replica with 
the newer generation stamp.
9) Now DN3 contains two replicas on disk, plus one entry in ongoingCreates 
referring to the blocksBeingWritten directory.

When we call append/leaseRecovery, it may again skip this node for that 
recovery, as the blockId entry is still present in ongoingCreates with startup 
recovery set to true.
It may keep repeating this dance for every recovery.
And this stale replica will not be cleaned until we restart the cluster; the 
actual replica will be transferred to this node only through the replication process.

Also, those replicated blocks will unnecessarily get invalidated after subsequent 
recoveries.
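The skip loop described in the steps above can be sketched as a small standalone simulation. All class, field, and method names below are hypothetical stand-ins for the 0.20-append DataNode's ongoingCreates bookkeeping, not the actual Hadoop code:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal, hypothetical model of the DataNode state described above: a stale
// ongoingCreates entry marked wasRecoveredOnStartup causes the node to be
// excluded from every subsequent recovery attempt.
public class StaleReplicaDemo {

    // Per-block state a DN keeps for a block being written.
    static class ActiveFile {
        final boolean wasRecoveredOnStartup;
        ActiveFile(boolean recovered) { this.wasRecoveredOnStartup = recovered; }
    }

    // Models the DN's ongoingCreates map (blockId -> in-progress write info).
    static final Map<Long, ActiveFile> ongoingCreates = new HashMap<>();

    // A DN takes part in recovery only if it has no startup-recovered entry
    // for the block; this mirrors the check in step 5.
    static boolean participatesInRecovery(long blockId) {
        ActiveFile info = ongoingCreates.get(blockId);
        return info == null || !info.wasRecoveredOnStartup;
    }

    public static void main(String[] args) {
        long blockId = 1332906029734L;
        // DN3 restarted mid-write: entry was recovered on startup (steps 3-4).
        ongoingCreates.put(blockId, new ActiveFile(true));

        // Every recovery round skips DN3, because nothing ever removes the
        // entry -- the "dance" repeats until the cluster is restarted.
        for (int round = 1; round <= 3; round++) {
            System.out.println("recovery round " + round
                + ": DN3 participates = " + participatesInRecovery(blockId));
        }
    }
}
```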


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3162) BlockMap's corruptNodes count and CorruptReplicas map count is not matching.

2012-03-29 Thread suja s (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241089#comment-13241089
 ] 

suja s commented on HDFS-3162:
--

Here are the grepped logs for the block:

11:35:02,926 INFO  hdfs.StateChange (FSNamesystem.java:addStoredBlock(3912)) - 
BLOCK* NameSystem.addStoredBlock: blockMap updated: xx.xx.xx.55:50086 is added 
to blk_1332906029734_1719 size 30
11:37:00,818 INFO  hdfs.StateChange (FSNamesystem.java:processReport(3773)) - 
BLOCK* NameSystem.processReport: block blk_1332906029734_1490 on 
xx.xx.xx.55:50086 size 30 does not belong to any file.
11:37:00,818 INFO  hdfs.StateChange (FSNamesystem.java:addToInvalidates(2002)) 
- BLOCK* NameSystem.addToInvalidates: blk_1332906029734 to xx.xx.xx.55:50086
11:37:02,777 INFO  hdfs.StateChange 
(FSNamesystem.java:invalidateWorkForOneNode(3459)) - BLOCK* ask 
xx.xx.xx.55:50086 to delete  blk_1332906029758_1514 blk_1332906029734_1490 
blk_1332906029745_1501 blk_1332906029703_1459 blk_1332906029746_1502 
blk_1332906029704_1460 blk_1332906029693_1449 blk_1332906029761_1517
12:36:59,865 INFO  hdfs.StateChange 
(FSNamesystem.java:computeReplicationWorkForBlock(3321)) - BLOCK* ask 
xx.xx.xx.102:50086 to replicate blk_1332906029734_1719 to datanode(s) 
xx.xx.xx.102:50010
12:37:01,416 INFO  hdfs.StateChange (FSNamesystem.java:addStoredBlock(3912)) - 
BLOCK* NameSystem.addStoredBlock: blockMap updated: xx.xx.xx.102:50010 is added 
to blk_1332906029734_1719 size 30
14:30:15,537 INFO  hdfs.StateChange 
(CorruptReplicasMap.java:addToCorruptReplicasMap(55)) - BLOCK 
NameSystem.addToCorruptReplicasMap: blk_1332906029734 added as corrupt on 
xx.xx.xx.55:50086 by /xx.xx.xx.55
14:30:15,537 INFO  hdfs.StateChange (FSNamesystem.java:invalidateBlock(2125)) - 
DIR* NameSystem.invalidateBlock: blk_1332906029734_1719 on xx.xx.xx.55:50086
14:30:15,537 INFO  hdfs.StateChange (FSNamesystem.java:addToInvalidates(2002)) 
- BLOCK* NameSystem.addToInvalidates: blk_1332906029734 to xx.xx.xx.55:50086
14:30:18,156 INFO  hdfs.StateChange 
(FSNamesystem.java:invalidateWorkForOneNode(3459)) - BLOCK* ask 
xx.xx.xx.55:50086 to delete  blk_1332906029734_1719
14:38:47,685 WARN  namenode.FSNamesystem 
(FSNamesystem.java:getBlockLocationsInternal(1119)) - Inconsistent number of 
corrupt replicas for blk_1332906029734_1719blockMap has 0 but corrupt replicas 
map has 1
14:44:34,542 WARN  namenode.FSNamesystem 
(FSNamesystem.java:getBlockLocationsInternal(1119)) - Inconsistent number of 
corrupt replicas for blk_1332906029734_1719blockMap has 0 but corrupt replicas 
map has 1
14:44:46,937 WARN  namenode.FSNamesystem 
(FSNamesystem.java:getBlockLocationsInternal(1119)) - Inconsistent number of 
corrupt replicas for blk_1332906029734_1719blockMap has 0 but corrupt replicas 
map has 1
14:45:15,794 WARN  namenode.FSNamesystem 
(FSNamesystem.java:getBlockLocationsInternal(1119)) - Inconsistent number of 
corrupt replicas for blk_1332906029734_1719blockMap has 0 but corrupt replicas 
map has 1
14:45:37,893 WARN  namenode.FSNamesystem 
(FSNamesystem.java:getBlockLocationsInternal(1119)) - Inconsistent number of 
corrupt replicas for blk_1332906029734_1719blockMap has 0 but corrupt replicas 
map has 1
14:53:43,656 WARN  namenode.FSNamesystem 
(FSNamesystem.java:getBlockLocationsInternal(1119)) - Inconsistent number of 
corrupt replicas for blk_1332906029734_1719blockMap has 0 but corrupt replicas 
map has 1
14:57:31,448 WARN  namenode.FSNamesystem 
(FSNamesystem.java:getBlockLocationsInternal(1119)) - Inconsistent number of 
corrupt replicas for blk_1332906029734_1719blockMap has 0 but corrupt replicas 
map has 1
15:17:22,642 WARN  namenode.FSNamesystem 
(FSNamesystem.java:getBlockLocationsInternal(1119)) - Inconsistent number of 
corrupt replicas for blk_1332906029734_1719blockMap has 0 but corrupt replicas 
map has 1
15:21:20,961 WARN  namenode.FSNamesystem 
(FSNamesystem.java:getBlockLocationsInternal(1119)) - Inconsistent number of 
corrupt replicas for blk_1332906029734_1719blockMap has 0 but corrupt replicas 
map has 1
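One plausible reading of the log sequence above, sketched as a toy model (hypothetical names, not the actual BlocksMap/CorruptReplicasMap code): invalidation removes the replica from the block map's node list but leaves the corrupt-replicas map entry behind, so the two counts diverge exactly as the WARN reports:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy model of the two NameNode bookkeeping structures involved in the
// warning: the block map's list of nodes holding a block, and the separate
// map of nodes whose replica of that block is corrupt.
public class CorruptCountDemo {
    static final Map<Long, Set<String>> blockMap = new HashMap<>();
    static final Map<Long, Set<String>> corruptReplicas = new HashMap<>();

    static void addStoredBlock(long blk, String dn) {
        blockMap.computeIfAbsent(blk, k -> new HashSet<>()).add(dn);
    }

    static void markCorrupt(long blk, String dn) {
        corruptReplicas.computeIfAbsent(blk, k -> new HashSet<>()).add(dn);
    }

    // Models the suspected buggy path: invalidateBlock removes the replica
    // from the block map but never clears the corrupt-replicas entry.
    static void invalidateBlock(long blk, String dn) {
        Set<String> nodes = blockMap.get(blk);
        if (nodes != null) nodes.remove(dn);
        // Illustrative bug: corruptReplicas.get(blk).remove(dn) is missing.
    }

    // Corrupt count as seen through the block map (nodes still listed there).
    static int corruptCountInBlockMap(long blk) {
        int n = 0;
        for (String dn : blockMap.getOrDefault(blk, new HashSet<>()))
            if (corruptReplicas.getOrDefault(blk, new HashSet<>()).contains(dn)) n++;
        return n;
    }

    public static void main(String[] args) {
        long blk = 1332906029734L;
        addStoredBlock(blk, "xx.xx.xx.55:50086");
        markCorrupt(blk, "xx.xx.xx.55:50086");
        invalidateBlock(blk, "xx.xx.xx.55:50086");
        System.out.println("blockMap has " + corruptCountInBlockMap(blk)
            + " but corrupt replicas map has " + corruptReplicas.get(blk).size());
    }
}
```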

 BlockMap's corruptNodes count and CorruptReplicas map count is not matching.
 

 Key: HDFS-3162
 URL: https://issues.apache.org/jira/browse/HDFS-3162
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 1.0.0
Reporter: suja s
Priority: Minor
 Fix For: 1.0.3


 Even after invalidating the block, the log below keeps appearing continuously:
  
 Inconsistent number of corrupt replicas for blk_1332906029734_1719blockMap 
 has 0 but corrupt replicas map has 1


[jira] [Created] (HDFS-3162) BlockMap's corruptNodes count and CorruptReplicas map count is not matching.

2012-03-29 Thread suja s (Created) (JIRA)
BlockMap's corruptNodes count and CorruptReplicas map count is not matching.


 Key: HDFS-3162
 URL: https://issues.apache.org/jira/browse/HDFS-3162
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 1.0.0
Reporter: suja s
Priority: Minor
 Fix For: 1.0.3


Even after invalidating the block, the log below keeps appearing continuously:
 
Inconsistent number of corrupt replicas for blk_1332906029734_1719blockMap has 
0 but corrupt replicas map has 1





[jira] [Commented] (HDFS-3161) 20 Append: Excluded DN replica from recovery should be removed from DN.

2012-03-29 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241095#comment-13241095
 ] 

Uma Maheswara Rao G commented on HDFS-3161:
---

Thanks a lot Suja for reporting the issue.


 20 Append: Excluded DN replica from recovery should be removed from DN.
 ---

 Key: HDFS-3161
 URL: https://issues.apache.org/jira/browse/HDFS-3161
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: suja s
Priority: Critical
 Fix For: 1.0.3


 1) DN1-DN2-DN3 are in the pipeline.
 2) The client is killed abruptly.
 3) One DN restarts, say DN3.
 4) In DN3, info.wasRecoveredOnStartup() will be true.
 5) NN recovery is triggered; DN3 is skipped from recovery due to the above check.
 6) Now DN1 and DN2 have blocks with generation stamp 2, while DN3 has an older 
 generation stamp, say 1, and DN3 still has this block entry in 
 ongoingCreates.
 7) As part of recovery the file is closed, having got only two live replicas (from 
 DN1 and DN2).
 8) So the NN issues the replication command, and now DN3 also has the replica 
 with the newer generation stamp.
 9) Now DN3 contains two replicas on disk, plus one entry in ongoingCreates 
 referring to the blocksBeingWritten directory.
 When we call append/leaseRecovery, it may again skip this node for that 
 recovery, as the blockId entry is still present in ongoingCreates with startup 
 recovery set to true.
 It may keep repeating this dance for every recovery.
 And this stale replica will not be cleaned until we restart the cluster; the 
 actual replica will be transferred to this node only through the replication 
 process.
 Also, those replicated blocks will unnecessarily get invalidated after 
 subsequent recoveries.





[jira] [Updated] (HDFS-3119) Overreplicated block is not deleted even after the replication factor is reduced after sync follwed by closing that file

2012-03-29 Thread Ashish Singhi (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HDFS-3119:


Attachment: HDFS-3119.patch

 Overreplicated block is not deleted even after the replication factor is 
 reduced after sync follwed by closing that file
 

 Key: HDFS-3119
 URL: https://issues.apache.org/jira/browse/HDFS-3119
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.24.0
Reporter: J.Andreina
Assignee: Brandon Li
Priority: Minor
 Fix For: 0.24.0, 0.23.2

 Attachments: HDFS-3119.patch


 cluster setup:
 --
 1 NN, 2 DN, replication factor 2, block report interval 3 sec, block size 256 MB
 step1: write a file filewrite.txt of size 90 bytes with sync (not closed)
 step2: change the replication factor to 1 using the command: ./hdfs dfs 
 -setrep 1 /filewrite.txt
 step3: close the file
 * At the NN side the log "Decreasing replication from 2 to 1 for 
 /filewrite.txt" has occurred, but the over-replicated blocks are not 
 deleted even after the block report is sent from the DN
 * While listing the file in the console using ./hdfs dfs -ls, the 
 replication factor for that file is shown as 1
 * The fsck report for that file displays that the file is replicated to 2 
 datanodes
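The expected behavior in the steps above can be condensed into a small simulation (hypothetical names; the real excess-replica handling lives in the NameNode's replication logic): once the file is closed with replication lowered to 1, processing a block report should leave one excess replica queued for deletion:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the over-replication check the report describes: after
// -setrep lowers the expected replication to 1, a block report for a block
// with 2 live replicas should schedule the extra replica for invalidation.
public class ExcessReplicaDemo {
    // Keep the first `expected` replicas, return the rest as excess.
    static List<String> chooseExcess(List<String> replicas, int expected) {
        List<String> excess = new ArrayList<>();
        for (int i = expected; i < replicas.size(); i++) {
            excess.add(replicas.get(i));
        }
        return excess;
    }

    public static void main(String[] args) {
        List<String> replicas = List.of("DN1:50010", "DN2:50010"); // 2 on disk
        int expected = 1;                                          // after -setrep 1
        List<String> toInvalidate =
            chooseExcess(new ArrayList<>(replicas), expected);
        // The bug is that this invalidation never happens for the sync'ed
        // file; the expected outcome is one replica scheduled for deletion.
        System.out.println("excess replicas to delete: " + toInvalidate);
    }
}
```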





[jira] [Commented] (HDFS-3161) 20 Append: Excluded DN replica from recovery should be removed from DN.

2012-03-29 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241105#comment-13241105
 ] 

Uma Maheswara Rao G commented on HDFS-3161:
---

Here DN3 was actually removed from the pipeline during recovery. But 
unfortunately the replication block again went to DN3, so the problem starts 
because the entry is present in ongoingCreates. If the replication block goes 
to another node, I think the problem may not occur.

One quick thought: 
 can't we remove the onGoingCreates entry on the TRANSFER_BLOCK command? 
Replication being triggered means the block must have been completed, so 
ideally there should not be any entry in onGoingCreates. The only remaining 
problem is that the stale block may not be cleaned until we restart the 
cluster, but that won't create any problem for further pipelines.

Please suggest if you have a better solution.
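The suggestion above could be sketched roughly as follows (hypothetical class and method names, standing in for the DN's handling of a replication/transfer-block command):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the proposed fix: when the DN receives a
// transfer-block (replication) command for a block, any leftover
// ongoingCreates entry must be stale -- replication is only triggered for
// completed blocks -- so it can be dropped before serving the transfer.
public class TransferBlockFix {
    static final Map<Long, Object> ongoingCreates = new HashMap<>();

    static void onTransferBlockCommand(long blockId) {
        // A completed block cannot still be "being written"; remove the stale
        // entry so later append/lease recovery no longer skips this node.
        Object stale = ongoingCreates.remove(blockId);
        if (stale != null) {
            System.out.println(
                "dropped stale ongoingCreates entry for blk_" + blockId);
        }
        // ... then proceed with the normal block transfer ...
    }

    public static void main(String[] args) {
        long blk = 1332906029734L;
        ongoingCreates.put(blk, new Object()); // stale startup-recovery entry
        onTransferBlockCommand(blk);           // replication command arrives
        System.out.println("entry present after fix: "
            + ongoingCreates.containsKey(blk));
    }
}
```

As noted above, this only prevents future recoveries from skipping the node; the stale on-disk replica would still linger until a restart.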

 20 Append: Excluded DN replica from recovery should be removed from DN.
 ---

 Key: HDFS-3161
 URL: https://issues.apache.org/jira/browse/HDFS-3161
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: suja s
Priority: Critical
 Fix For: 1.0.3


 1) DN1-DN2-DN3 are in the pipeline.
 2) The client is killed abruptly.
 3) One DN restarts, say DN3.
 4) In DN3, info.wasRecoveredOnStartup() will be true.
 5) NN recovery is triggered; DN3 is skipped from recovery due to the above check.
 6) Now DN1 and DN2 have blocks with generation stamp 2, while DN3 has an older 
 generation stamp, say 1, and DN3 still has this block entry in 
 ongoingCreates.
 7) As part of recovery the file is closed, having got only two live replicas (from 
 DN1 and DN2).
 8) So the NN issues the replication command, and now DN3 also has the replica 
 with the newer generation stamp.
 9) Now DN3 contains two replicas on disk, plus one entry in ongoingCreates 
 referring to the blocksBeingWritten directory.
 When we call append/leaseRecovery, it may again skip this node for that 
 recovery, as the blockId entry is still present in ongoingCreates with startup 
 recovery set to true.
 It may keep repeating this dance for every recovery.
 And this stale replica will not be cleaned until we restart the cluster; the 
 actual replica will be transferred to this node only through the replication 
 process.
 Also, those replicated blocks will unnecessarily get invalidated after 
 subsequent recoveries.





[jira] [Assigned] (HDFS-3162) BlockMap's corruptNodes count and CorruptReplicas map count is not matching.

2012-03-29 Thread Uma Maheswara Rao G (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G reassigned HDFS-3162:
-

Assignee: Uma Maheswara Rao G

 BlockMap's corruptNodes count and CorruptReplicas map count is not matching.
 

 Key: HDFS-3162
 URL: https://issues.apache.org/jira/browse/HDFS-3162
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 1.0.0
Reporter: suja s
Assignee: Uma Maheswara Rao G
Priority: Minor
 Fix For: 1.0.3


 Even after invalidating the block, the log below keeps appearing continuously:
  
 Inconsistent number of corrupt replicas for blk_1332906029734_1719blockMap 
 has 0 but corrupt replicas map has 1





[jira] [Commented] (HDFS-3162) BlockMap's corruptNodes count and CorruptReplicas map count is not matching.

2012-03-29 Thread Harsh J (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241119#comment-13241119
 ] 

Harsh J commented on HDFS-3162:
---

Do you use append on 1.0? That could cause such a thing. Refrain from using 
append() on the 1.0/0.20-append releases, as it is buggy in some ways.

Thanks for opening this, though. We can investigate (but the problem may 
already be fixed in 0.23+ HDFS).

 BlockMap's corruptNodes count and CorruptReplicas map count is not matching.
 

 Key: HDFS-3162
 URL: https://issues.apache.org/jira/browse/HDFS-3162
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 1.0.0
Reporter: suja s
Assignee: Uma Maheswara Rao G
Priority: Minor
 Fix For: 1.0.3


 Even after invalidating the block, the log below keeps appearing continuously:
  
 Inconsistent number of corrupt replicas for blk_1332906029734_1719blockMap 
 has 0 but corrupt replicas map has 1





[jira] [Commented] (HDFS-3119) Overreplicated block is not deleted even after the replication factor is reduced after sync follwed by closing that file

2012-03-29 Thread Ashish Singhi (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241131#comment-13241131
 ] 

Ashish Singhi commented on HDFS-3119:
-

Hi Brandon,
Can you assign this issue to me? I have been working on this for many days.

 Overreplicated block is not deleted even after the replication factor is 
 reduced after sync follwed by closing that file
 

 Key: HDFS-3119
 URL: https://issues.apache.org/jira/browse/HDFS-3119
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.24.0
Reporter: J.Andreina
Assignee: Brandon Li
Priority: Minor
 Fix For: 0.24.0, 0.23.2

 Attachments: HDFS-3119.patch


 cluster setup:
 --
 1 NN, 2 DN, replication factor 2, block report interval 3 sec, block size 256 MB
 step1: write a file filewrite.txt of size 90 bytes with sync (not closed)
 step2: change the replication factor to 1 using the command: ./hdfs dfs 
 -setrep 1 /filewrite.txt
 step3: close the file
 * At the NN side the log "Decreasing replication from 2 to 1 for 
 /filewrite.txt" has occurred, but the over-replicated blocks are not 
 deleted even after the block report is sent from the DN
 * While listing the file in the console using ./hdfs dfs -ls, the 
 replication factor for that file is shown as 1
 * The fsck report for that file displays that the file is replicated to 2 
 datanodes





[jira] [Commented] (HDFS-3157) Error in deleting block is keep on coming from DN even after the block report and directory scanning has happened

2012-03-29 Thread Ashish Singhi (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241145#comment-13241145
 ] 

Ashish Singhi commented on HDFS-3157:
-

After the block is deleted, the pipeline will update the gen stamp of the 
block, say from blk_blockId_1002 to blk_blockId_1003.
Then DN1 will mark the block with the old gen stamp as corrupt.
In BlockManager#processReportedBlock(), storedBlock will get assigned to 
blk_blockId_1003, as the blockMap is now updated with the new gen stamp for 
this blockId, and then it will ask DN1 to delete blk_blockId_1003.
As DN1's volumeMap does not contain blk_blockId_1003, it will throw an 
exception.
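The sequence just described can be condensed into a sketch (hypothetical names; the actual code paths are BlockManager#processReportedBlock and FSDataset#invalidate): the NN keys its delete command on the stored block with the new gen stamp, while the DN's volumeMap only knows the old one:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model: the NN resolves the reported replica to the stored block (new
// genstamp 1003) and asks DN1 to delete it, but DN1's volumeMap still maps
// the blockId to the old genstamp 1002, so the delete cannot find anything.
public class GenStampMismatchDemo {
    // DN1's volumeMap: "blk_<id>_<genstamp>" -> replica info.
    static final Map<String, String> volumeMap = new HashMap<>();

    static void invalidate(String blockWithGs) {
        if (volumeMap.remove(blockWithGs) == null) {
            // Mirrors the reported WARN: "BlockInfo not found in volumeMap."
            throw new IllegalStateException(
                "Unexpected error trying to delete block " + blockWithGs
                + ". BlockInfo not found in volumeMap.");
        }
    }

    public static void main(String[] args) {
        volumeMap.put("blk_2903555284838653156_1002", "rbw replica on DN1");
        try {
            // NN keyed the command on the COMPLETE block's genstamp (1003).
            invalidate("blk_2903555284838653156_1003");
        } catch (IllegalStateException e) {
            System.out.println("WARN " + e.getMessage());
        }
    }
}
```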

 Error in deleting block is keep on coming from DN even after the block report 
 and directory scanning has happened
 -

 Key: HDFS-3157
 URL: https://issues.apache.org/jira/browse/HDFS-3157
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.23.0, 0.24.0
Reporter: J.Andreina
 Fix For: 0.24.0


 Cluster setup:
 1 NN, three DN (DN1, DN2, DN3), replication factor 2, dfs.blockreport.intervalMsec 
 300, dfs.datanode.directoryscan.interval 1
 step 1: write one file a.txt with sync (not closed)
 step 2: delete the blocks in one of the datanodes, say DN1 (from rbw), to which 
 replication happened.
 step 3: close the file.
 Since the replication factor is 2, the blocks are replicated to the other 
 datanode.
 Then at the NN side the following cmd is issued to the DN from which the block 
 was deleted
 -
 {noformat}
 2012-03-19 13:41:36,905 INFO org.apache.hadoop.hdfs.StateChange: BLOCK 
 NameSystem.addToCorruptReplicasMap: duplicate requested for 
 blk_2903555284838653156 to add as corrupt on XX.XX.XX.XX by /XX.XX.XX.XX 
 because reported RBW replica with genstamp 1002 does not match COMPLETE 
 block's genstamp in block map 1003
 2012-03-19 13:41:39,588 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
 Removing block blk_2903555284838653156_1003 from neededReplications as it has 
 enough replicas.
 {noformat}
 On the datanode from which the block was deleted, the following exception 
 occurred
 {noformat}
 2012-02-29 13:54:13,126 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
 Unexpected error trying to delete block blk_2903555284838653156_1003. 
 BlockInfo not found in volumeMap.
 2012-02-29 13:54:13,126 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
 Error processing datanode Command
 java.io.IOException: Error in deleting blocks.
   at 
 org.apache.hadoop.hdfs.server.datanode.FSDataset.invalidate(FSDataset.java:2061)
   at 
 org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActive(BPOfferService.java:581)
   at 
 org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActor(BPOfferService.java:545)
   at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.processCommand(BPServiceActor.java:690)
   at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:522)
   at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:662)
   at java.lang.Thread.run(Thread.java:619)
 {noformat}





[jira] [Commented] (HDFS-3119) Overreplicated block is not deleted even after the replication factor is reduced after sync follwed by closing that file

2012-03-29 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241148#comment-13241148
 ] 

Uma Maheswara Rao G commented on HDFS-3119:
---

Brandon and Ashish, thanks for your interest in this issue.

@Ashish,
 It looks like the code formatting of your patch is wrong. Could you please fix 
the formatting in the next version of your patch?
 Also, please take a look at http://wiki.apache.org/hadoop/HowToContribute.

 Overreplicated block is not deleted even after the replication factor is 
 reduced after sync follwed by closing that file
 

 Key: HDFS-3119
 URL: https://issues.apache.org/jira/browse/HDFS-3119
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.24.0
Reporter: J.Andreina
Assignee: Brandon Li
Priority: Minor
 Fix For: 0.24.0, 0.23.2

 Attachments: HDFS-3119.patch


 cluster setup:
 --
 1 NN, 2 DN, replication factor 2, block report interval 3 sec, block size 256 MB
 step1: write a file filewrite.txt of size 90 bytes with sync (not closed)
 step2: change the replication factor to 1 using the command: ./hdfs dfs 
 -setrep 1 /filewrite.txt
 step3: close the file
 * At the NN side the log "Decreasing replication from 2 to 1 for 
 /filewrite.txt" has occurred, but the over-replicated blocks are not 
 deleted even after the block report is sent from the DN
 * While listing the file in the console using ./hdfs dfs -ls, the 
 replication factor for that file is shown as 1
 * The fsck report for that file displays that the file is replicated to 2 
 datanodes





[jira] [Commented] (HDFS-3158) LiveNodes member of NameNodeMXBean should list non-DFS used space and capacity per DN

2012-03-29 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241215#comment-13241215
 ] 

Hudson commented on HDFS-3158:
--

Integrated in Hadoop-Mapreduce-trunk #1034 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1034/])
HDFS-3158. LiveNodes member of NameNodeMXBean should list non-DFS used 
space and capacity per DN. Contributed by Aaron T. Myers. (Revision 1306635)

 Result = FAILURE
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1306635
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeMXBean.java


 LiveNodes member of NameNodeMXBean should list non-DFS used space and 
 capacity per DN
 -

 Key: HDFS-3158
 URL: https://issues.apache.org/jira/browse/HDFS-3158
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 2.0.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Fix For: 2.0.0

 Attachments: HDFS-3158.patch


 The LiveNodes section already lists the DFS used space per DN. It would be 
 nice if it also listed the non-DFS used space and the capacity per DN.
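The shape of the improvement can be sketched with a toy per-DN entry (field names here are illustrative stand-ins, not the exact keys of the NameNodeMXBean LiveNodes JSON):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of a LiveNodes entry after the change: alongside the DFS used
// space already reported, each DN also lists non-DFS used space and
// total capacity.
public class LiveNodesDemo {
    static Map<String, Object> liveNodeEntry(long capacity, long dfsUsed,
                                             long nonDfsUsed) {
        Map<String, Object> entry = new LinkedHashMap<>();
        entry.put("usedSpace", dfsUsed);          // previously the only usage figure
        entry.put("nonDfsUsedSpace", nonDfsUsed); // added per this issue
        entry.put("capacity", capacity);          // added per this issue
        return entry;
    }

    public static void main(String[] args) {
        // One DN with 1 MB capacity, 250 KB DFS used, 100 KB non-DFS used.
        System.out.println(liveNodeEntry(1_000_000L, 250_000L, 100_000L));
    }
}
```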





[jira] [Commented] (HDFS-3155) Clean up FSDataset implemenation related code.

2012-03-29 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241218#comment-13241218
 ] 

Hudson commented on HDFS-3155:
--

Integrated in Hadoop-Mapreduce-trunk #1034 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1034/])
HDFS-3155. Clean up FSDataset implemenation related code. (Revision 1306582)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1306582
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DatanodeUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FSDataset.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaUnderRecovery.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery2.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPipelines.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeAdapter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockReport.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/HAStressTestHarness.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/HATestUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencing.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestPipelinesFailover.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestStandbyIsHot.java


 Clean up FSDataset implemenation related code.
 --

 Key: HDFS-3155
 URL: https://issues.apache.org/jira/browse/HDFS-3155
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: data-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Fix For: 2.0.0

 Attachments: h3155_20120327.patch








[jira] [Commented] (HDFS-3160) httpfs should exec catalina instead of forking it

2012-03-29 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241222#comment-13241222
 ] 

Hudson commented on HDFS-3160:
--

Integrated in Hadoop-Mapreduce-trunk #1034 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1034/])
HDFS-3160. httpfs should exec catalina instead of forking it. Contributed 
by Roman Shaposhnik (Revision 1306665)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1306665
Files : 
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/sbin/httpfs.sh
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 httpfs should exec catalina instead of forking it
 -

 Key: HDFS-3160
 URL: https://issues.apache.org/jira/browse/HDFS-3160
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.23.1
Reporter: Roman Shaposhnik
Assignee: Roman Shaposhnik
 Fix For: 2.0.0

 Attachments: HDFS-3160.patch.txt


 In Bigtop we would like to start supporting constant monitoring of the 
 running daemons (BIGTOP-263). It would be nice if Oozie could support that 
 requirement by exec'ing Catalina instead of forking it off. Currently we have 
 to track down the actual process being monitored through the script that 
 still hangs around.





[jira] [Commented] (HDFS-3156) TestDFSHAAdmin is failing post HADOOP-8202

2012-03-29 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241219#comment-13241219
 ] 

Hudson commented on HDFS-3156:
--

Integrated in Hadoop-Mapreduce-trunk #1034 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1034/])
HDFS-3156. TestDFSHAAdmin is failing post HADOOP-8202. Contributed by Aaron 
T. Myers. (Revision 1306517)

 Result = FAILURE
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1306517
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdmin.java


 TestDFSHAAdmin is failing post HADOOP-8202
 --

 Key: HDFS-3156
 URL: https://issues.apache.org/jira/browse/HDFS-3156
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Fix For: 2.0.0

 Attachments: HDFS-3156.patch


 TestDFSHAAdmin mocks a protocol object without implementing Closeable, which 
 is now required.
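The fix described here amounts to giving the test double one extra interface. Below is a self-contained, stdlib-only sketch using java.lang.reflect.Proxy; the interface name and method are hypothetical, and in a Mockito-based test the same effect can be had with withSettings().extraInterfaces(Closeable.class).

```java
import java.io.Closeable;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Hypothetical stand-in for the RPC protocol interface being mocked.
interface HAProtocol {
    String getServiceState();
}

public class CloseableMock {
    // Build a test double that also implements Closeable, so code that
    // checks "proxy instanceof Closeable" (as stopping an RPC proxy now
    // requires) is satisfied.
    public static HAProtocol newMock() {
        InvocationHandler h = (proxy, method, args) -> {
            if (method.getName().equals("getServiceState")) {
                return "active";   // canned answer for the protocol call
            }
            return null;           // close() and other methods: no-op
        };
        return (HAProtocol) Proxy.newProxyInstance(
                CloseableMock.class.getClassLoader(),
                new Class<?>[] { HAProtocol.class, Closeable.class },
                h);
    }

    public static void main(String[] args) {
        HAProtocol mock = newMock();
        System.out.println(mock instanceof Closeable);  // true
        System.out.println(mock.getServiceState());     // active
    }
}
```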





[jira] [Commented] (HDFS-3143) TestGetBlocks.testGetBlocks is failing

2012-03-29 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241223#comment-13241223
 ] 

Hudson commented on HDFS-3143:
--

Integrated in Hadoop-Mapreduce-trunk #1034 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1034/])
HDFS-3143. TestGetBlocks.testGetBlocks is failing. Contributed by Arpit 
Gupta. (Revision 1306542)

 Result = FAILURE
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1306542
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestGetBlocks.java


 TestGetBlocks.testGetBlocks is failing
 --

 Key: HDFS-3143
 URL: https://issues.apache.org/jira/browse/HDFS-3143
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0
Reporter: Eli Collins
Assignee: Arpit Gupta
 Fix For: 2.0.0

 Attachments: HDFS-3143.patch


 TestGetBlocks.testGetBlocks is failing in the latest trunk/23 builds. Last 
 good build was Mar 23rd.





[jira] [Commented] (HDFS-3155) Clean up FSDataset implemenation related code.

2012-03-29 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241240#comment-13241240
 ] 

Hudson commented on HDFS-3155:
--

Integrated in Hadoop-Hdfs-trunk #999 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/999/])
HDFS-3155. Clean up FSDataset implementation related code. (Revision 1306582)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1306582
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DatanodeUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FSDataset.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaUnderRecovery.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery2.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPipelines.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeAdapter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockReport.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/HAStressTestHarness.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/HATestUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencing.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestPipelinesFailover.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestStandbyIsHot.java


 Clean up FSDataset implementation related code.
 --

 Key: HDFS-3155
 URL: https://issues.apache.org/jira/browse/HDFS-3155
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: data-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Fix For: 2.0.0

 Attachments: h3155_20120327.patch








[jira] [Commented] (HDFS-3158) LiveNodes member of NameNodeMXBean should list non-DFS used space and capacity per DN

2012-03-29 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241237#comment-13241237
 ] 

Hudson commented on HDFS-3158:
--

Integrated in Hadoop-Hdfs-trunk #999 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/999/])
HDFS-3158. LiveNodes member of NameNodeMXBean should list non-DFS used 
space and capacity per DN. Contributed by Aaron T. Myers. (Revision 1306635)

 Result = FAILURE
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1306635
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeMXBean.java


 LiveNodes member of NameNodeMXBean should list non-DFS used space and 
 capacity per DN
 -

 Key: HDFS-3158
 URL: https://issues.apache.org/jira/browse/HDFS-3158
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 2.0.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Fix For: 2.0.0

 Attachments: HDFS-3158.patch


 The LiveNodes section already lists the DFS used space per DN. It would be 
 nice if it also listed the non-DFS used space and the capacity per DN.





[jira] [Commented] (HDFS-3156) TestDFSHAAdmin is failing post HADOOP-8202

2012-03-29 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241241#comment-13241241
 ] 

Hudson commented on HDFS-3156:
--

Integrated in Hadoop-Hdfs-trunk #999 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/999/])
HDFS-3156. TestDFSHAAdmin is failing post HADOOP-8202. Contributed by Aaron 
T. Myers. (Revision 1306517)

 Result = FAILURE
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1306517
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdmin.java


 TestDFSHAAdmin is failing post HADOOP-8202
 --

 Key: HDFS-3156
 URL: https://issues.apache.org/jira/browse/HDFS-3156
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Fix For: 2.0.0

 Attachments: HDFS-3156.patch


 TestDFSHAAdmin mocks a protocol object without implementing Closeable, which 
 is now required.





[jira] [Commented] (HDFS-3143) TestGetBlocks.testGetBlocks is failing

2012-03-29 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241245#comment-13241245
 ] 

Hudson commented on HDFS-3143:
--

Integrated in Hadoop-Hdfs-trunk #999 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/999/])
HDFS-3143. TestGetBlocks.testGetBlocks is failing. Contributed by Arpit 
Gupta. (Revision 1306542)

 Result = FAILURE
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1306542
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestGetBlocks.java


 TestGetBlocks.testGetBlocks is failing
 --

 Key: HDFS-3143
 URL: https://issues.apache.org/jira/browse/HDFS-3143
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0
Reporter: Eli Collins
Assignee: Arpit Gupta
 Fix For: 2.0.0

 Attachments: HDFS-3143.patch


 TestGetBlocks.testGetBlocks is failing in the latest trunk/23 builds. Last 
 good build was Mar 23rd.





[jira] [Commented] (HDFS-3139) Minor Datanode logging improvement

2012-03-29 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241242#comment-13241242
 ] 

Hudson commented on HDFS-3139:
--

Integrated in Hadoop-Hdfs-trunk #999 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/999/])
HDFS-3139. Minor Datanode logging improvement. Contributed by Eli Collins 
(Revision 1306549)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1306549
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeID.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/SecureDataNodeStarter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSAddressConfig.java


 Minor Datanode logging improvement
 --

 Key: HDFS-3139
 URL: https://issues.apache.org/jira/browse/HDFS-3139
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor
 Fix For: 2.0.0

 Attachments: hdfs-3139.txt, hdfs-3139.txt


 - DatanodeInfo#getDatanodeReport should log its hostname, in addition to the 
 DNS lookup it does on its IP
 - Datanode should log the ipc/info/streaming servers it's listening on at 
 startup, at INFO level





[jira] [Commented] (HDFS-3160) httpfs should exec catalina instead of forking it

2012-03-29 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241243#comment-13241243
 ] 

Hudson commented on HDFS-3160:
--

Integrated in Hadoop-Hdfs-trunk #999 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/999/])
HDFS-3160. httpfs should exec catalina instead of forking it. Contributed 
by Roman Shaposhnik (Revision 1306665)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1306665
Files : 
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/sbin/httpfs.sh
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 httpfs should exec catalina instead of forking it
 -

 Key: HDFS-3160
 URL: https://issues.apache.org/jira/browse/HDFS-3160
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.23.1
Reporter: Roman Shaposhnik
Assignee: Roman Shaposhnik
 Fix For: 2.0.0

 Attachments: HDFS-3160.patch.txt


 In Bigtop we would like to start supporting constant monitoring of the 
 running daemons (BIGTOP-263). It would be nice if HttpFS could support that 
 requirement by exec'ing Catalina instead of forking it off. Currently we have 
 to track down the actual process being monitored through the wrapper script 
 that still hangs around.





[jira] [Updated] (HDFS-3121) hdfs tests for HADOOP-8014

2012-03-29 Thread John George (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John George updated HDFS-3121:
--

Status: Open  (was: Patch Available)

 hdfs tests for HADOOP-8014
 --

 Key: HDFS-3121
 URL: https://issues.apache.org/jira/browse/HDFS-3121
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.23.2, 0.23.3
Reporter: John George
Assignee: John George
 Attachments: hdfs-3121.patch, hdfs-3121.patch, hdfs-3121.patch


 This JIRA is to write tests for viewing quota using viewfs.






[jira] [Updated] (HDFS-3137) Bump LAST_UPGRADABLE_LAYOUT_VERSION

2012-03-29 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3137:
--

Attachment: hdfs-3137.txt

@Suresh, np, attaching the same patch but w/o removing the (now dead) field 
from LayoutVersion.

This patch is blocking HDFS-3138 and friends, so I'd like to get this in soon.

 Bump LAST_UPGRADABLE_LAYOUT_VERSION
 ---

 Key: HDFS-3137
 URL: https://issues.apache.org/jira/browse/HDFS-3137
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Eli Collins
Assignee: Eli Collins
 Attachments: hdfs-3137.txt, hdfs-3137.txt


 LAST_UPGRADABLE_LAYOUT_VERSION is currently -7, which corresponds to Hadoop 
 0.14. How about we bump it to -16, which corresponds to Hadoop 0.18?
 I don't think many people are using releases older than v0.18, and those who 
 are probably want to upgrade to the latest stable release (v1.0). To upgrade 
 to e.g. 0.23 they can still upgrade to v1.0 first and then upgrade again from 
 there.





[jira] [Commented] (HDFS-3137) Bump LAST_UPGRADABLE_LAYOUT_VERSION

2012-03-29 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241370#comment-13241370
 ] 

Hadoop QA commented on HDFS-3137:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12520438/hdfs-3137.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 12 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests:
  org.apache.hadoop.cli.TestHDFSCLI

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2117//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2117//console

This message is automatically generated.

 Bump LAST_UPGRADABLE_LAYOUT_VERSION
 ---

 Key: HDFS-3137
 URL: https://issues.apache.org/jira/browse/HDFS-3137
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Eli Collins
Assignee: Eli Collins
 Attachments: hdfs-3137.txt, hdfs-3137.txt


 LAST_UPGRADABLE_LAYOUT_VERSION is currently -7, which corresponds to Hadoop 
 0.14. How about we bump it to -16, which corresponds to Hadoop 0.18?
 I don't think many people are using releases older than v0.18, and those who 
 are probably want to upgrade to the latest stable release (v1.0). To upgrade 
 to e.g. 0.23 they can still upgrade to v1.0 first and then upgrade again from 
 there.





[jira] [Updated] (HDFS-2800) HA: TestStandbyCheckpoints.testCheckpointCancellation is racy

2012-03-29 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-2800:
--

 Target Version/s: 2.0.0  (was: 0.24.0)
Affects Version/s: 2.0.0  (was: 0.24.0)

 HA: TestStandbyCheckpoints.testCheckpointCancellation is racy
 -

 Key: HDFS-2800
 URL: https://issues.apache.org/jira/browse/HDFS-2800
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha, test
Affects Versions: 2.0.0
Reporter: Aaron T. Myers
Assignee: Todd Lipcon

 TestStandbyCheckpoints.testCheckpointCancellation is racy; we have seen the 
 following assert on line 212 fail:
 {code}
 assertTrue(StandbyCheckpointer.getCanceledCount() > 0);
 {code}
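A common remedy for this kind of race is to poll the condition with a deadline rather than asserting it once; Hadoop's GenericTestUtils.waitFor plays a similar role. A self-contained sketch of the idea, with illustrative names:

```java
import java.util.function.BooleanSupplier;

public class WaitFor {
    // Poll `check` every `intervalMs` until it is true or `timeoutMs` elapses.
    // Returns the final state of the condition instead of failing on the
    // first (possibly too-early) evaluation.
    public static boolean waitFor(BooleanSupplier check, long intervalMs,
                                  long timeoutMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (check.getAsBoolean()) return true;  // condition became true
            Thread.sleep(intervalMs);               // back off and retry
        }
        return check.getAsBoolean();                // final check at timeout
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Condition that becomes true after ~50 ms, standing in for the
        // canceled-checkpoint counter eventually exceeding zero.
        boolean ok = waitFor(() -> System.currentTimeMillis() - start > 50,
                             10, 1000);
        System.out.println(ok);  // true
    }
}
```

The one-shot `assertTrue(... > 0)` would then become a bounded wait on the same condition, removing the dependence on thread scheduling.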





[jira] [Updated] (HDFS-3126) Journal stream from the namenode to backup needs to have a timeout

2012-03-29 Thread Hari Mankude (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Mankude updated HDFS-3126:
---

Affects Version/s: 0.24.0
   Status: Patch Available  (was: Open)

 Journal stream from the namenode to backup needs to have a timeout
 --

 Key: HDFS-3126
 URL: https://issues.apache.org/jira/browse/HDFS-3126
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha, name-node
Affects Versions: 0.24.0
Reporter: Hari Mankude
Assignee: Hari Mankude







[jira] [Updated] (HDFS-3126) Journal stream from the namenode to backup needs to have a timeout

2012-03-29 Thread Hari Mankude (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Mankude updated HDFS-3126:
---

Attachment: hdfs-3126.patch

 Journal stream from the namenode to backup needs to have a timeout
 --

 Key: HDFS-3126
 URL: https://issues.apache.org/jira/browse/HDFS-3126
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha, name-node
Affects Versions: 0.24.0
Reporter: Hari Mankude
Assignee: Hari Mankude
 Attachments: hdfs-3126.patch








[jira] [Updated] (HDFS-3119) Overreplicated block is not deleted even after the replication factor is reduced after sync followed by closing that file

2012-03-29 Thread Tsz Wo (Nicholas), SZE (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-3119:
-

Assignee: Ashish Singhi

 Overreplicated block is not deleted even after the replication factor is 
 reduced after sync followed by closing that file
 

 Key: HDFS-3119
 URL: https://issues.apache.org/jira/browse/HDFS-3119
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.24.0
Reporter: J.Andreina
Assignee: Ashish Singhi
Priority: Minor
 Fix For: 0.24.0, 0.23.2

 Attachments: HDFS-3119.patch


 cluster setup:
 --
 1 NN, 2 DN, replication factor 2, block report interval 3 sec, block size 256 MB
 step 1: write a file filewrite.txt of size 90 bytes with sync (not closed)
 step 2: change the replication factor to 1 using the command: ./hdfs dfs 
 -setrep 1 /filewrite.txt
 step 3: close the file
 * At the NN side the "Decreasing replication from 2 to 1 for 
 /filewrite.txt" log has occurred, but the overreplicated blocks are not 
 deleted even after the block report is sent from the DN
 * While listing the file in the console using ./hdfs dfs -ls, the 
 replication factor for that file is shown as 1
 * The fsck report for that file shows that the file is replicated to 2 
 datanodes





[jira] [Commented] (HDFS-3119) Overreplicated block is not deleted even after the replication factor is reduced after sync followed by closing that file

2012-03-29 Thread Tsz Wo (Nicholas), SZE (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241411#comment-13241411
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-3119:
--

Ashish, thanks for working on this!  Assigned to you.  Please try to add a unit 
test.

 Overreplicated block is not deleted even after the replication factor is 
 reduced after sync followed by closing that file
 

 Key: HDFS-3119
 URL: https://issues.apache.org/jira/browse/HDFS-3119
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.24.0
Reporter: J.Andreina
Assignee: Ashish Singhi
Priority: Minor
 Fix For: 0.24.0, 0.23.2

 Attachments: HDFS-3119.patch


 cluster setup:
 --
 1 NN, 2 DN, replication factor 2, block report interval 3 sec, block size 256 MB
 step 1: write a file filewrite.txt of size 90 bytes with sync (not closed)
 step 2: change the replication factor to 1 using the command: ./hdfs dfs 
 -setrep 1 /filewrite.txt
 step 3: close the file
 * At the NN side the "Decreasing replication from 2 to 1 for 
 /filewrite.txt" log has occurred, but the overreplicated blocks are not 
 deleted even after the block report is sent from the DN
 * While listing the file in the console using ./hdfs dfs -ls, the 
 replication factor for that file is shown as 1
 * The fsck report for that file shows that the file is replicated to 2 
 datanodes





[jira] [Commented] (HDFS-3120) Provide ability to enable sync without append

2012-03-29 Thread Tsz Wo (Nicholas), SZE (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241415#comment-13241415
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-3120:
--

 Let's add a new dfs.support.hsync option ...

There is a small problem: sync in 1.x and hflush in 2.x are different.  They 
are different methods with slightly different semantics.  Do we really need 
this new option?  How about changing dfs.support.append to cover only append, 
with sync/hflush always enabled?

 Provide ability to enable sync without append
 -

 Key: HDFS-3120
 URL: https://issues.apache.org/jira/browse/HDFS-3120
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 1.0.1
Reporter: Eli Collins
Assignee: Eli Collins

 The work on branch-20-append was to support *sync*, for durable HBase WALs, 
 not *append*. The branch-20-append implementation is known to be buggy. 
 There's been confusion about this, we often answer queries on the list [like 
 this|http://search-hadoop.com/m/wfed01VOIJ5]. Unfortunately, the way to 
 enable correct sync on branch-1 for HBase is to set dfs.support.append to 
 true in your config, which has the side effect of enabling append (which we 
 don't want to do).
 Let's add a new *dfs.support.hsync* option that enables working sync (which 
 is basically the current dfs.support.append flag modulo one place where it's 
 not referring to sync). For compatibility, if dfs.support.append is set, 
 dfs.support.sync will be set as well. This way someone can enable sync for 
 HBase and still keep the current behavior that if dfs.support.append is not 
 set then an append operation will result in an IOE indicating append is not 
 supported. We should do this on trunk as well, as there's no reason to 
 conflate hsync and append with a single config even if append works.
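As a sketch, the proposed knob would sit in hdfs-site.xml next to the existing append flag. The property name below follows this thread's proposal and may not match what was ultimately shipped:

```xml
<configuration>
  <!-- Existing flag: enables append (and, today, sync as a side effect). -->
  <property>
    <name>dfs.support.append</name>
    <value>false</value>
  </property>
  <!-- Proposed flag: enable working sync without also enabling append. -->
  <property>
    <name>dfs.support.sync</name>
    <value>true</value>
  </property>
</configuration>
```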





[jira] [Commented] (HDFS-3120) Provide ability to enable sync without append

2012-03-29 Thread Eli Collins (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241422#comment-13241422
 ] 

Eli Collins commented on HDFS-3120:
---

@Nicholas, mid-air collision! What do you think of my last comment?

 Provide ability to enable sync without append
 -

 Key: HDFS-3120
 URL: https://issues.apache.org/jira/browse/HDFS-3120
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 1.0.1
Reporter: Eli Collins
Assignee: Eli Collins

 The work on branch-20-append was to support *sync*, for durable HBase WALs, 
 not *append*. The branch-20-append implementation is known to be buggy. 
 There's been confusion about this, we often answer queries on the list [like 
 this|http://search-hadoop.com/m/wfed01VOIJ5]. Unfortunately, the way to 
 enable correct sync on branch-1 for HBase is to set dfs.support.append to 
 true in your config, which has the side effect of enabling append (which we 
 don't want to do).
 Let's add a new *dfs.support.sync* option that enables working sync (which is 
 basically the current dfs.support.append flag modulo one place where it's not 
 referring to sync). For compatibility, if dfs.support.append is set, 
 dfs.support.sync will be set as well. This way someone can enable sync for 
 HBase and still keep the current behavior that if dfs.support.append is not 
 set then an append operation will result in an IOE indicating append is not 
 supported. We should do this on trunk as well, as there's no reason to 
 conflate hsync and append with a single config even if append works.





[jira] [Updated] (HDFS-3120) Provide ability to enable sync without append

2012-03-29 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3120:
--

 Description: 
The work on branch-20-append was to support *sync*, for durable HBase WALs, not 
*append*. The branch-20-append implementation is known to be buggy. There's 
been confusion about this; we often answer queries on the list [like 
this|http://search-hadoop.com/m/wfed01VOIJ5]. Unfortunately, the way to enable 
correct sync on branch-1 for HBase is to set dfs.support.append to true in your 
config, which has the side effect of enabling append (which we don't want to 
do).

Let's add a new *dfs.support.sync* option that enables working sync (which is 
basically the current dfs.support.append flag modulo one place where it's not 
referring to sync). For compatibility, if dfs.support.append is set, 
dfs.support.sync will be set as well. This way someone can enable sync for 
HBase and still keep the current behavior that if dfs.support.append is not set 
then an append operation will result in an IOE indicating append is not 
supported. We should do this on trunk as well, as there's no reason to conflate 
hsync and append with a single config even if append works.

  was:
The work on branch-20-append was to support *sync*, for durable HBase WALs, not 
*append*. The branch-20-append implementation is known to be buggy. There's 
been confusion about this; we often answer queries on the list [like 
this|http://search-hadoop.com/m/wfed01VOIJ5]. Unfortunately, the way to enable 
correct sync on branch-1 for HBase is to set dfs.support.append to true in your 
config, which has the side effect of enabling append (which we don't want to 
do).

Let's add a new *dfs.support.hsync* option that enables working sync (which is 
basically the current dfs.support.append flag modulo one place where it's not 
referring to sync). For compatibility, if dfs.support.append is set, 
dfs.support.sync will be set as well. This way someone can enable sync for 
HBase and still keep the current behavior that if dfs.support.append is not set 
then an append operation will result in an IOE indicating append is not 
supported. We should do this on trunk as well, as there's no reason to conflate 
hsync and append with a single config even if append works.

Target Version/s: 1.1.0, 2.0.0  (was: 2.0.0, 1.1.0)

For 1.x:
- Add a dfs.support.sync option and enable it by default

For 2.x:
- Make hsync/hflush behavior independent of whether dfs.support.append is 
enabled, i.e. you can turn append off and hsync/hflush still work

Note this does not add a dfs.support.sync option to 2.x. Such an option would 
be useful to people who want to disable both append and sync, e.g. to make a 
couple of paths cheaper. I think we should not add this option unless it's 
requested / needed in the future. Note that this code path is already enabled 
by default, so the change is just that people can currently disable all 
sync/append code paths by disabling append; now they would just be disabling 
the append-specific code paths.

 Provide ability to enable sync without append
 -

 Key: HDFS-3120
 URL: https://issues.apache.org/jira/browse/HDFS-3120
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 1.0.1
Reporter: Eli Collins
Assignee: Eli Collins






[jira] [Commented] (HDFS-3120) Provide ability to enable sync without append

2012-03-29 Thread Eli Collins (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241439#comment-13241439
 ] 

Eli Collins commented on HDFS-3120:
---

@Nicholas, I'm OK always enabling the sync paths on 1.x as well; currently 
they're not enabled by default because append is not enabled by default. 

 Provide ability to enable sync without append
 -

 Key: HDFS-3120
 URL: https://issues.apache.org/jira/browse/HDFS-3120
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 1.0.1
Reporter: Eli Collins
Assignee: Eli Collins






[jira] [Commented] (HDFS-3119) Overreplicated block is not deleted even after the replication factor is reduced after sync followed by closing that file

2012-03-29 Thread Tsz Wo (Nicholas), SZE (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241441#comment-13241441
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-3119:
--

Instead of casting numExpectedReplicas to short, could you change the 
declarations to short?  i.e.

{code}
+++ 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
   (working copy)
@@ -2767,7 +2767,7 @@
 }
   }
 
-  public void checkReplication(Block block, int numExpectedReplicas) {
+  public void checkReplication(Block block, short numExpectedReplicas) {
 // filter out containingNodes that are marked for decommission.
 NumberReplicas number = countNodes(block);
 if (isNeededReplication(block, numExpectedReplicas, 
number.liveReplicas())) { 
===
--- 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
  (revision 1306664)
+++ 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
  (working copy)
@@ -2146,7 +2146,7 @@
* replication factor, then insert them into neededReplication
*/
   private void checkReplicationFactor(INodeFile file) {
-int numExpectedReplicas = file.getReplication();
+short numExpectedReplicas = file.getReplication();
 Block[] pendingBlocks = file.getBlocks();
 int nrBlocks = pendingBlocks.length;
 for (int i = 0; i < nrBlocks; i++) {
{code}
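
The suggested change works without further casts because a short argument widens implicitly to int at every downstream use; only the reverse direction (int to short) needs an explicit cast in Java. A standalone illustration of that point — the method below is a simplified stand-in for the comparison, not the actual BlockManager code:

```java
public class ReplicationWidthDemo {
    // Mirrors the signature change suggested above: the parameter is short,
    // matching the short that INodeFile.getReplication() returns.
    static boolean isNeededReplication(short expected, int live) {
        // short widens to int implicitly, so the comparison needs no cast.
        return live < expected;
    }

    public static void main(String[] args) {
        short numExpectedReplicas = 3; // hypothetical replication factor
        System.out.println(isNeededReplication(numExpectedReplicas, 2)); // true
        System.out.println(isNeededReplication(numExpectedReplicas, 3)); // false
    }
}
```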


 Overreplicated block is not deleted even after the replication factor is 
 reduced after sync followed by closing that file
 

 Key: HDFS-3119
 URL: https://issues.apache.org/jira/browse/HDFS-3119
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.24.0
Reporter: J.Andreina
Assignee: Ashish Singhi
Priority: Minor
 Fix For: 0.24.0, 0.23.2

 Attachments: HDFS-3119.patch


 cluster setup:
 --
 1NN,2 DN,replication factor 2,block report interval 3sec ,block size-256MB
 step1: write a file filewrite.txt of size 90 bytes with sync (not closed) 
 step2: change the replication factor to 1 using the command: ./hdfs dfs 
 -setrep 1 /filewrite.txt
 step3: close the file
 * At the NN side, the "Decreasing replication from 2 to 1 for 
 /filewrite.txt" log message has occurred, but the overreplicated block is 
 not deleted even after the block report is sent from the DN
 * While listing the file in the console using ./hdfs dfs -ls, the 
 replication factor for that file is shown as 1
 * The fsck report for that file displays that the file is replicated to 2 
 datanodes





[jira] [Commented] (HDFS-3120) Provide ability to enable sync without append

2012-03-29 Thread Tsz Wo (Nicholas), SZE (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241448#comment-13241448
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-3120:
--

 @Nicholas, I'm OK always enabling the sync paths on 1.X as well, ...

That's great!  Thanks.

 Provide ability to enable sync without append
 -

 Key: HDFS-3120
 URL: https://issues.apache.org/jira/browse/HDFS-3120
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 1.0.1
Reporter: Eli Collins
Assignee: Eli Collins






[jira] [Commented] (HDFS-3120) Provide ability to enable sync without append

2012-03-29 Thread Eli Collins (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241465#comment-13241465
 ] 

Eli Collins commented on HDFS-3120:
---

Forgot to mention: given that we know there are data loss issues with append, 
why don't we remove the ability to enable it at the same time? It's an 
incompatible change; however, there's also no reason to let users shoot 
themselves in the foot.

 Provide ability to enable sync without append
 -

 Key: HDFS-3120
 URL: https://issues.apache.org/jira/browse/HDFS-3120
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 1.0.1
Reporter: Eli Collins
Assignee: Eli Collins






[jira] [Commented] (HDFS-3120) Provide ability to enable sync without append

2012-03-29 Thread Eli Collins (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241469#comment-13241469
 ] 

Eli Collins commented on HDFS-3120:
---

This obviously only pertains to Hadoop 1.x btw.

 Provide ability to enable sync without append
 -

 Key: HDFS-3120
 URL: https://issues.apache.org/jira/browse/HDFS-3120
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 1.0.1
Reporter: Eli Collins
Assignee: Eli Collins






[jira] [Commented] (HDFS-3126) Journal stream from the namenode to backup needs to have a timeout

2012-03-29 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241498#comment-13241498
 ] 

Hadoop QA commented on HDFS-3126:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12520459/hdfs-3126.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests:
  org.apache.hadoop.cli.TestHDFSCLI

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2118//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2118//console

This message is automatically generated.

 Journal stream from the namenode to backup needs to have a timeout
 --

 Key: HDFS-3126
 URL: https://issues.apache.org/jira/browse/HDFS-3126
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha, name-node
Affects Versions: 0.24.0
Reporter: Hari Mankude
Assignee: Hari Mankude
 Attachments: hdfs-3126.patch








[jira] [Commented] (HDFS-2185) HA: HDFS portion of ZK-based FailoverController

2012-03-29 Thread Bikas Saha (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241500#comment-13241500
 ] 

Bikas Saha commented on HDFS-2185:
--

I am surprised that a single inElection state suffices instead of separate 
inElectionActive and inElectionStandby states. Aren't there actions that need 
to be performed differently when the FC is active or standby?

There is no state transition dependent on the result of transitionToActive(). 
If transitionToActive on the NN fails, then the FC should quitElection, IMO. 
Currently, it quits the election only on HM events. The same goes for 
transitionToStandby on the NN: if that fails, should we not do something?

Which state performs fencing? The state machine does not show it. Is it 
happening in the HAService?

IMO this state diagram has to be clearer about handling the success/failure of 
each operation. That is key to determining the robustness of the FC, which 
needs to be super robust by design, right?
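
One way to make the failure handling explicit is to fold the result of transitionToActive() into the state machine, quitting the election when the NN refuses to activate instead of staying stuck. The sketch below is a toy model of that single transition only; the state names and the boolean stand-in for the NN RPC are illustrative, not the actual ZKFC code (a real FC would call HAServiceProtocol.transitionToActive() and catch the failure):

```java
public class FailoverSketch {
    enum FcState { IN_ELECTION, ACTIVE, QUIT_ELECTION }

    // Toy stand-in for the RPC to the NameNode; success depends on
    // whether the NN can actually assume the active role.
    static boolean transitionToActive(boolean nnHealthy) {
        return nnHealthy;
    }

    // On winning the election, either become ACTIVE or quit the election
    // so another FC can take over, rather than remaining IN_ELECTION.
    static FcState onElectionWon(boolean nnHealthy) {
        return transitionToActive(nnHealthy) ? FcState.ACTIVE
                                             : FcState.QUIT_ELECTION;
    }

    public static void main(String[] args) {
        System.out.println(onElectionWon(true));  // ACTIVE
        System.out.println(onElectionWon(false)); // QUIT_ELECTION
    }
}
```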

 HA: HDFS portion of ZK-based FailoverController
 ---

 Key: HDFS-2185
 URL: https://issues.apache.org/jira/browse/HDFS-2185
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: auto-failover, ha
Affects Versions: 0.24.0, 0.23.3
Reporter: Eli Collins
Assignee: Todd Lipcon
 Attachments: Failover_Controller.jpg, hdfs-2185.txt, hdfs-2185.txt, 
 hdfs-2185.txt, hdfs-2185.txt, zkfc-design.pdf, zkfc-design.pdf, 
 zkfc-design.tex


 This jira is for a ZK-based FailoverController daemon. The FailoverController 
 is a separate daemon from the NN that does the following:
 * Initiates leader election (via ZK) when necessary
 * Performs health monitoring (aka failure detection)
 * Performs fail-over (standby to active and active to standby transitions)
 * Heartbeats to ensure the liveness
 It should have the same/similar interface as the Linux HA RM to aid 
 pluggability.





[jira] [Commented] (HDFS-3126) Journal stream from the namenode to backup needs to have a timeout

2012-03-29 Thread Hari Mankude (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241520#comment-13241520
 ] 

Hari Mankude commented on HDFS-3126:


I don't think testhdfscli failure is related to the patch.

 Journal stream from the namenode to backup needs to have a timeout
 --

 Key: HDFS-3126
 URL: https://issues.apache.org/jira/browse/HDFS-3126
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha, name-node
Affects Versions: 0.24.0
Reporter: Hari Mankude
Assignee: Hari Mankude
 Attachments: hdfs-3126.patch








[jira] [Updated] (HDFS-3142) TestHDFSCLI.testAll is failing

2012-03-29 Thread Brandon Li (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-3142:
-

Attachment: HDFS-3142.patch

 TestHDFSCLI.testAll is failing
 --

 Key: HDFS-3142
 URL: https://issues.apache.org/jira/browse/HDFS-3142
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0
Reporter: Eli Collins
Assignee: Brandon Li
Priority: Blocker
 Attachments: HDFS-3142.patch


 TestHDFSCLI.testAll is failing in the latest trunk/23 builds. Last good build 
 was Mar 23rd.





[jira] [Updated] (HDFS-3121) hdfs tests for HADOOP-8014

2012-03-29 Thread John George (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John George updated HDFS-3121:
--

Status: Patch Available  (was: Open)

 hdfs tests for HADOOP-8014
 --

 Key: HDFS-3121
 URL: https://issues.apache.org/jira/browse/HDFS-3121
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.23.2, 0.23.3
Reporter: John George
Assignee: John George
 Attachments: hdfs-3121.patch, hdfs-3121.patch, hdfs-3121.patch


 This JIRA is to write tests for viewing quota using viewfs.





[jira] [Updated] (HDFS-3137) Bump LAST_UPGRADABLE_LAYOUT_VERSION

2012-03-29 Thread Tsz Wo (Nicholas), SZE (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-3137:
-

Component/s: name-node

Nice cleanup!  Some comments:
- Could you keep and revise the following comments?  Otherwise, it is not easy 
to see what the code is doing.
{code}
//DataStorage
-  //If upgrading from a relatively new version, we only need to create
-  //links with the same filename.  This can be done in bulk (much faster).
{code}
{code}
//FSImageSerialization
-// These locations are not used at all
{code}

- Should OP_DATANODE_ADD, OP_DATANODE_REMOVE and the classes be removed?

- Replace
{code}
//FSImageFormat
+PermissionStatus permissions = namesystem.getUpgradePermission();
+permissions = PermissionStatus.read(in);
{code}
with
{code}
+PermissionStatus permissions = PermissionStatus.read(in);
{code}
remove
{code}
//FSEditLogLoader
PermissionStatus permissions = fsNamesys.getUpgradePermission();
if (addCloseOp.permissions != null) {
  permissions = addCloseOp.permissions;
}
{code}
and then remove FSNamesystem.getUpgradePermission().

- FSImageFormat.readNumFiles(..) only has one line.  Let's remove it?
{code}
 private long readNumFiles(DataInputStream in)
 throws IOException {
-  int imgVersion = getLayoutVersion();
-
-  if (LayoutVersion.supports(Feature.NAMESPACE_QUOTA, imgVersion)) {
-return in.readLong();
-  } else {
-return in.readInt();
-  }
+  return in.readLong();
 }
{code}

 Bump LAST_UPGRADABLE_LAYOUT_VERSION
 ---

 Key: HDFS-3137
 URL: https://issues.apache.org/jira/browse/HDFS-3137
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Eli Collins
Assignee: Eli Collins
 Attachments: hdfs-3137.txt, hdfs-3137.txt


 LAST_UPGRADABLE_LAYOUT_VERSION is currently -7, which corresponds to Hadoop 
 0.14. How about we bump it to -16, which corresponds to Hadoop 0.18?
 I don't think many people are using releases older than v0.18, and those who 
 are probably want to upgrade to the latest stable release (v1.0). To upgrade 
 to e.g. 0.23 they can still upgrade to v1.0 first and then upgrade again from 
 there.





[jira] [Updated] (HDFS-3142) TestHDFSCLI.testAll is failing

2012-03-29 Thread Tsz Wo (Nicholas), SZE (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-3142:
-

Status: Patch Available  (was: Open)

 TestHDFSCLI.testAll is failing
 --

 Key: HDFS-3142
 URL: https://issues.apache.org/jira/browse/HDFS-3142
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0
Reporter: Eli Collins
Assignee: Brandon Li
Priority: Blocker
 Attachments: HDFS-3142.patch


 TestHDFSCLI.testAll is failing in the latest trunk/23 builds. Last good build 
 was Mar 23rd.





[jira] [Updated] (HDFS-3050) refactor OEV to share more code with the NameNode

2012-03-29 Thread Colin Patrick McCabe (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-3050:
---

Attachment: HDFS-3050.015.patch

* set 2-space indentation in output XML to match old XML

* add Javadoc comments to XMLUtils

 refactor OEV to share more code with the NameNode
 -

 Key: HDFS-3050
 URL: https://issues.apache.org/jira/browse/HDFS-3050
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HDFS-3050.006.patch, HDFS-3050.007.patch, 
 HDFS-3050.008.patch, HDFS-3050.009.patch, HDFS-3050.010.patch, 
 HDFS-3050.011.patch, HDFS-3050.012.patch, HDFS-3050.014.patch, 
 HDFS-3050.015.patch


 Current, OEV (the offline edits viewer) re-implements all of the opcode 
 parsing logic found in the NameNode.  This duplicated code creates a 
 maintenance burden for us.
 OEV should be refactored to simply use the normal EditLog parsing code, 
 rather than rolling its own.





[jira] [Commented] (HDFS-3142) TestHDFSCLI.testAll is failing

2012-03-29 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241614#comment-13241614
 ] 

Hadoop QA commented on HDFS-3142:
-

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12520483/HDFS-3142.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 24 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2119//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2119//console

This message is automatically generated.

 TestHDFSCLI.testAll is failing
 --

 Key: HDFS-3142
 URL: https://issues.apache.org/jira/browse/HDFS-3142
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0
Reporter: Eli Collins
Assignee: Brandon Li
Priority: Blocker
 Attachments: HDFS-3142.patch


 TestHDFSCLI.testAll is failing in the latest trunk/23 builds. Last good build 
 was Mar 23rd.





[jira] [Updated] (HDFS-3066) cap space usage of default log4j rolling policy (hdfs specific changes)

2012-03-29 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3066:
--

  Resolution: Fixed
   Fix Version/s: 2.0.0
Target Version/s:   (was: 0.23.3)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I've committed this and merged to branch-2. Thanks Pat!

 cap space usage of default log4j rolling policy (hdfs specific changes)
 ---

 Key: HDFS-3066
 URL: https://issues.apache.org/jira/browse/HDFS-3066
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: scripts
Reporter: Patrick Hunt
Assignee: Patrick Hunt
 Fix For: 2.0.0

 Attachments: HDFS-3066.patch


 see HADOOP-8149 for background on this.





[jira] [Commented] (HDFS-3066) cap space usage of default log4j rolling policy (hdfs specific changes)

2012-03-29 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241760#comment-13241760
 ] 

Hudson commented on HDFS-3066:
--

Integrated in Hadoop-Hdfs-trunk-Commit #2024 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2024/])
HDFS-3066. Cap space usage of default log4j rolling policy. Contributed by 
Patrick Hunt (Revision 1307100)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1307100
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs






[jira] [Commented] (HDFS-3066) cap space usage of default log4j rolling policy (hdfs specific changes)

2012-03-29 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241764#comment-13241764
 ] 

Hudson commented on HDFS-3066:
--

Integrated in Hadoop-Common-trunk-Commit #1949 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1949/])
HDFS-3066. Cap space usage of default log4j rolling policy. Contributed by 
Patrick Hunt (Revision 1307100)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1307100
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs






[jira] [Updated] (HDFS-3044) fsck move should be non-destructive by default

2012-03-29 Thread Colin Patrick McCabe (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-3044:
---

Attachment: HDFS-3050-b1.001.patch

* port to branch-1

 fsck move should be non-destructive by default
 --

 Key: HDFS-3044
 URL: https://issues.apache.org/jira/browse/HDFS-3044
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Eli Collins
Assignee: Colin Patrick McCabe
 Fix For: 2.0.0

 Attachments: HDFS-3044.002.patch, HDFS-3044.003.patch, 
 HDFS-3050-b1.001.patch


 The fsck move behavior in the code and originally articulated in HADOOP-101 
 is:
 {quote}Current failure modes for DFS involve blocks that are completely 
 missing. The only way to fix them would be to recover chains of blocks and 
 put them into lost+found{quote}
 A directory is created with the file name, the blocks that are accessible are 
 created as individual files in this directory, then the original file is 
 removed. 
 I suspect the rationale for this behavior was that you can't use files that 
 are missing locations, and copying the blocks out as files at least makes 
 part of the files accessible. However, this behavior can also result in 
 permanent data loss. E.g.:
 - Some datanodes don't come up (e.g. due to HW issues) and check in on 
 cluster startup; files whose blocks have all their replicas on this set of 
 datanodes are marked corrupt
 - Admin does fsck move, which deletes the corrupt files and saves whatever 
 blocks were available
 - The HW issues with the datanodes are resolved; they are started and join 
 the cluster. The NN tells them to delete their blocks for the corrupt files, 
 since the files were deleted. 
 I think we should:
 - Make fsck move non-destructive by default (e.g. just do a move into 
 lost+found)
 - Make the destructive behavior optional (e.g. --destructive, so admins 
 think about what they're doing)
 - Provide better sanity checks and warnings, e.g. if you're running fsck and 
 not all the slaves have checked in (if using dfs.hosts), fsck should print a 
 warning that an admin has to override before doing something destructive





[jira] [Updated] (HDFS-3044) fsck move should be non-destructive by default

2012-03-29 Thread Colin Patrick McCabe (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-3044:
---

Attachment: HDFS-3044-b1.002.patch

* fix patch name

 fsck move should be non-destructive by default
 --

 Key: HDFS-3044
 URL: https://issues.apache.org/jira/browse/HDFS-3044
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Eli Collins
Assignee: Colin Patrick McCabe
 Fix For: 2.0.0

 Attachments: HDFS-3044-b1.002.patch, HDFS-3044.002.patch, 
 HDFS-3044.003.patch






[jira] [Updated] (HDFS-3044) fsck move should be non-destructive by default

2012-03-29 Thread Colin Patrick McCabe (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-3044:
---

Attachment: (was: HDFS-3050-b1.001.patch)

 fsck move should be non-destructive by default
 --

 Key: HDFS-3044
 URL: https://issues.apache.org/jira/browse/HDFS-3044
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Eli Collins
Assignee: Colin Patrick McCabe
 Fix For: 2.0.0

 Attachments: HDFS-3044-b1.002.patch, HDFS-3044.002.patch, 
 HDFS-3044.003.patch






[jira] [Commented] (HDFS-3066) cap space usage of default log4j rolling policy (hdfs specific changes)

2012-03-29 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241783#comment-13241783
 ] 

Hudson commented on HDFS-3066:
--

Integrated in Hadoop-Mapreduce-trunk-Commit #1962 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1962/])
HDFS-3066. Cap space usage of default log4j rolling policy. Contributed by 
Patrick Hunt (Revision 1307100)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1307100
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs






[jira] [Commented] (HDFS-3050) refactor OEV to share more code with the NameNode

2012-03-29 Thread Tsz Wo (Nicholas), SZE (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241827#comment-13241827
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-3050:
--

Hi Colin, thanks for making the changes.

- This is not a refactoring anymore.  Could you revise the summary and 
description?

- There are editlog xml format changes.  Is there a compatibility issue?

- Please make sure that the patch passes Jenkins.

 refactor OEV to share more code with the NameNode
 -

 Key: HDFS-3050
 URL: https://issues.apache.org/jira/browse/HDFS-3050
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HDFS-3050.006.patch, HDFS-3050.007.patch, 
 HDFS-3050.008.patch, HDFS-3050.009.patch, HDFS-3050.010.patch, 
 HDFS-3050.011.patch, HDFS-3050.012.patch, HDFS-3050.014.patch, 
 HDFS-3050.015.patch


 Currently, OEV (the offline edits viewer) re-implements all of the opcode 
 parsing logic found in the NameNode.  This duplicated code creates a 
 maintenance burden for us.
 OEV should be refactored to simply use the normal EditLog parsing code, 
 rather than rolling its own.





[jira] [Updated] (HDFS-3044) fsck move should be non-destructive by default

2012-03-29 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3044:
--

Fix Version/s: 1.1.0

+1 to the branch-1 patch. I've committed this. Thanks Colin.

 fsck move should be non-destructive by default
 --

 Key: HDFS-3044
 URL: https://issues.apache.org/jira/browse/HDFS-3044
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Eli Collins
Assignee: Colin Patrick McCabe
 Fix For: 1.1.0, 2.0.0

 Attachments: HDFS-3044-b1.002.patch, HDFS-3044.002.patch, 
 HDFS-3044.003.patch






[jira] [Commented] (HDFS-3142) TestHDFSCLI.testAll is failing

2012-03-29 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241841#comment-13241841
 ] 

Aaron T. Myers commented on HDFS-3142:
--

+1, the patch looks good to me. I'll commit this momentarily.

 TestHDFSCLI.testAll is failing
 --

 Key: HDFS-3142
 URL: https://issues.apache.org/jira/browse/HDFS-3142
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0
Reporter: Eli Collins
Assignee: Brandon Li
Priority: Blocker
 Attachments: HDFS-3142.patch


 TestHDFSCLI.testAll is failing in the latest trunk/23 builds. Last good build 
 was Mar 23rd.





[jira] [Updated] (HDFS-3142) TestHDFSCLI.testAll is failing

2012-03-29 Thread Aaron T. Myers (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-3142:
-

   Resolution: Fixed
Fix Version/s: 2.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've just committed this to trunk and branch-2. Thanks a lot for the 
contribution, Brandon.

 TestHDFSCLI.testAll is failing
 --

 Key: HDFS-3142
 URL: https://issues.apache.org/jira/browse/HDFS-3142
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0
Reporter: Eli Collins
Assignee: Brandon Li
Priority: Blocker
 Fix For: 2.0.0

 Attachments: HDFS-3142.patch






[jira] [Commented] (HDFS-3142) TestHDFSCLI.testAll is failing

2012-03-29 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241851#comment-13241851
 ] 

Hudson commented on HDFS-3142:
--

Integrated in Hadoop-Hdfs-trunk-Commit #2026 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2026/])
HDFS-3142. TestHDFSCLI.testAll is failing. Contributed by Brandon Li. 
(Revision 1307134)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1307134
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml






[jira] [Commented] (HDFS-3044) fsck move should be non-destructive by default

2012-03-29 Thread Eli Collins (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241854#comment-13241854
 ] 

Eli Collins commented on HDFS-3044:
---

Oops, I ran the test for sanity, and it looks like we need another rev.





[jira] [Commented] (HDFS-3142) TestHDFSCLI.testAll is failing

2012-03-29 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241858#comment-13241858
 ] 

Hudson commented on HDFS-3142:
--

Integrated in Hadoop-Common-trunk-Commit #1951 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1951/])
HDFS-3142. TestHDFSCLI.testAll is failing. Contributed by Brandon Li. 
(Revision 1307134)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1307134
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml






[jira] [Created] (HDFS-3163) TestHDFSCLI.testAll fails if the user name is not all lowercase

2012-03-29 Thread Brandon Li (Created) (JIRA)
TestHDFSCLI.testAll fails if the user name is not all lowercase
---

 Key: HDFS-3163
 URL: https://issues.apache.org/jira/browse/HDFS-3163
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Reporter: Brandon Li
Priority: Trivial


In the test resource file testHDFSConf.xml, the test comparators expect the user 
name to be all lowercase. 
If the user running the test has an uppercase letter in the username (e.g., Brandon 
instead of brandon), many RegexpComparator tests will fail. The following is one 
example:
{noformat}
<comparator>
  <type>RegexpComparator</type>
  <expected-output>^-rw-r--r--( )*1( )*[a-z]*( )*supergroup( )*0( )*[0-9]{4,}-[0-9]{2,}-[0-9]{2,} [0-9]{2,}:[0-9]{2,}( )*/file1</expected-output>
</comparator>
{noformat}
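The failure mode is easy to reproduce outside the test harness. This Python sketch (the pattern is simplified to the owner column, and the usernames are illustrative, not taken from the real testHDFSConf.xml) shows why `[a-z]*` rejects a mixed-case username, along with one possible widening of the character class:

```python
import re

# The comparator's owner column only allows lowercase characters, so
# "Brandon" fails where "brandon" matches.
old_owner = r"[a-z]*"
new_owner = r"[A-Za-z0-9]*"  # one possible widening of the character class

pat_old = re.compile(r"^-rw-r--r--( )*1( )*" + old_owner + r"( )*supergroup")
pat_new = re.compile(r"^-rw-r--r--( )*1( )*" + new_owner + r"( )*supergroup")

line_lower = "-rw-r--r--   1 brandon supergroup"
line_mixed = "-rw-r--r--   1 Brandon supergroup"

print(bool(pat_old.match(line_lower)))  # True
print(bool(pat_old.match(line_mixed)))  # False: 'B' is outside [a-z]
print(bool(pat_new.match(line_mixed)))  # True
```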





[jira] [Commented] (HDFS-3163) TestHDFSCLI.testAll fails if the user name is not all lowercase

2012-03-29 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241885#comment-13241885
 ] 

Aaron T. Myers commented on HDFS-3163:
--

Good find, Brandon. Pretty funny, too. :)

If you post a patch, I'll review it promptly.





[jira] [Updated] (HDFS-3137) Bump LAST_UPGRADABLE_LAYOUT_VERSION

2012-03-29 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3137:
--

Attachment: hdfs-3137.txt

Thanks for the great feedback Nicholas.

Patch attached, took your suggestions (and yes we can remove 
OP_DATANODE_ADD/REMOVE now).

 Bump LAST_UPGRADABLE_LAYOUT_VERSION
 ---

 Key: HDFS-3137
 URL: https://issues.apache.org/jira/browse/HDFS-3137
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Eli Collins
Assignee: Eli Collins
 Attachments: hdfs-3137.txt, hdfs-3137.txt, hdfs-3137.txt


 LAST_UPGRADABLE_LAYOUT_VERSION is currently -7, which corresponds to Hadoop 
 0.14. How about we bump it to -16, which corresponds to Hadoop 0.18?
 I don't think many people are using releases older than v0.18, and those who 
 are probably want to upgrade to the latest stable release (v1.0). To upgrade 
 to eg 0.23 they can still upgrade to v1.0 first and then upgrade again from 
 there.
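 The direction of the comparison trips people up because layout versions are 
 negative and decrease as they get newer. A minimal sketch of the upgradability 
 check being tightened (the constant's semantics follow the description above; 
 the function name is hypothetical, not the NameNode's actual method):

```python
LAST_UPGRADABLE_LAYOUT_VERSION = -16  # proposed bump (was -7, i.e. Hadoop 0.14)

def can_upgrade_directly(stored_layout_version: int) -> bool:
    # Newer layouts have more negative version numbers, so a storage
    # directory is directly upgradable only if its stored version is at
    # or past (<=) the cutoff.
    return stored_layout_version <= LAST_UPGRADABLE_LAYOUT_VERSION

print(can_upgrade_directly(-16))  # True: a Hadoop 0.18 image can upgrade
print(can_upgrade_directly(-7))   # False: a 0.14 image must hop through v1.0
```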





[jira] [Commented] (HDFS-3120) Provide ability to enable sync without append

2012-03-29 Thread Eli Collins (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241903#comment-13241903
 ] 

Eli Collins commented on HDFS-3120:
---

Latest proposal:

For 1.x:
- Always enable the sync path (currently only enabled if dfs.support.append is 
set)
- Remove the dfs.support.append configuration option. Let's keep the code paths 
though in case we ever fix append on branch-1, in which case we can add the 
config option back

For 2.x:
- Always enable the hsync/hflush path
- The dfs.support.append option only enables the append-specific paths (since the 
hsync/hflush paths are now always on). Append will still default to being 
enabled so there is no net effect by default

Sound good?
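On the 2.x side, this proposal means a config like the following keeps its current meaning for append only. A sketch, with the default value described above:

```xml
<!-- hsync/hflush are always available under this proposal; this flag now
     gates only append() itself. Shown with its proposed default, so
     out-of-the-box behavior is unchanged. -->
<property>
  <name>dfs.support.append</name>
  <value>true</value>
</property>
```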

 Provide ability to enable sync without append
 -

 Key: HDFS-3120
 URL: https://issues.apache.org/jira/browse/HDFS-3120
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 1.0.1
Reporter: Eli Collins
Assignee: Eli Collins

 The work on branch-20-append was to support *sync*, for durable HBase WALs, 
 not *append*. The branch-20-append implementation is known to be buggy. 
 There's been confusion about this, we often answer queries on the list [like 
 this|http://search-hadoop.com/m/wfed01VOIJ5]. Unfortunately, the way to 
 enable correct sync on branch-1 for HBase is to set dfs.support.append to 
 true in your config, which has the side effect of enabling append (which we 
 don't want to do).
 Let's add a new *dfs.support.sync* option that enables working sync (which is 
 basically the current dfs.support.append flag modulo one place where it's not 
 referring to sync). For compatibility, if dfs.support.append is set, 
 dfs.support.sync will be set as well. This way someone can enable sync for 
 HBase and still keep the current behavior that if dfs.support.append is not 
 set then an append operation will result in an IOE indicating append is not 
 supported. We should do this on trunk as well, as there's no reason to 
 conflate hsync and append with a single config even if append works.





[jira] [Commented] (HDFS-3120) Provide ability to enable sync without append

2012-03-29 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241908#comment-13241908
 ] 

Todd Lipcon commented on HDFS-3120:
---

sounds good to me, +1 for that proposal





[jira] [Commented] (HDFS-3142) TestHDFSCLI.testAll is failing

2012-03-29 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241918#comment-13241918
 ] 

Hudson commented on HDFS-3142:
--

Integrated in Hadoop-Mapreduce-trunk-Commit #1964 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1964/])
HDFS-3142. TestHDFSCLI.testAll is failing. Contributed by Brandon Li. 
(Revision 1307134)

 Result = ABORTED
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1307134
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml


 TestHDFSCLI.testAll is failing
 --

 Key: HDFS-3142
 URL: https://issues.apache.org/jira/browse/HDFS-3142
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0
Reporter: Eli Collins
Assignee: Brandon Li
Priority: Blocker
 Fix For: 2.0.0

 Attachments: HDFS-3142.patch


 TestHDFSCLI.testAll is failing in the latest trunk/23 builds. Last good build 
 was Mar 23rd.





[jira] [Updated] (HDFS-3137) Bump LAST_UPGRADABLE_LAYOUT_VERSION

2012-03-29 Thread Tsz Wo (Nicholas), SZE (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-3137:
-

Hadoop Flags: Reviewed

+1 patch looks good.

 Bump LAST_UPGRADABLE_LAYOUT_VERSION
 ---

 Key: HDFS-3137
 URL: https://issues.apache.org/jira/browse/HDFS-3137
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Eli Collins
Assignee: Eli Collins
 Attachments: hdfs-3137.txt, hdfs-3137.txt, hdfs-3137.txt


 LAST_UPGRADABLE_LAYOUT_VERSION is currently -7, which corresponds to Hadoop 
 0.14. How about we bump it to -16, which corresponds to Hadoop 0.18?
 I don't think many people are using releases older than v0.18, and those who 
 are probably want to upgrade to the latest stable release (v1.0). To upgrade 
 to eg 0.23 they can still upgrade to v1.0 first and then upgrade again from 
 there.
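The version arithmetic here can be confusing because layout versions are negative and grow more negative as the on-disk format evolves. A minimal sketch of the check (class and method names are hypothetical, not the actual Storage code):

```java
// Illustrative sketch only: an image is upgradable when its stored layout
// version is at least as new (numerically as small or smaller) as
// LAST_UPGRADABLE_LAYOUT_VERSION.
public class LayoutCheck {
    // -16 corresponds to Hadoop 0.18; the previous value, -7, to 0.14.
    static final int LAST_UPGRADABLE_LAYOUT_VERSION = -16;

    static boolean isUpgradable(int storedLayoutVersion) {
        return storedLayoutVersion <= LAST_UPGRADABLE_LAYOUT_VERSION;
    }

    public static void main(String[] args) {
        System.out.println(isUpgradable(-16)); // a 0.18 image: upgradable
        System.out.println(isUpgradable(-7));  // a 0.14 image: too old
    }
}
```

So after the bump, a -7 (0.14) image is rejected, while anything from -16 (0.18) onward still upgrades directly.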





[jira] [Commented] (HDFS-3120) Provide ability to enable sync without append

2012-03-29 Thread Tsz Wo (Nicholas), SZE (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241930#comment-13241930
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-3120:
--

+1 sounds good to me too.

 Provide ability to enable sync without append
 -

 Key: HDFS-3120
 URL: https://issues.apache.org/jira/browse/HDFS-3120
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 1.0.1
Reporter: Eli Collins
Assignee: Eli Collins






[jira] [Commented] (HDFS-3137) Bump LAST_UPGRADABLE_LAYOUT_VERSION

2012-03-29 Thread Eli Collins (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241941#comment-13241941
 ] 

Eli Collins commented on HDFS-3137:
---

Thanks Nicholas!

Btw, I verified that a populated (both image and edit logs) 0.18.3 install 
upgrades successfully to a trunk build (3.0-SNAPSHOT) with this change:

{noformat}
2012-03-29 16:50:05,947 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Sta
rting upgrade of image directory /home/eli/hadoop/dirs3/name.
   old LV = -16; old CTime = 0.
   new LV = -40; new CTime = 1333065005946
...
2012-03-29 16:50:06,003 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
Upgrade of /home/eli/hadoop/dirs3/name is complete.
{noformat}

 Bump LAST_UPGRADABLE_LAYOUT_VERSION
 ---

 Key: HDFS-3137
 URL: https://issues.apache.org/jira/browse/HDFS-3137
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Eli Collins
Assignee: Eli Collins
 Attachments: hdfs-3137.txt, hdfs-3137.txt, hdfs-3137.txt







[jira] [Commented] (HDFS-3137) Bump LAST_UPGRADABLE_LAYOUT_VERSION

2012-03-29 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241946#comment-13241946
 ] 

Hadoop QA commented on HDFS-3137:
-

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12520526/hdfs-3137.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 15 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2121//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2121//console

This message is automatically generated.

 Bump LAST_UPGRADABLE_LAYOUT_VERSION
 ---

 Key: HDFS-3137
 URL: https://issues.apache.org/jira/browse/HDFS-3137
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Eli Collins
Assignee: Eli Collins
 Attachments: hdfs-3137.txt, hdfs-3137.txt, hdfs-3137.txt







[jira] [Commented] (HDFS-3137) Bump LAST_UPGRADABLE_LAYOUT_VERSION

2012-03-29 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241951#comment-13241951
 ] 

Hudson commented on HDFS-3137:
--

Integrated in Hadoop-Hdfs-trunk-Commit #2027 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2027/])
HDFS-3137. Bump LAST_UPGRADABLE_LAYOUT_VERSION to -16. Contributed by Eli 
Collins (Revision 1307173)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1307173
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOpCodes.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageSerialization.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUpgradeFromImage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/TestDistributedUpgrade.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/TestOfflineEditsViewer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/hadoop-14-dfs-dir.tgz
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/hadoop-dfs-dir.txt


 Bump LAST_UPGRADABLE_LAYOUT_VERSION
 ---

 Key: HDFS-3137
 URL: https://issues.apache.org/jira/browse/HDFS-3137
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Eli Collins
Assignee: Eli Collins
 Attachments: hdfs-3137.txt, hdfs-3137.txt, hdfs-3137.txt







[jira] [Commented] (HDFS-3137) Bump LAST_UPGRADABLE_LAYOUT_VERSION

2012-03-29 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241952#comment-13241952
 ] 

Hudson commented on HDFS-3137:
--

Integrated in Hadoop-Common-trunk-Commit #1952 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1952/])
HDFS-3137. Bump LAST_UPGRADABLE_LAYOUT_VERSION to -16. Contributed by Eli 
Collins (Revision 1307173)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1307173
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOpCodes.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageSerialization.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUpgradeFromImage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/TestDistributedUpgrade.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/TestOfflineEditsViewer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/hadoop-14-dfs-dir.tgz
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/hadoop-dfs-dir.txt


 Bump LAST_UPGRADABLE_LAYOUT_VERSION
 ---

 Key: HDFS-3137
 URL: https://issues.apache.org/jira/browse/HDFS-3137
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Eli Collins
Assignee: Eli Collins
 Attachments: hdfs-3137.txt, hdfs-3137.txt, hdfs-3137.txt







[jira] [Updated] (HDFS-3137) Bump LAST_UPGRADABLE_LAYOUT_VERSION to -16

2012-03-29 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3137:
--

Summary: Bump LAST_UPGRADABLE_LAYOUT_VERSION to -16  (was: Bump 
LAST_UPGRADABLE_LAYOUT_VERSION)

 Bump LAST_UPGRADABLE_LAYOUT_VERSION to -16
 --

 Key: HDFS-3137
 URL: https://issues.apache.org/jira/browse/HDFS-3137
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Eli Collins
Assignee: Eli Collins
 Fix For: 2.0.0

 Attachments: hdfs-3137.txt, hdfs-3137.txt, hdfs-3137.txt







[jira] [Updated] (HDFS-3137) Bump LAST_UPGRADABLE_LAYOUT_VERSION

2012-03-29 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3137:
--

  Resolution: Fixed
   Fix Version/s: 2.0.0
Target Version/s:   (was: 2.0.0)
Release Note: Upgrade from Hadoop versions earlier than 0.18 is not 
supported as of 2.0. To upgrade from an earlier release, first upgrade to 0.18, 
and then upgrade again from there.
Hadoop Flags: Incompatible change, Reviewed  (was: Reviewed)
  Status: Resolved  (was: Patch Available)

I've committed this and merged to branch-2. Thanks for the review Nicholas. 

 Bump LAST_UPGRADABLE_LAYOUT_VERSION
 ---

 Key: HDFS-3137
 URL: https://issues.apache.org/jira/browse/HDFS-3137
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Eli Collins
Assignee: Eli Collins
 Fix For: 2.0.0

 Attachments: hdfs-3137.txt, hdfs-3137.txt, hdfs-3137.txt







[jira] [Updated] (HDFS-3138) Move DatanodeInfo#ipcPort and hostName to DatanodeID

2012-03-29 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3138:
--

Attachment: hdfs-3138.txt

Patch attached.

 Move DatanodeInfo#ipcPort and hostName to DatanodeID
 

 Key: HDFS-3138
 URL: https://issues.apache.org/jira/browse/HDFS-3138
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Eli Collins
Assignee: Eli Collins
 Attachments: hdfs-3138.txt, hdfs-3138.txt


 We can fix the following TODO once HDFS-3137 is committed. Also the hostName 
 field should be moved as well (it's not ephemeral, just gets set on 
 registration).
 {code}
 //TODO: move it to DatanodeID once DatanodeID is not stored in FSImage
 out.writeShort(ipcPort);
 {code}





[jira] [Commented] (HDFS-3137) Bump LAST_UPGRADABLE_LAYOUT_VERSION to -16

2012-03-29 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241960#comment-13241960
 ] 

Hudson commented on HDFS-3137:
--

Integrated in Hadoop-Hdfs-trunk-Commit #2028 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2028/])
Move HDFS-3137 to the right place in CHANGES.txt (Revision 1307174)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1307174
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Bump LAST_UPGRADABLE_LAYOUT_VERSION to -16
 --

 Key: HDFS-3137
 URL: https://issues.apache.org/jira/browse/HDFS-3137
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Eli Collins
Assignee: Eli Collins
 Fix For: 2.0.0

 Attachments: hdfs-3137.txt, hdfs-3137.txt, hdfs-3137.txt







[jira] [Commented] (HDFS-3137) Bump LAST_UPGRADABLE_LAYOUT_VERSION to -16

2012-03-29 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241962#comment-13241962
 ] 

Hudson commented on HDFS-3137:
--

Integrated in Hadoop-Common-trunk-Commit #1953 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1953/])
Move HDFS-3137 to the right place in CHANGES.txt (Revision 1307174)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1307174
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Bump LAST_UPGRADABLE_LAYOUT_VERSION to -16
 --

 Key: HDFS-3137
 URL: https://issues.apache.org/jira/browse/HDFS-3137
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Eli Collins
Assignee: Eli Collins
 Fix For: 2.0.0

 Attachments: hdfs-3137.txt, hdfs-3137.txt, hdfs-3137.txt







[jira] [Created] (HDFS-3164) Move DatanodeInfo#hostName to DatanodeID

2012-03-29 Thread Eli Collins (Created) (JIRA)
Move DatanodeInfo#hostName to DatanodeID


 Key: HDFS-3164
 URL: https://issues.apache.org/jira/browse/HDFS-3164
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Reporter: Eli Collins
Assignee: Eli Collins


Like HDFS-3138 (the ipcPort), the hostName field in DatanodeInfo is not 
ephemeral and should be in DatanodeID. This also lets us fix the issue 
where the DatanodeID#name field is overloaded (the DN sets it to a hostname, 
then the NN clobbers it with an IP, and then the DN clobbers its hostName 
field with this IP). If the DN can specify both a name and a hostname in the 
DatanodeID then this code becomes simpler. 
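A hypothetical simplification of the resulting shape (not the actual Hadoop class; field names and example values are illustrative): keeping the transfer address and the DN-reported hostname as separate fields means neither side clobbers the other.

```java
// Hypothetical, simplified DatanodeID: "name" carries the transfer
// address (ip:port, normalized by the NN), while hostName carries what
// the DN reported at registration. Neither field is overwritten.
public class DatanodeID {
    private final String name;     // e.g. "10.0.0.5:50010" (assumed format)
    private final String hostName; // e.g. "dn1.example.com"

    public DatanodeID(String name, String hostName) {
        this.name = name;
        this.hostName = hostName;
    }

    public String getName()     { return name; }
    public String getHostName() { return hostName; }
}
```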





[jira] [Updated] (HDFS-3138) Move DatanodeInfo#ipcPort to DatanodeID

2012-03-29 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3138:
--

Description: 
We can fix the following TODO once HDFS-3137 is committed.

{code}
//TODO: move it to DatanodeID once DatanodeID is not stored in FSImage
out.writeShort(ipcPort);
{code}

  was:
We can fix the following TODO once HDFS-3137 is committed. Also the hostName 
field should be moved as well (it's not ephemeral, just gets set on 
registration).

{code}
//TODO: move it to DatanodeID once DatanodeID is not stored in FSImage
out.writeShort(ipcPort);
{code}

Summary: Move DatanodeInfo#ipcPort to DatanodeID  (was: Move 
DatanodeInfo#ipcPort and hostName to DatanodeID)

Filed HDFS-3164 to handle the hostname field since it's a more involved change. 

 Move DatanodeInfo#ipcPort to DatanodeID
 ---

 Key: HDFS-3138
 URL: https://issues.apache.org/jira/browse/HDFS-3138
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Eli Collins
Assignee: Eli Collins
 Attachments: hdfs-3138.txt, hdfs-3138.txt







[jira] [Commented] (HDFS-3138) Move DatanodeInfo#ipcPort to DatanodeID

2012-03-29 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241963#comment-13241963
 ] 

Aaron T. Myers commented on HDFS-3138:
--

The change looks good to me. +1 pending Jenkins.

 Move DatanodeInfo#ipcPort to DatanodeID
 ---

 Key: HDFS-3138
 URL: https://issues.apache.org/jira/browse/HDFS-3138
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Eli Collins
Assignee: Eli Collins
 Attachments: hdfs-3138.txt, hdfs-3138.txt







[jira] [Updated] (HDFS-3138) Move DatanodeInfo#ipcPort to DatanodeID

2012-03-29 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3138:
--

Target Version/s: 2.0.0  (was: 0.23.3)
Hadoop Flags: Reviewed
  Status: Patch Available  (was: Open)

 Move DatanodeInfo#ipcPort to DatanodeID
 ---

 Key: HDFS-3138
 URL: https://issues.apache.org/jira/browse/HDFS-3138
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Eli Collins
Assignee: Eli Collins
 Attachments: hdfs-3138.txt, hdfs-3138.txt







[jira] [Commented] (HDFS-3137) Bump LAST_UPGRADABLE_LAYOUT_VERSION to -16

2012-03-29 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241976#comment-13241976
 ] 

Hudson commented on HDFS-3137:
--

Integrated in Hadoop-Mapreduce-trunk-Commit #1965 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1965/])
HDFS-3137. Bump LAST_UPGRADABLE_LAYOUT_VERSION to -16. Contributed by Eli 
Collins (Revision 1307173)

 Result = ABORTED
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1307173
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOpCodes.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageSerialization.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUpgradeFromImage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/TestDistributedUpgrade.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/TestOfflineEditsViewer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/hadoop-14-dfs-dir.tgz
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/hadoop-dfs-dir.txt


 Bump LAST_UPGRADABLE_LAYOUT_VERSION to -16
 --

 Key: HDFS-3137
 URL: https://issues.apache.org/jira/browse/HDFS-3137
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Eli Collins
Assignee: Eli Collins
 Fix For: 2.0.0

 Attachments: hdfs-3137.txt, hdfs-3137.txt, hdfs-3137.txt







[jira] [Commented] (HDFS-2998) OfflineImageViewer and ImageVisitor should be annotated public

2012-03-29 Thread Colin Patrick McCabe (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241977#comment-13241977
 ] 

Colin Patrick McCabe commented on HDFS-2998:


OK, I've looked at a little more of the context here, and I now agree that we 
should annotate these as public.

The API that OfflineImageViewer presents to the world does not depend directly 
on how the image is stored on disk, so we will be able to change the on-disk 
format in the future as long as we keep some backwards-compatibility story 
here. Sorry for the noise; I was just concerned about compatibility for a 
moment there.

 OfflineImageViewer and ImageVisitor should be annotated public
 --

 Key: HDFS-2998
 URL: https://issues.apache.org/jira/browse/HDFS-2998
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 0.23.1
Reporter: Aaron T. Myers

 The OfflineImageViewer is currently annotated as InterfaceAudience.Private. 
 It's intended for subclassing, so it should be annotated as the public API 
 that it is.
 The ImageVisitor class should similarly be annotated public (evolving is 
 fine). Note that it should also be made public in the Java sense; it's 
 currently package-private, which forces users to place their subclasses in 
 the same package to work around it.





[jira] [Updated] (HDFS-3126) Journal stream from the namenode to backup needs to have a timeout

2012-03-29 Thread Hari Mankude (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Mankude updated HDFS-3126:
---

Attachment: hdfs-3126.patch

 Journal stream from the namenode to backup needs to have a timeout
 --

 Key: HDFS-3126
 URL: https://issues.apache.org/jira/browse/HDFS-3126
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha, name-node
Affects Versions: 0.24.0
Reporter: Hari Mankude
Assignee: Hari Mankude
 Attachments: hdfs-3126.patch, hdfs-3126.patch








[jira] [Updated] (HDFS-3130) Move FSDataset implementation to a package

2012-03-29 Thread Tsz Wo (Nicholas), SZE (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-3130:
-

Attachment: svn_mv.sh
h3130_20120329b_svn_mv.patch

svn_mv.sh: added svn mv ReplicasMap.java $PKG/ReplicaMap.java

h3130_20120329b_svn_mv.patch: improved the log/error messages.

 Move FSDataset implementation to a package
 -

 Key: HDFS-3130
 URL: https://issues.apache.org/jira/browse/HDFS-3130
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: data-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h3130_20120328_svn_mv.patch, 
 h3130_20120329b_svn_mv.patch, svn_mv.sh, svn_mv.sh








[jira] [Updated] (HDFS-3130) Move FSDataset implementation to a package

2012-03-29 Thread Tsz Wo (Nicholas), SZE (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-3130:
-

Attachment: h3130_20120329b.patch

h3130_20120329b.patch: for Jenkins.

 Move FSDataset implementation to a package
 -

 Key: HDFS-3130
 URL: https://issues.apache.org/jira/browse/HDFS-3130
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: data-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h3130_20120328_svn_mv.patch, h3130_20120329b.patch, 
 h3130_20120329b_svn_mv.patch, svn_mv.sh, svn_mv.sh








[jira] [Commented] (HDFS-3126) Journal stream from the namenode to backup needs to have a timeout

2012-03-29 Thread Hari Mankude (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13241986#comment-13241986
 ] 

Hari Mankude commented on HDFS-3126:


Attaching a patch which increases the RPC timeout to 30 seconds.

Without an RPC timeout, the journal call from the active NN to the backup 
blocks indefinitely if the backup node has an uncontrolled shutdown. Also, the 
journal is added with 'required' set to true, which causes the active NN to 
exit when the backup node shuts down uncontrolled.
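The blocking behavior the timeout guards against can be seen with plain 
sockets. This sketch is illustrative only: the 500 ms value stands in for the 
30-second RPC timeout, and none of this is the actual Hadoop RPC plumbing.

```java
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class JournalTimeoutSketch {
    public static void main(String[] args) throws Exception {
        // A server that accepts connections but never replies, standing in
        // for a backup node that halted uncontrolled mid-stream.
        try (ServerSocket silent = new ServerSocket(0);
             Socket active = new Socket()) {
            active.connect(
                new InetSocketAddress("127.0.0.1", silent.getLocalPort()), 1000);
            active.setSoTimeout(500); // without this, the read below blocks forever
            try {
                active.getInputStream().read();
                System.out.println("unexpected reply");
            } catch (SocketTimeoutException e) {
                System.out.println("timed out as expected");
            }
        }
    }
}
```

With no timeout set, the `read()` never returns once the peer goes silent, 
which is exactly the hang the patch addresses.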



 Journal stream from the namenode to backup needs to have a timeout
 --

 Key: HDFS-3126
 URL: https://issues.apache.org/jira/browse/HDFS-3126
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha, name-node
Affects Versions: 0.24.0
Reporter: Hari Mankude
Assignee: Hari Mankude
 Attachments: hdfs-3126.patch, hdfs-3126.patch








[jira] [Updated] (HDFS-3130) Move FSDataset implementation to a package

2012-03-29 Thread Tsz Wo (Nicholas), SZE (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-3130:
-

Attachment: h3130_20120329b.patch

 Move FSDataset implementation to a package
 -

 Key: HDFS-3130
 URL: https://issues.apache.org/jira/browse/HDFS-3130
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: data-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h3130_20120328_svn_mv.patch, h3130_20120329b.patch, 
 h3130_20120329b.patch, h3130_20120329b_svn_mv.patch, svn_mv.sh, svn_mv.sh








[jira] [Updated] (HDFS-3130) Move FSDataset implementation to a package

2012-03-29 Thread Tsz Wo (Nicholas), SZE (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-3130:
-

Attachment: (was: h3130_20120329b.patch)

 Move FSDataset implementation to a package
 -

 Key: HDFS-3130
 URL: https://issues.apache.org/jira/browse/HDFS-3130
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: data-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h3130_20120328_svn_mv.patch, h3130_20120329b.patch, 
 h3130_20120329b_svn_mv.patch, svn_mv.sh, svn_mv.sh








[jira] [Updated] (HDFS-3130) Move FSDataset implementation to a package

2012-03-29 Thread Tsz Wo (Nicholas), SZE (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-3130:
-

Status: Patch Available  (was: Open)

 Move FSDataset implementation to a package
 -

 Key: HDFS-3130
 URL: https://issues.apache.org/jira/browse/HDFS-3130
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: data-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h3130_20120328_svn_mv.patch, h3130_20120329b.patch, 
 h3130_20120329b_svn_mv.patch, svn_mv.sh, svn_mv.sh








[jira] [Updated] (HDFS-3004) Implement Recovery Mode

2012-03-29 Thread Colin Patrick McCabe (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-3004:
---

Attachment: HDFS-3004.035.patch

* TestNameNodeRecovery: don't need data nodes for this test

* TestNameNodeRecovery: use set rather than array


 Implement Recovery Mode
 ---

 Key: HDFS-3004
 URL: https://issues.apache.org/jira/browse/HDFS-3004
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: tools
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-3004.010.patch, HDFS-3004.011.patch, 
 HDFS-3004.012.patch, HDFS-3004.013.patch, HDFS-3004.015.patch, 
 HDFS-3004.016.patch, HDFS-3004.017.patch, HDFS-3004.018.patch, 
 HDFS-3004.019.patch, HDFS-3004.020.patch, HDFS-3004.022.patch, 
 HDFS-3004.023.patch, HDFS-3004.024.patch, HDFS-3004.026.patch, 
 HDFS-3004.027.patch, HDFS-3004.029.patch, HDFS-3004.030.patch, 
 HDFS-3004.031.patch, HDFS-3004.032.patch, HDFS-3004.033.patch, 
 HDFS-3004.034.patch, HDFS-3004.035.patch, 
 HDFS-3004__namenode_recovery_tool.txt


 When the NameNode metadata is corrupt for some reason, we want to be able to 
 fix it.  Obviously, we would prefer never to get in this case.  In a perfect 
 world, we never would.  However, bad data on disk can happen from time to 
 time, because of hardware errors or misconfigurations.  In the past we have 
 had to correct it manually, which is time-consuming and which can result in 
 downtime.
 Recovery mode is initialized by the system administrator.  When the NameNode 
 starts up in Recovery Mode, it will try to load the FSImage file, apply all 
 the edits from the edits log, and then write out a new image.  Then it will 
 shut down.
 Unlike in the normal startup process, the recovery mode startup process will 
 be interactive.  When the NameNode finds something that is inconsistent, it 
 will prompt the operator as to what it should do.   The operator can also 
 choose to take the first option for all prompts by starting up with the '-f' 
 flag, or typing 'a' at one of the prompts.
 I have reused as much code as possible from the NameNode in this tool.  
 Hopefully, the effort that was spent developing this will also make the 
 NameNode editLog and image processing even more robust than it already is.
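The prompt behavior described above ('-f' takes the first option everywhere; 
answering 'a' switches to that mode mid-run) might be sketched like this. All 
class names, method names, and messages here are hypothetical, not the actual 
recovery-mode code.

```java
import java.util.Scanner;

public class RecoveryPromptSketch {
    private boolean alwaysFirst;

    RecoveryPromptSketch(boolean forceFlag) {
        // Starting with '-f' corresponds to forceFlag == true.
        this.alwaysFirst = forceFlag;
    }

    // Returns the index of the chosen option; option 0 is the first one.
    int prompt(Scanner in, String question) {
        if (alwaysFirst) {
            return 0; // take the first option without asking
        }
        System.out.println(question);
        String answer = in.nextLine().trim();
        if (answer.equals("a")) {
            alwaysFirst = true; // 'a' = first option, now and for all later prompts
            return 0;
        }
        return Integer.parseInt(answer);
    }

    public static void main(String[] args) {
        // Simulated operator session: the operator answers 'a' at the first
        // prompt, so the second inconsistency is resolved silently.
        Scanner in = new Scanner("a\n");
        RecoveryPromptSketch p = new RecoveryPromptSketch(false);
        System.out.println("first: " + p.prompt(in, "Corrupt edit found. [0] skip  [1] abort"));
        System.out.println("second: " + p.prompt(in, "Another inconsistency. [0] skip  [1] abort"));
    }
}
```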





[jira] [Commented] (HDFS-3137) Bump LAST_UPGRADABLE_LAYOUT_VERSION to -16

2012-03-29 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13242000#comment-13242000
 ] 

Hudson commented on HDFS-3137:
--

Integrated in Hadoop-Mapreduce-trunk-Commit #1966 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1966/])
Move HDFS-3137 to the right place in CHANGES.txt (Revision 1307174)

 Result = ABORTED
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1307174
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Bump LAST_UPGRADABLE_LAYOUT_VERSION to -16
 --

 Key: HDFS-3137
 URL: https://issues.apache.org/jira/browse/HDFS-3137
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Eli Collins
Assignee: Eli Collins
 Fix For: 2.0.0

 Attachments: hdfs-3137.txt, hdfs-3137.txt, hdfs-3137.txt







[jira] [Commented] (HDFS-3138) Move DatanodeInfo#ipcPort to DatanodeID

2012-03-29 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13242004#comment-13242004
 ] 

Hadoop QA commented on HDFS-3138:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12520535/hdfs-3138.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified tests.
Please justify why no new tests are needed for this patch.
Also please list what manual steps were performed to verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests:
  org.apache.hadoop.hdfs.TestHFlush

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2122//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2122//console

This message is automatically generated.

 Move DatanodeInfo#ipcPort to DatanodeID
 ---

 Key: HDFS-3138
 URL: https://issues.apache.org/jira/browse/HDFS-3138
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Eli Collins
Assignee: Eli Collins
 Attachments: hdfs-3138.txt, hdfs-3138.txt


 We can fix the following TODO once HDFS-3137 is committed.
 {code}
 //TODO: move it to DatanodeID once DatanodeID is not stored in FSImage
 out.writeShort(ipcPort);
 {code}





[jira] [Commented] (HDFS-3126) Journal stream from the namenode to backup needs to have a timeout

2012-03-29 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13242009#comment-13242009
 ] 

Hadoop QA commented on HDFS-3126:
-

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12520539/hdfs-3126.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2123//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2123//console

This message is automatically generated.

 Journal stream from the namenode to backup needs to have a timeout
 --

 Key: HDFS-3126
 URL: https://issues.apache.org/jira/browse/HDFS-3126
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha, name-node
Affects Versions: 0.24.0
Reporter: Hari Mankude
Assignee: Hari Mankude
 Attachments: hdfs-3126.patch, hdfs-3126.patch








[jira] [Commented] (HDFS-3130) Move FSDataset implementation to a package

2012-03-29 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13242015#comment-13242015
 ] 

Hadoop QA commented on HDFS-3130:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12520543/h3130_20120329b.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 54 new or modified tests.

-1 javadoc.  The javadoc tool appears to have generated 1 warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

-1 findbugs.  The patch appears to introduce 1 new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests:
  
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestDatanodeRestart

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2124//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2124//artifact/trunk/hadoop-hdfs-project/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2124//console

This message is automatically generated.

 Move FSDataset implementation to a package
 -

 Key: HDFS-3130
 URL: https://issues.apache.org/jira/browse/HDFS-3130
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: data-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h3130_20120328_svn_mv.patch, h3130_20120329b.patch, 
 h3130_20120329b_svn_mv.patch, svn_mv.sh, svn_mv.sh








[jira] [Commented] (HDFS-3130) Move FSDataset implementation to a package

2012-03-29 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13242016#comment-13242016
 ] 

Uma Maheswara Rao G commented on HDFS-3130:
---

I have not looked at the patch completely yet. I will update with the review 
details this afternoon.

 Move FSDataset implementation to a package
 -

 Key: HDFS-3130
 URL: https://issues.apache.org/jira/browse/HDFS-3130
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: data-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h3130_20120328_svn_mv.patch, h3130_20120329b.patch, 
 h3130_20120329b_svn_mv.patch, svn_mv.sh, svn_mv.sh








[jira] [Commented] (HDFS-3130) Move FSDataset implementation to a package

2012-03-29 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13242020#comment-13242020
 ] 

Uma Maheswara Rao G commented on HDFS-3130:
---

{quote}
Method 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getTmpInputStreams(ExtendedBlock,
 long, long) may fail to clean up java.io.InputStream
{quote}
The Findbugs warning seems to be related.
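The warning is about the failure path: if the method opens one stream and then 
throws before returning, that stream leaks. A common fix pattern, sketched here 
with hypothetical names standing in for the actual FsDatasetImpl code, closes 
the already-opened stream in a catch block:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class TmpInputStreamsSketch {
    // Opens a block stream and a meta stream; if anything fails after the
    // first open, the first stream is closed before the exception propagates,
    // which is the obligation Findbugs checks for.
    static InputStream[] getTmpInputStreams(byte[] blockData, byte[] metaData,
                                            long blkOffset) throws IOException {
        InputStream blockIn = new ByteArrayInputStream(blockData);
        try {
            if (blockIn.skip(blkOffset) != blkOffset) {
                throw new IOException("cannot seek to offset " + blkOffset);
            }
            InputStream metaIn = new ByteArrayInputStream(metaData);
            return new InputStream[] { blockIn, metaIn };
        } catch (IOException e) {
            blockIn.close(); // avoid leaking on the failure path
            throw e;
        }
    }

    public static void main(String[] args) throws IOException {
        InputStream[] streams = getTmpInputStreams(new byte[8], new byte[4], 2);
        System.out.println("block bytes remaining: " + streams[0].available());
        System.out.println("meta bytes remaining: " + streams[1].available());
    }
}
```

On success the caller owns both streams; only the failure path cleans up, 
matching the contract the warning describes.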

 Move FSDataset implementation to a package
 -

 Key: HDFS-3130
 URL: https://issues.apache.org/jira/browse/HDFS-3130
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: data-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h3130_20120328_svn_mv.patch, h3130_20120329b.patch, 
 h3130_20120329b_svn_mv.patch, svn_mv.sh, svn_mv.sh








[jira] [Commented] (HDFS-3004) Implement Recovery Mode

2012-03-29 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13242021#comment-13242021
 ] 

Hadoop QA commented on HDFS-3004:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12520544/HDFS-3004.035.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 21 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2125//console

This message is automatically generated.

 Implement Recovery Mode
 ---

 Key: HDFS-3004
 URL: https://issues.apache.org/jira/browse/HDFS-3004
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: tools
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-3004.010.patch, HDFS-3004.011.patch, 
 HDFS-3004.012.patch, HDFS-3004.013.patch, HDFS-3004.015.patch, 
 HDFS-3004.016.patch, HDFS-3004.017.patch, HDFS-3004.018.patch, 
 HDFS-3004.019.patch, HDFS-3004.020.patch, HDFS-3004.022.patch, 
 HDFS-3004.023.patch, HDFS-3004.024.patch, HDFS-3004.026.patch, 
 HDFS-3004.027.patch, HDFS-3004.029.patch, HDFS-3004.030.patch, 
 HDFS-3004.031.patch, HDFS-3004.032.patch, HDFS-3004.033.patch, 
 HDFS-3004.034.patch, HDFS-3004.035.patch, 
 HDFS-3004__namenode_recovery_tool.txt






