[jira] [Commented] (HDFS-3493) Replication is not happened for the block (which is recovered and in finalized) to the Datanode which has got the same block with old generation timestamp in RBW

2012-06-03 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13288357#comment-13288357
 ] 

Uma Maheswara Rao G commented on HDFS-3493:
---

Looking at HDFS-2290, this is the existing behaviour: until it replicates a 
new block, the NameNode won't delete the corrupt block. This is already 
asserted in some test cases in HDFS-2290.

See this test from HDFS-2290:

{code}
/**
 * The corrupt block has to be removed when the number of valid replicas
 * matches the replication factor for the file. In this test, the above
 * condition is achieved by increasing the number of good replicas by
 * replicating on a new Datanode.
 * The test strategy:
 *   Bring up a cluster with 3 DataNodes
 *   Create a file of replication factor 3
 *   Corrupt one replica of a block of the file
 *   Verify that there are still 2 good replicas and 1 corrupt replica
 *   (the corrupt replica should not be removed, since the number of good
 *    replicas (2) is less than the replication factor (3))
 *   Start a new DataNode
 *   Verify that a new replica is created and the corrupt replica is
 *   removed.
 */
@Test
public void testByAddingAnExtraDataNode() throws IOException {
{code}


The condition below will not allow the block to be invalidated, even if we 
corrupt one more replica:

{code}
node.addBlock(storedBlock);

// Add this replica to corruptReplicas Map
corruptReplicas.addToCorruptReplicasMap(storedBlock, node, reason);
if (countNodes(storedBlock).liveReplicas() >= bc.getReplication()) {
{code}

The replication factor is 3, there is 1 corrupt replica, and there are 2 live 
replicas, so the above condition is not satisfied; the block is just added to 
neededReplications. Since we don't have one more DN here, it will not be able 
to replicate either.

My concern is: if we corrupt one more replica, then with replication factor 3 
we have 2 corrupt replicas and 1 live replica. Now it can neither replicate 
nor invalidate, and we may end up running with one good replica even though 
we have 3 DNs in the cluster. In such a small cluster this is a risk. In 
bigger clusters this problem will not occur, because there are additional 
nodes and replication will happen successfully. The problem arises only when 
the cluster size equals the replication factor.
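To make the numbers concrete, here is a minimal, hypothetical sketch of the 
check (plain Java, not the actual BlockManager code; the variable names are 
illustrative):

{code}
// Hypothetical sketch of the invalidation decision discussed above.
// With replication factor 3 in a 3-DN cluster, corrupting one replica
// leaves the live-replica count permanently below the factor, so the
// invalidation branch can never be reached.
public class CorruptReplicaCheckSketch {
  public static void main(String[] args) {
    int replicationFactor = 3;
    int liveReplicas = 2;   // one of the three replicas is corrupt

    if (liveReplicas >= replicationFactor) {
      System.out.println("enough live replicas: invalidate the corrupt one");
    } else {
      // 2 < 3: the block is only queued in neededReplications, and with
      // no spare DN to replicate to, the corrupt replica is never removed.
      System.out.println("queue in neededReplications and wait");
    }
  }
}
{code}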

> Replication is not happened for the block (which is recovered and in 
> finalized) to the Datanode which has got the same block with old generation 
> timestamp in RBW
> -
>
> Key: HDFS-3493
> URL: https://issues.apache.org/jira/browse/HDFS-3493
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.1-alpha
>Reporter: J.Andreina
>

[jira] [Created] (HDFS-3493) Replication is not happened for the block (which is recovered and in finalized) to the Datanode which has got the same block with old generation timestamp in RBW

2012-06-03 Thread J.Andreina (JIRA)
J.Andreina created HDFS-3493:


 Summary: Replication is not happened for the block (which is 
recovered and in finalized) to the Datanode which has got the same block with 
old generation timestamp in RBW
 Key: HDFS-3493
 URL: https://issues.apache.org/jira/browse/HDFS-3493
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.1-alpha
Reporter: J.Andreina


Replication factor = 3, block report interval = 1 min; start the NN and 3 DNs.

Step 1: Write a file without closing it and call hflush (DN1, DN2, DN3 have blk_ts1)
Step 2: Stop DN3
Step 3: Block recovery happens and the generation timestamp is updated (blk_ts2)
Step 4: Close the file
Step 5: blk_ts2 is finalized and available on DN1 and DN2
Step 6: Now restart DN3 (which still has blk_ts1 in RBW)

From the NN side, no command is issued to DN3 to delete blk_ts1; DN3 is only 
asked to mark the block as corrupt.
Replication of blk_ts2 to DN3 does not happen.
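A rough reproduction sketch of these steps using Hadoop's MiniDFSCluster test 
utility (hedged: the test class name, write size, DN index, and the 
assertion-free flow are illustrative, not taken from an actual test):

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.Test;

public class TestOldGenstampReplicaRepro {
  @Test
  public void testRestartDnWithStaleRbwReplica() throws IOException {
    Configuration conf = new HdfsConfiguration();
    conf.setLong(DFSConfigKeys.DFS_BLOCKREPORT_INTERVAL_MSEC_KEY, 60 * 1000L);
    MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(conf).numDataNodes(3).build();
    try {
      FileSystem fs = cluster.getFileSystem();
      Path file = new Path("/file21");

      // Step 1: write without closing, then hflush, so DN1-DN3 each hold
      // an RBW replica with the original genstamp (blk_ts1).
      FSDataOutputStream out = fs.create(file, (short) 3);
      out.write(new byte[165]);
      out.hflush();

      // Step 2: stop DN3 while the block is still under construction.
      MiniDFSCluster.DataNodeProperties dn3 = cluster.stopDataNode(2);

      // Steps 3-5: closing the file runs pipeline recovery on DN1/DN2 and
      // finalizes the block with a bumped genstamp (blk_ts2).
      out.close();

      // Step 6: restart DN3, which still has blk_ts1 in RBW. Its block
      // report makes the NN mark that replica corrupt, but (per this
      // issue) blk_ts2 is never replicated back to DN3.
      cluster.restartDataNode(dn3);
    } finally {
      cluster.shutdown();
    }
  }
}
{code}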

NN logs:

{noformat}
INFO org.apache.hadoop.hdfs.StateChange: BLOCK 
NameSystem.addToCorruptReplicasMap: duplicate requested for 
blk_3927215081484173742 to add as corrupt on XX.XX.XX.XX:50276 by /XX.XX.XX.XX 
because reported RWR replica with genstamp 1007 does not match COMPLETE block's 
genstamp in block map 1008
INFO org.apache.hadoop.hdfs.StateChange: BLOCK* processReport: from 
DatanodeRegistration(XX.XX.XX.XX, 
storageID=DS-443871816-XX.XX.XX.XX-50276-1336829714197, infoPort=50275, 
ipcPort=50277, 
storageInfo=lv=-40;cid=CID-e654ac13-92dc-4f82-a22b-c0b6861d06d7;nsid=2063001898;c=0),
 blocks: 2, processing time: 1 msecs
INFO org.apache.hadoop.hdfs.StateChange: BLOCK* Removing block 
blk_3927215081484173742_1008 from neededReplications as it has enough replicas.

INFO org.apache.hadoop.hdfs.StateChange: BLOCK 
NameSystem.addToCorruptReplicasMap: duplicate requested for 
blk_3927215081484173742 to add as corrupt on XX.XX.XX.XX:50276 by /XX.XX.XX.XX 
because reported RWR replica with genstamp 1007 does not match COMPLETE block's 
genstamp in block map 1008
INFO org.apache.hadoop.hdfs.StateChange: BLOCK* processReport: from 
DatanodeRegistration(XX.XX.XX.XX, 
storageID=DS-443871816-XX.XX.XX.XX-50276-1336829714197, infoPort=50275, 
ipcPort=50277, 
storageInfo=lv=-40;cid=CID-e654ac13-92dc-4f82-a22b-c0b6861d06d7;nsid=2063001898;c=0),
 blocks: 2, processing time: 1 msecs
WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Not 
able to place enough replicas, still in need of 1 to reach 1
For more information, please enable DEBUG log level on 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
{noformat}

fsck Report
===
{noformat}
/file21:  Under replicated 
BP-1008469586-XX.XX.XX.XX-1336829603103:blk_3927215081484173742_1008. Target 
Replicas is 3 but found 2 replica(s).
.Status: HEALTHY
 Total size:495 B
 Total dirs:1
 Total files:   3
 Total blocks (validated):  3 (avg. block size 165 B)
 Minimally replicated blocks:   3 (100.0 %)
 Over-replicated blocks:0 (0.0 %)
 Under-replicated blocks:   1 (33.32 %)
 Mis-replicated blocks: 0 (0.0 %)
 Default replication factor:1
 Average block replication: 2.0
 Corrupt blocks:0
 Missing replicas:  1 (14.285714 %)
 Number of data-nodes:  3
 Number of racks:   1
FSCK ended at Sun May 13 09:49:05 IST 2012 in 9 milliseconds
The filesystem under path '/' is HEALTHY
{noformat}






[jira] [Commented] (HDFS-3492) fix some misuses of InputStream#skip

2012-06-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13288253#comment-13288253
 ] 

Hadoop QA commented on HDFS-3492:
-

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12530706/HDFS-3492.001.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 2 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-httpfs.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2578//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2578//console

This message is automatically generated.

> fix some misuses of InputStream#skip
> 
>
> Key: HDFS-3492
> URL: https://issues.apache.org/jira/browse/HDFS-3492
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-3492.001.patch
>
>
> It seems that we have a few cases where programmers are calling 
> InputStream#skip and not handling "short skips."  Unfortunately, the skip 
> method is documented and implemented so that it doesn't actually skip the 
> requested number of bytes, but simply tries to skip at most that many 
> bytes.  A better name probably would have been trySkip or similar.
> It seems that when the argument to skip is small enough, we'll succeed 
> almost all of the time.  This is no doubt an implementation artifact of 
> some of the popular stream implementations.  This tends to hide the bug-- 
> however, it is still waiting to emerge at some point if those 
> implementations ever change or if buffer sizes are adjusted, etc.
> All of these cases can be fixed by calling IOUtils#skipFully to get the 
> behavior that the programmer expects-- i.e., skipping by the specified amount.
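For illustration, a minimal sketch of the misuse and of a skipFully-style fix 
(this is not the attached patch; only org.apache.hadoop.io.IOUtils#skipFully 
is the real helper referred to above, and the loop below is a simplified 
stand-in for it):

{code}
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public class SkipFullySketch {
  // Buggy pattern: InputStream#skip may skip fewer than n bytes, and the
  // short skip goes unnoticed because the return value is ignored.
  static void skipNaive(InputStream in, long n) throws IOException {
    in.skip(n);
  }

  // skipFully-style loop: keep calling skip until n bytes are consumed,
  // or fail loudly. For simplicity this treats a non-positive return as
  // EOF; Hadoop's IOUtils#skipFully provides the real behavior.
  static void skipFully(InputStream in, long n) throws IOException {
    while (n > 0) {
      long skipped = in.skip(n);
      if (skipped <= 0) {
        throw new EOFException("Premature EOF: " + n + " bytes left to skip");
      }
      n -= skipped;
    }
  }
}
{code}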





[jira] [Updated] (HDFS-3492) fix some misuses of InputStream#skip

2012-06-03 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-3492:
---

Description: 
It seems that we have a few cases where programmers are calling 
InputStream#skip and not handling "short skips."  Unfortunately, the skip 
method is documented and implemented so that it doesn't actually skip the 
requested number of bytes, but simply tries to skip at most that many bytes.  
A better name probably would have been trySkip or similar.

It seems that when the argument to skip is small enough, we'll succeed almost 
all of the time.  This is no doubt an implementation artifact of some of the 
popular stream implementations.  This tends to hide the bug-- however, it is 
still waiting to emerge at some point if those implementations ever change or 
if buffer sizes are adjusted, etc.

All of these cases can be fixed by calling IOUtils#skipFully to get the 
behavior that the programmer expects-- i.e., skipping by the specified amount.
Environment: (was: It seems that we have a few cases where programmers 
are calling InputStream#skip and not handling "short skips."  Unfortunately, 
the skip method is documented and implemented so that it doesn't actually skip 
the requested number of bytes, but simply tries to skip at most that amount of 
bytes.  A better name probably would have been trySkip or similar.

It seems like most of the time when the argument to skip is small enough, we'll 
succeed almost all of the time.  This is no doubt an implementation artifact of 
some of the popular stream implementations.  This tends to hide the bug-- 
however, it is still waiting to emerge at some point if those implementations 
ever change or if buffer sizes are adjusted, etc.

All of these cases can be fixed by calling IOUtils#skipFully to get the 
behavior that the programmer expects-- i.e., skipping by the specified amount.)

> fix some misuses of InputStream#skip
> 
>
> Key: HDFS-3492
> URL: https://issues.apache.org/jira/browse/HDFS-3492
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-3492.001.patch





[jira] [Updated] (HDFS-3492) fix some misuses of InputStream#skip

2012-06-03 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-3492:
---

Attachment: HDFS-3492.001.patch

> fix some misuses of InputStream#skip
> 
>
> Key: HDFS-3492
> URL: https://issues.apache.org/jira/browse/HDFS-3492
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-3492.001.patch





[jira] [Updated] (HDFS-3492) fix some misuses of InputStream#skip

2012-06-03 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-3492:
---

Status: Patch Available  (was: Open)

> fix some misuses of InputStream#skip
> 
>
> Key: HDFS-3492
> URL: https://issues.apache.org/jira/browse/HDFS-3492
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-3492.001.patch





[jira] [Created] (HDFS-3492) fix some misuses of InputStream#skip

2012-06-03 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-3492:
--

 Summary: fix some misuses of InputStream#skip
 Key: HDFS-3492
 URL: https://issues.apache.org/jira/browse/HDFS-3492
 Project: Hadoop HDFS
  Issue Type: Bug
 Environment: It seems that we have a few cases where programmers are 
calling InputStream#skip and not handling "short skips."  Unfortunately, the 
skip method is documented and implemented so that it doesn't actually skip the 
requested number of bytes, but simply tries to skip at most that amount of 
bytes.  A better name probably would have been trySkip or similar.

It seems like most of the time when the argument to skip is small enough, we'll 
succeed almost all of the time.  This is no doubt an implementation artifact of 
some of the popular stream implementations.  This tends to hide the bug-- 
however, it is still waiting to emerge at some point if those implementations 
ever change or if buffer sizes are adjusted, etc.

All of these cases can be fixed by calling IOUtils#skipFully to get the 
behavior that the programmer expects-- i.e., skipping by the specified amount.
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor








[jira] [Commented] (HDFS-2797) HA: Add a test for the standby becoming active after a partial transaction is logged

2012-06-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13288236#comment-13288236
 ] 

Hadoop QA commented on HDFS-2797:
-

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12530702/HDFS-2797.002.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2577//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2577//console

This message is automatically generated.

> HA: Add a test for the standby becoming active after a partial transaction is 
> logged
> 
>
> Key: HDFS-2797
> URL: https://issues.apache.org/jira/browse/HDFS-2797
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: ha, name-node
>Affects Versions: 0.24.0
>Reporter: Aaron T. Myers
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-2797.001.patch, HDFS-2797.002.patch
>
>
> The existing failover tests only cover the case of graceful shutdown. We 
> should make sure the standby NN can become active even when the final 
> transaction in an edit log is only partially written to disk.
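One hypothetical way to simulate such a partially written final transaction 
(a hedged sketch, not taken from the attached patches: the helper name and 
byte count are made up) is to chop a few bytes off the tail of an edits file 
before the standby NN replays it:

{code}
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

public class TruncateEditsSketch {
  // Simulate a partially written final transaction by truncating the
  // tail of an edit log segment before the standby NN reads it.
  static void chopTail(File editsFile, long bytesToChop) throws IOException {
    try (RandomAccessFile raf = new RandomAccessFile(editsFile, "rw")) {
      raf.setLength(Math.max(0, raf.length() - bytesToChop));
    }
  }
}
{code}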





[jira] [Commented] (HDFS-2797) HA: Add a test for the standby becoming active after a partial transaction is logged

2012-06-03 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13288231#comment-13288231
 ] 

Colin Patrick McCabe commented on HDFS-2797:


Thanks, atm.  On closer examination, there's already IOUtils#skipFully, which 
does exactly what I want.

> HA: Add a test for the standby becoming active after a partial transaction is 
> logged
> 
>
> Key: HDFS-2797
> URL: https://issues.apache.org/jira/browse/HDFS-2797
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: ha, name-node
>Affects Versions: 0.24.0
>Reporter: Aaron T. Myers
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-2797.001.patch, HDFS-2797.002.patch





[jira] [Updated] (HDFS-2797) HA: Add a test for the standby becoming active after a partial transaction is logged

2012-06-03 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-2797:
---

Attachment: HDFS-2797.002.patch

* use IOUtils#skipFully

> HA: Add a test for the standby becoming active after a partial transaction is 
> logged
> 
>
> Key: HDFS-2797
> URL: https://issues.apache.org/jira/browse/HDFS-2797
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: ha, name-node
>Affects Versions: 0.24.0
>Reporter: Aaron T. Myers
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-2797.001.patch, HDFS-2797.002.patch





[jira] [Commented] (HDFS-2025) Go Back to File View link is not working in tail.jsp

2012-06-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13288161#comment-13288161
 ] 

Hudson commented on HDFS-2025:
--

Integrated in Hadoop-Mapreduce-trunk #1099 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1099/])
HDFS-2025. Go Back to File View link is not working in tail.jsp. 
Contributed by Ashish and Sravan. (Revision 1345563)

 Result = FAILURE
umamahesh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1345563
Files : 
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DatanodeJspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDatanodeJsp.java


> Go Back to File View link is not working in tail.jsp
> 
>
> Key: HDFS-2025
> URL: https://issues.apache.org/jira/browse/HDFS-2025
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 0.23.0
>Reporter: sravankorumilli
>Assignee: Ashish Singhi
>Priority: Minor
> Fix For: 2.0.1-alpha, 3.0.0
>
> Attachments: HDFS-2025.patch, HDFS-2025_1.patch, HDFS-2025_2.patch, 
> HDFS-2025_3.patch, HDFS-2025_4.patch, HDFS-2025_5.patch, ScreenShot_1.jpg
>
>
> While browsing the file system, click on any file link to go to the page 
> where the file contents are displayed, then click on the '*Tail this file*' 
> link. The control goes to tail.jsp; there, when we click on the '*Go Back 
> to File View*' option, an HTTP "page not found" error comes up.
> This is because the referrer URL is encoded, and the encoded URL is itself 
> used in the '*Go Back to File View*' hyperlink, where it is treated as a 
> relative URL, so the HTTP request fails.
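A hedged illustration of the failure mode (the URL value and class name are 
hypothetical; the actual fix lives in DatanodeJspHelper.java):

{code}
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;

public class BackLinkSketch {
  public static void main(String[] args) throws UnsupportedEncodingException {
    // The referrer arrives percent-encoded from the previous page.
    String referrer = "http%3A%2F%2Fdn-host%3A50075%2FbrowseBlock.jsp";

    // Buggy: embedding the encoded value as an href yields a link that
    // does not start with "http://", so the browser resolves it relative
    // to tail.jsp and the request 404s.
    String buggyHref = referrer;

    // Decoding first restores an absolute URL for the back link.
    String fixedHref = URLDecoder.decode(referrer, "UTF-8");

    System.out.println(buggyHref + "  ->  " + fixedHref);
  }
}
{code}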





[jira] [Commented] (HDFS-2025) Go Back to File View link is not working in tail.jsp

2012-06-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13288147#comment-13288147
 ] 

Hudson commented on HDFS-2025:
--

Integrated in Hadoop-Hdfs-trunk #1065 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1065/])
HDFS-2025. Go Back to File View link is not working in tail.jsp. 
Contributed by Ashish and Sravan. (Revision 1345563)

 Result = FAILURE
umamahesh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1345563
Files : 
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DatanodeJspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDatanodeJsp.java


> Go Back to File View link is not working in tail.jsp
> 
>
> Key: HDFS-2025
> URL: https://issues.apache.org/jira/browse/HDFS-2025
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 0.23.0
>Reporter: sravankorumilli
>Assignee: Ashish Singhi
>Priority: Minor
> Fix For: 2.0.1-alpha, 3.0.0
>
> Attachments: HDFS-2025.patch, HDFS-2025_1.patch, HDFS-2025_2.patch, 
> HDFS-2025_3.patch, HDFS-2025_4.patch, HDFS-2025_5.patch, ScreenShot_1.jpg
