[jira] [Commented] (HDFS-5525) Inline dust templates

2013-11-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830639#comment-13830639
 ] 

Hudson commented on HDFS-5525:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #400 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/400/])
Move HDFS-5444 and HDFS-5525 to branch 2.3.0 section in CHANGES.txt (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1544631)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Inline dust templates
 -

 Key: HDFS-5525
 URL: https://issues.apache.org/jira/browse/HDFS-5525
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 3.0.0, 2.3.0

 Attachments: HDFS-5525.000.patch, HDFS-5525.000.patch, screenshot.png


 Currently the dust templates are stored as separate files on the server side. 
 The web UI has to make separate HTTP requests to load the templates, which 
 increases network overhead and page load latency.
 This jira proposes to inline all dust templates into the main HTML file, so 
 that the page can load faster.
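
A minimal sketch of what inlining might look like, assuming a template that was previously fetched over HTTP as a separate file; the element id and template body below are illustrative, not taken from the actual patch:

```html
<!-- The template ships inside the main HTML page itself, inside a
     script tag the browser will not execute, so no extra request
     is needed to fetch it. -->
<script type="text/x-dust-template" id="tmpl-dn">
  <table>
    {#DataNodes}
    <tr><td>{name}</td><td>{usedSpace}</td></tr>
    {/DataNodes}
  </table>
</script>
<script>
  // Compile the inlined template from the DOM instead of fetching it
  // with an extra HTTP request.
  var source = document.getElementById('tmpl-dn').innerHTML;
  dust.loadSource(dust.compile(source, 'dn'));
</script>
```

`dust.compile` and `dust.loadSource` are the standard dust.js registration calls; the surrounding page structure is an assumption.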



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5444) Choose default web UI based on browser capabilities

2013-11-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830638#comment-13830638
 ] 

Hudson commented on HDFS-5444:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #400 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/400/])
Move HDFS-5444 and HDFS-5525 to branch 2.3.0 section in CHANGES.txt (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1544631)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Choose default web UI based on browser capabilities
 ---

 Key: HDFS-5444
 URL: https://issues.apache.org/jira/browse/HDFS-5444
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 3.0.0, 2.3.0

 Attachments: HDFS-5444.000.patch, HDFS-5444.000.patch, 
 HDFS-5444.001.patch, Screenshot-new.png, Screenshot-old.png


 This jira changes the entry point of the web UI so that modern browsers with 
 JavaScript support are redirected to the new web UI, while other browsers 
 automatically fall back to the old JSP-based UI.
 It also adds hyperlinks in both UIs to facilitate testing and evaluation.





[jira] [Commented] (HDFS-5544) Adding Test case For Checking dfs.checksum type as NULL value

2013-11-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830636#comment-13830636
 ] 

Hudson commented on HDFS-5544:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #400 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/400/])
HDFS-5544. Adding Test case For Checking dfs.checksum.type as NULL value. 
Contributed by Sathish. (umamahesh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1544596)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFSOutputSummer.java


 Adding Test case For Checking dfs.checksum type as NULL value
 -

 Key: HDFS-5544
 URL: https://issues.apache.org/jira/browse/HDFS-5544
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.1.0-beta
 Environment: HDFS-TEST
Reporter: sathish
Assignee: sathish
Priority: Minor
 Fix For: 3.0.0, 2.3.0, 2.2.1

 Attachments: HDFS-5544.patch


 Per HADOOP-9114 (https://issues.apache.org/jira/browse/HADOOP-9114), it is 
 better to add a unit test case that checks dfs.checksum.type set to NULL.
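
A self-contained sketch of the kind of check such a test exercises, modeling how a configured checksum-type string such as "NULL" might be parsed. The enum and parse method are simplified stand-ins for illustration, not Hadoop's actual DataChecksum API:

```java
// Sketch: parsing a configured checksum type, where "NULL" means
// "no checksum". Names model Hadoop's DataChecksum.Type but are
// simplified stand-ins, not the real API.
public class ChecksumTypeSketch {
    public enum Type { NULL, CRC32, CRC32C }

    // Parse the value of a config key such as dfs.checksum.type,
    // falling back to a default when the key is unset (the default
    // chosen here is an assumption).
    public static Type parse(String configured) {
        if (configured == null) {
            return Type.CRC32C; // assumed default
        }
        return Type.valueOf(configured.trim().toUpperCase());
    }

    public static void main(String[] args) {
        // A test like the one proposed would assert that writes
        // still succeed when the type is NULL (checksums disabled).
        System.out.println(parse("NULL"));   // NULL
        System.out.println(parse(null));     // CRC32C
    }
}
```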





[jira] [Commented] (HDFS-5552) Fix wrong information of Cluster summay in dfshealth.html

2013-11-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830653#comment-13830653
 ] 

Hudson commented on HDFS-5552:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1591 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1591/])
HDFS-5552. Fix wrong information of Cluster summay in dfshealth.html. 
Contributed by Haohui Mai. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1544627)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/dust-helpers-1.1.1.min.js


 Fix wrong information of Cluster summay in dfshealth.html
 ---

 Key: HDFS-5552
 URL: https://issues.apache.org/jira/browse/HDFS-5552
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Shinichi Yamashita
Assignee: Haohui Mai
 Fix For: 2.3.0

 Attachments: HDFS-5552.000.patch, dfshealth-html.png


 The cluster summary should show files and directories + blocks = total 
 filesystem objects, but the wrong value is displayed.





[jira] [Commented] (HDFS-5544) Adding Test case For Checking dfs.checksum type as NULL value

2013-11-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830652#comment-13830652
 ] 

Hudson commented on HDFS-5544:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1591 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1591/])
HDFS-5544. Adding Test case For Checking dfs.checksum.type as NULL value. 
Contributed by Sathish. (umamahesh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1544596)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFSOutputSummer.java


 Adding Test case For Checking dfs.checksum type as NULL value
 -

 Key: HDFS-5544
 URL: https://issues.apache.org/jira/browse/HDFS-5544
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.1.0-beta
 Environment: HDFS-TEST
Reporter: sathish
Assignee: sathish
Priority: Minor
 Fix For: 3.0.0, 2.3.0, 2.2.1

 Attachments: HDFS-5544.patch


 Per HADOOP-9114 (https://issues.apache.org/jira/browse/HADOOP-9114), it is 
 better to add a unit test case that checks dfs.checksum.type set to NULL.





[jira] [Commented] (HDFS-5525) Inline dust templates

2013-11-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830655#comment-13830655
 ] 

Hudson commented on HDFS-5525:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1591 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1591/])
Move HDFS-5444 and HDFS-5525 to branch 2.3.0 section in CHANGES.txt (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1544631)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Inline dust templates
 -

 Key: HDFS-5525
 URL: https://issues.apache.org/jira/browse/HDFS-5525
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 3.0.0, 2.3.0

 Attachments: HDFS-5525.000.patch, HDFS-5525.000.patch, screenshot.png


 Currently the dust templates are stored as separate files on the server side. 
 The web UI has to make separate HTTP requests to load the templates, which 
 increases network overhead and page load latency.
 This jira proposes to inline all dust templates into the main HTML file, so 
 that the page can load faster.





[jira] [Commented] (HDFS-5444) Choose default web UI based on browser capabilities

2013-11-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830654#comment-13830654
 ] 

Hudson commented on HDFS-5444:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1591 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1591/])
Move HDFS-5444 and HDFS-5525 to branch 2.3.0 section in CHANGES.txt (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1544631)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Choose default web UI based on browser capabilities
 ---

 Key: HDFS-5444
 URL: https://issues.apache.org/jira/browse/HDFS-5444
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 3.0.0, 2.3.0

 Attachments: HDFS-5444.000.patch, HDFS-5444.000.patch, 
 HDFS-5444.001.patch, Screenshot-new.png, Screenshot-old.png


 This jira changes the entry point of the web UI so that modern browsers with 
 JavaScript support are redirected to the new web UI, while other browsers 
 automatically fall back to the old JSP-based UI.
 It also adds hyperlinks in both UIs to facilitate testing and evaluation.





[jira] [Commented] (HDFS-5444) Choose default web UI based on browser capabilities

2013-11-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830658#comment-13830658
 ] 

Hudson commented on HDFS-5444:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1617 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1617/])
Move HDFS-5444 and HDFS-5525 to branch 2.3.0 section in CHANGES.txt (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1544631)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Choose default web UI based on browser capabilities
 ---

 Key: HDFS-5444
 URL: https://issues.apache.org/jira/browse/HDFS-5444
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 3.0.0, 2.3.0

 Attachments: HDFS-5444.000.patch, HDFS-5444.000.patch, 
 HDFS-5444.001.patch, Screenshot-new.png, Screenshot-old.png


 This jira changes the entry point of the web UI so that modern browsers with 
 JavaScript support are redirected to the new web UI, while other browsers 
 automatically fall back to the old JSP-based UI.
 It also adds hyperlinks in both UIs to facilitate testing and evaluation.





[jira] [Commented] (HDFS-5544) Adding Test case For Checking dfs.checksum type as NULL value

2013-11-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830656#comment-13830656
 ] 

Hudson commented on HDFS-5544:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1617 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1617/])
HDFS-5544. Adding Test case For Checking dfs.checksum.type as NULL value. 
Contributed by Sathish. (umamahesh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1544596)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFSOutputSummer.java


 Adding Test case For Checking dfs.checksum type as NULL value
 -

 Key: HDFS-5544
 URL: https://issues.apache.org/jira/browse/HDFS-5544
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.1.0-beta
 Environment: HDFS-TEST
Reporter: sathish
Assignee: sathish
Priority: Minor
 Fix For: 3.0.0, 2.3.0, 2.2.1

 Attachments: HDFS-5544.patch


 Per HADOOP-9114 (https://issues.apache.org/jira/browse/HADOOP-9114), it is 
 better to add a unit test case that checks dfs.checksum.type set to NULL.





[jira] [Commented] (HDFS-5552) Fix wrong information of Cluster summay in dfshealth.html

2013-11-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830657#comment-13830657
 ] 

Hudson commented on HDFS-5552:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1617 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1617/])
HDFS-5552. Fix wrong information of Cluster summay in dfshealth.html. 
Contributed by Haohui Mai. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1544627)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/dust-helpers-1.1.1.min.js


 Fix wrong information of Cluster summay in dfshealth.html
 ---

 Key: HDFS-5552
 URL: https://issues.apache.org/jira/browse/HDFS-5552
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Shinichi Yamashita
Assignee: Haohui Mai
 Fix For: 2.3.0

 Attachments: HDFS-5552.000.patch, dfshealth-html.png


 The cluster summary should show files and directories + blocks = total 
 filesystem objects, but the wrong value is displayed.





[jira] [Commented] (HDFS-5525) Inline dust templates

2013-11-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830659#comment-13830659
 ] 

Hudson commented on HDFS-5525:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1617 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1617/])
Move HDFS-5444 and HDFS-5525 to branch 2.3.0 section in CHANGES.txt (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1544631)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Inline dust templates
 -

 Key: HDFS-5525
 URL: https://issues.apache.org/jira/browse/HDFS-5525
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 3.0.0, 2.3.0

 Attachments: HDFS-5525.000.patch, HDFS-5525.000.patch, screenshot.png


 Currently the dust templates are stored as separate files on the server side. 
 The web UI has to make separate HTTP requests to load the templates, which 
 increases network overhead and page load latency.
 This jira proposes to inline all dust templates into the main HTML file, so 
 that the page can load faster.





[jira] [Commented] (HDFS-5557) Write pipeline recovery for the last packet in the block may cause rejection of valid replicas

2013-11-23 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830682#comment-13830682
 ] 

Vinay commented on HDFS-5557:
-

Great finding, Kihwal.
The patch looks quite good, and the test fails without the actual fix.
I have the following comments:

{code}
DFSTestUtil.createFile(fileSys, file, 6800L, (short)numDataNodes, 0L);
{code}
Why exactly did you use *6800L* here? Did you want to write more than one 
block? If so, you might have to set the block size to 64MB; the default is 
now 128MB in trunk.

bq. If the last block is completed, but the penultimate block is not because of 
this issue, the file won't be closed.
It would be better to add a test for this too. You can reproduce it by failing 
the last packet of only the penultimate block. For that you might need to 
change the Mockito statement in the test, plus one more line in 
DFSOutputStream.java, to the following:
{code}Mockito.when(faultInjector.failPacket()).thenReturn(true, false);{code}
and
{code}if (isLastPacketInBlock && DFSClientFaultInjector.get().failPacket()){code}


I also looked at the patch for HDFS-5558. I think it will not solve the issue 
mentioned there, i.e. the crash of the LeaseManager monitor thread, because 
that fix sits in the flow of the client's completeFile() call, not in lease 
recovery. That change might instead be required in this issue, to block the 
client from committing the last block.

 Write pipeline recovery for the last packet in the block may cause rejection 
 of valid replicas
 --

 Key: HDFS-5557
 URL: https://issues.apache.org/jira/browse/HDFS-5557
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.23.9, 2.3.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Critical
 Attachments: HDFS-5557.patch, HDFS-5557.patch


 When a block is reported from a data node while the block is under 
 construction (i.e. not committed or completed), BlockManager calls 
 BlockInfoUnderConstruction.addReplicaIfNotPresent() to update the reported 
 replica state. But BlockManager is calling it with the stored block, not the 
 reported block. This causes the recorded replicas' gen stamp to be that of 
 BlockInfoUnderConstruction itself, not the one from the reported replica.
 When a pipeline recovery is done for the last packet of a block, the 
 incremental block reports with the new gen stamp may come before the client 
 calls updatePipeline(). If this happens, these replicas will be incorrectly 
 recorded with the old gen stamp and get removed later. The result is a close 
 or addAdditionalBlock failure.
 If the last block is completed, but the penultimate block is not because of 
 this issue, the file won't be closed. If this file is not cleared but the 
 client goes away, the lease manager will try to recover the lease/block, at 
 which point it will crash. I will file a separate jira for this shortly.
 The worst case is rejecting all the good replicas and accepting a bad one. In 
 that case, the block will get completed, but the data cannot be read until 
 the next full block report containing one of the valid replicas is received.
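
The core of the bug can be sketched in isolation: if the replica is recorded with the stored block's generation stamp instead of the reported one, a replica that arrived with a newer stamp after pipeline recovery looks stale later. This is a simplified model; the classes below are illustrative stand-ins, not the actual BlockInfoUnderConstruction code:

```java
// Simplified model of recording a reported replica during pipeline
// recovery. The real logic lives in BlockInfoUnderConstruction;
// these classes are illustrative stand-ins only.
public class GenStampSketch {
    public static class Block {
        public final long genStamp;
        public Block(long gs) { genStamp = gs; }
    }

    // Buggy behavior: record the replica with the *stored* block's
    // gen stamp, ignoring the stamp the datanode actually reported.
    public static long recordBuggy(Block stored, Block reported) {
        return stored.genStamp;
    }

    // Fixed behavior: keep the reported replica's gen stamp, so a
    // replica reported after pipeline recovery is not judged stale
    // and removed when the client later calls updatePipeline().
    public static long recordFixed(Block stored, Block reported) {
        return reported.genStamp;
    }

    public static void main(String[] args) {
        Block stored = new Block(1001);   // stamp before recovery
        Block reported = new Block(1002); // new stamp after recovery
        System.out.println(recordBuggy(stored, reported)); // 1001
        System.out.println(recordFixed(stored, reported)); // 1002
    }
}
```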





[jira] [Commented] (HDFS-5557) Write pipeline recovery for the last packet in the block may cause rejection of valid replicas

2013-11-23 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830691#comment-13830691
 ] 

Vinay commented on HDFS-5557:
-

bq. I think that will not solve the issue mentioned there i.e. crashing of 
LeaseManager monitor thread. that fix actually comes in flow of client's 
completeFile() call, not from lease recovery. That change might be required in 
this issue, to block the client committing the last block.
Oops, sorry for not completely understanding the problem. The last block can 
be complete without a completed penultimate block only if the client calls 
completeFile() and it fails.


 Write pipeline recovery for the last packet in the block may cause rejection 
 of valid replicas
 --

 Key: HDFS-5557
 URL: https://issues.apache.org/jira/browse/HDFS-5557
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.23.9, 2.3.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Critical
 Attachments: HDFS-5557.patch, HDFS-5557.patch


 When a block is reported from a data node while the block is under 
 construction (i.e. not committed or completed), BlockManager calls 
 BlockInfoUnderConstruction.addReplicaIfNotPresent() to update the reported 
 replica state. But BlockManager is calling it with the stored block, not the 
 reported block. This causes the recorded replicas' gen stamp to be that of 
 BlockInfoUnderConstruction itself, not the one from the reported replica.
 When a pipeline recovery is done for the last packet of a block, the 
 incremental block reports with the new gen stamp may come before the client 
 calls updatePipeline(). If this happens, these replicas will be incorrectly 
 recorded with the old gen stamp and get removed later. The result is a close 
 or addAdditionalBlock failure.
 If the last block is completed, but the penultimate block is not because of 
 this issue, the file won't be closed. If this file is not cleared but the 
 client goes away, the lease manager will try to recover the lease/block, at 
 which point it will crash. I will file a separate jira for this shortly.
 The worst case is rejecting all the good replicas and accepting a bad one. In 
 that case, the block will get completed, but the data cannot be read until 
 the next full block report containing one of the valid replicas is received.





[jira] [Commented] (HDFS-5558) LeaseManager monitor thread can crash if the last block is complete but another block is not.

2013-11-23 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830694#comment-13830694
 ] 

Vinay commented on HDFS-5558:
-

I tried to reproduce the problem with the help of the test changes in 
HDFS-5557, but could not get the invalid cast exception with trunk code. 
Instead, {color:red}*{{checkLeases()}} got stuck in an infinite loop with the 
fsn write lock held*{color}, because checkLeases() keeps re-checking the files 
until all of them are renewed. It would be better to collect all the expired 
leases, check them once, and return, checking again after 
NAMENODE_LEASE_RECHECK_INTERVAL. If required, that could be a separate jira, 
though.
As of now the only case I can see leading to this is HDFS-5557.
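
The restructuring suggested above can be sketched abstractly: instead of re-polling inside checkLeases() until every expired lease is gone (which can spin forever under the write lock), snapshot the currently expired set, process it once, and let the monitor re-enter after the recheck interval. The names here are illustrative stand-ins, not the actual LeaseManager code:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Simplified model of the suggested checkLeases() restructuring:
// one bounded pass per invocation, never an unbounded loop.
public class LeaseCheckSketch {
    // Returns how many leases were processed in this single pass.
    // The expired set is snapshotted up front, so a lease that is
    // still expired after a failed recovery attempt is not retried
    // immediately; it waits for the next recheck-interval tick.
    public static int checkLeasesOnce(Queue<String> expiredLeases) {
        int processed = 0;
        int toProcess = expiredLeases.size(); // snapshot the set
        for (int i = 0; i < toProcess; i++) {
            String lease = expiredLeases.poll();
            // ... recovery would be attempted here; a lease that
            // fails recovery would be re-queued and revisited only
            // on the next pass, keeping this pass bounded ...
            processed++;
        }
        return processed;
    }

    public static void main(String[] args) {
        Queue<String> expired = new ArrayDeque<>();
        expired.add("/file1");
        expired.add("/file2");
        System.out.println(checkLeasesOnce(expired)); // 2
        System.out.println(expired.isEmpty());        // true
    }
}
```

The key property is that each call does a bounded amount of work under the lock, so the monitor thread releases the write lock between passes.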

 LeaseManager monitor thread can crash if the last block is complete but 
 another block is not.
 -

 Key: HDFS-5558
 URL: https://issues.apache.org/jira/browse/HDFS-5558
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.23.9, 2.3.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: HDFS-5558.branch-023.patch, HDFS-5558.patch


 As mentioned in HDFS-5557, if a file is being closed while its last and 
 penultimate blocks are not completed, the last block may become completed 
 while the penultimate one does not. If this condition lasts long and the file 
 is abandoned, LeaseManager will try to recover the lease and the block, but 
 {{internalReleaseLease()}} will fail with an invalid cast exception for this 
 kind of file.





[jira] [Commented] (HDFS-5557) Write pipeline recovery for the last packet in the block may cause rejection of valid replicas

2013-11-23 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830738#comment-13830738
 ] 

Kihwal Lee commented on HDFS-5557:
--

bq. Here why you exactly used 6800L ? wanted to write more than one block? 
If yes then you might have to set the block size to 64MB, default is 128M now 
in trunk.

It doesn't have to write more than one block; I tried that, and the problem 
reproduces without writing more than one block. Writing multiple packets is 
still better, though. The number itself is arbitrary.

 Write pipeline recovery for the last packet in the block may cause rejection 
 of valid replicas
 --

 Key: HDFS-5557
 URL: https://issues.apache.org/jira/browse/HDFS-5557
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.23.9, 2.3.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Critical
 Attachments: HDFS-5557.patch, HDFS-5557.patch


 When a block is reported from a data node while the block is under 
 construction (i.e. not committed or completed), BlockManager calls 
 BlockInfoUnderConstruction.addReplicaIfNotPresent() to update the reported 
 replica state. But BlockManager is calling it with the stored block, not the 
 reported block. This causes the recorded replicas' gen stamp to be that of 
 BlockInfoUnderConstruction itself, not the one from the reported replica.
 When a pipeline recovery is done for the last packet of a block, the 
 incremental block reports with the new gen stamp may come before the client 
 calls updatePipeline(). If this happens, these replicas will be incorrectly 
 recorded with the old gen stamp and get removed later. The result is a close 
 or addAdditionalBlock failure.
 If the last block is completed, but the penultimate block is not because of 
 this issue, the file won't be closed. If this file is not cleared but the 
 client goes away, the lease manager will try to recover the lease/block, at 
 which point it will crash. I will file a separate jira for this shortly.
 The worst case is rejecting all the good replicas and accepting a bad one. In 
 that case, the block will get completed, but the data cannot be read until 
 the next full block report containing one of the valid replicas is received.





[jira] [Updated] (HDFS-2832) Enable support for heterogeneous storages in HDFS

2013-11-23 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-2832:
-

Attachment: h2832_20131123.patch

The previous patch was not generated correctly -- it did not include HDFS-5559.

h2832_20131123.patch

 Enable support for heterogeneous storages in HDFS
 -

 Key: HDFS-2832
 URL: https://issues.apache.org/jira/browse/HDFS-2832
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 0.24.0
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: 20130813-HeterogeneousStorage.pdf, H2832_20131107.patch, 
 editsStored, h2832_20131023.patch, h2832_20131023b.patch, 
 h2832_20131025.patch, h2832_20131028.patch, h2832_20131028b.patch, 
 h2832_20131029.patch, h2832_20131103.patch, h2832_20131104.patch, 
 h2832_20131105.patch, h2832_20131107b.patch, h2832_20131108.patch, 
 h2832_20131110.patch, h2832_20131110b.patch, h2832_2013.patch, 
 h2832_20131112.patch, h2832_20131112b.patch, h2832_20131114.patch, 
 h2832_20131118.patch, h2832_20131119.patch, h2832_20131119b.patch, 
 h2832_20131121.patch, h2832_20131122.patch, h2832_20131122b.patch, 
 h2832_20131123.patch


 HDFS currently supports a configuration where storages are a list of 
 directories. Typically each of these directories corresponds to a volume with 
 its own file system. All these directories are homogeneous and therefore 
 identified as a single storage at the namenode. I propose changing the 
 current model, where a Datanode *is a* storage, to one where a Datanode *is a 
 collection of* storages.
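
One way the proposal could eventually surface to operators is in how data directories are declared. The storage-type prefix syntax below is an assumption about the feature's eventual shape, not something taken from this patch:

```xml
<!-- Hypothetical hdfs-site.xml fragment: each directory is its own
     storage, optionally tagged with a type, rather than all
     directories collapsing into one storage at the namenode. -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>[DISK]/grid/0/data,[DISK]/grid/1/data,[SSD]/grid/ssd0/data</value>
</property>
```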





[jira] [Commented] (HDFS-2832) Enable support for heterogeneous storages in HDFS

2013-11-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13830788#comment-13830788
 ] 

Hadoop QA commented on HDFS-2832:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615471/h2832_20131123.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 50 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 
release audit warning.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core:

  org.apache.hadoop.hdfs.server.namenode.TestCheckpoint
  
org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5556//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5556//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5556//console

This message is automatically generated.

 Enable support for heterogeneous storages in HDFS
 -

 Key: HDFS-2832
 URL: https://issues.apache.org/jira/browse/HDFS-2832
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 0.24.0
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: 20130813-HeterogeneousStorage.pdf, H2832_20131107.patch, 
 editsStored, h2832_20131023.patch, h2832_20131023b.patch, 
 h2832_20131025.patch, h2832_20131028.patch, h2832_20131028b.patch, 
 h2832_20131029.patch, h2832_20131103.patch, h2832_20131104.patch, 
 h2832_20131105.patch, h2832_20131107b.patch, h2832_20131108.patch, 
 h2832_20131110.patch, h2832_20131110b.patch, h2832_2013.patch, 
 h2832_20131112.patch, h2832_20131112b.patch, h2832_20131114.patch, 
 h2832_20131118.patch, h2832_20131119.patch, h2832_20131119b.patch, 
 h2832_20131121.patch, h2832_20131122.patch, h2832_20131122b.patch, 
 h2832_20131123.patch


 HDFS currently supports a configuration where storages are a list of 
 directories. Typically each of these directories corresponds to a volume with 
 its own file system. All these directories are homogeneous and therefore 
 identified as a single storage at the namenode. I propose changing the 
 current model, where a Datanode *is a* storage, to one where a Datanode *is a 
 collection of* storages.





[jira] [Resolved] (HDFS-5559) Fix TestDatanodeConfig in HDFS-2832

2013-11-23 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDFS-5559.
-

   Resolution: Fixed
Fix Version/s: Heterogeneous Storage (HDFS-2832)
 Hadoop Flags: Reviewed

+1 for the patch, I committed it to branch HDFS-2832. Thanks Nicholas!

 Fix TestDatanodeConfig in HDFS-2832
 ---

 Key: HDFS-5559
 URL: https://issues.apache.org/jira/browse/HDFS-5559
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor
 Fix For: Heterogeneous Storage (HDFS-2832)

 Attachments: h5559_20131122.patch


 HDFS-5542 breaks this test.


