[jira] [Commented] (HDFS-7606) Missing null check in INodeFile#getBlocks()

2015-01-14 Thread Chen He (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14278383#comment-14278383
 ] 

Chen He commented on HDFS-7606:
---

To reproduce the NPE mentioned in this JIRA, in {{INodeFile.getBlocks()}} we 
just need to satisfy {{diff == null}} and {{snapshotBlocks == null}} before line 
435 starts to execute. Then {{diff.getSnapshotId()}} will throw the NPE.
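
For illustration, the missing guard amounts to something like the following (a 
minimal sketch mirroring the snippet quoted below, not the attached patch):

{code}
BlockInfo[] snapshotBlocks = diff == null ? getBlocks() : diff.getBlocks();
if (snapshotBlocks != null)
  return snapshotBlocks;
// diff may be null at this point; fall back to the current file blocks
// instead of dereferencing it, which is what would trigger the NPE.
if (diff == null)
  return getBlocks();
// Blocks are not in the current snapshot.
// Find next snapshot with blocks present or return current file blocks.
snapshotBlocks = getDiffs().findLaterSnapshotBlocks(diff.getSnapshotId());
{code}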

> Missing null check in INodeFile#getBlocks()
> ---
>
> Key: HDFS-7606
> URL: https://issues.apache.org/jira/browse/HDFS-7606
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Assignee: Byron Wong
>Priority: Minor
> Attachments: HDFS-7606.patch
>
>
> {code}
> BlockInfo[] snapshotBlocks = diff == null ? getBlocks() : 
> diff.getBlocks();
> if(snapshotBlocks != null)
>   return snapshotBlocks;
> // Blocks are not in the current snapshot
> // Find next snapshot with blocks present or return current file blocks
> snapshotBlocks = getDiffs().findLaterSnapshotBlocks(diff.getSnapshotId());
> {code}
> If diff is null and snapshotBlocks is null, NullPointerException would result 
> from the call to diff.getSnapshotId().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7606) Missing null check in INodeFile#getBlocks()

2015-01-14 Thread Chen He (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14278335#comment-14278335
 ] 

Chen He commented on HDFS-7606:
---

I am working on the unit test code to reproduce this NPE.

> Missing null check in INodeFile#getBlocks()
> ---
>
> Key: HDFS-7606
> URL: https://issues.apache.org/jira/browse/HDFS-7606
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Assignee: Byron Wong
>Priority: Minor
> Attachments: HDFS-7606.patch
>
>
> {code}
> BlockInfo[] snapshotBlocks = diff == null ? getBlocks() : 
> diff.getBlocks();
> if(snapshotBlocks != null)
>   return snapshotBlocks;
> // Blocks are not in the current snapshot
> // Find next snapshot with blocks present or return current file blocks
> snapshotBlocks = getDiffs().findLaterSnapshotBlocks(diff.getSnapshotId());
> {code}
> If diff is null and snapshotBlocks is null, NullPointerException would result 
> from the call to diff.getSnapshotId().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7606) Missing null check in INodeFile#getBlocks()

2015-01-14 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14278325#comment-14278325
 ] 

Konstantin Shvachko commented on HDFS-7606:
---

Hey Chen, if I am not mistaken the NPE in getBlocks() is hypothetical. I don't 
think Ted actually reproduced it, or that it is reproducible with the current code.

Byron,
# should we just use {{snapshot}} instead of {{diff.getSnapshotId()}}? I think 
it is the same id.
# For {{computeContentSummary()}} the null check seems to be redundant, as {{n 
> 0}} is guaranteed by the {{if}} statement.
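
For illustration, the first suggestion would change the last line of the snippet 
quoted below to something like this (a sketch only; {{snapshot}} here is the 
snapshot-id parameter of {{getBlocks()}}, assuming it carries the same value as 
{{diff.getSnapshotId()}}):

{code}
// Blocks are not in the current snapshot.
// Find next snapshot with blocks present or return current file blocks,
// using the snapshot id passed in rather than re-reading it from diff.
snapshotBlocks = getDiffs().findLaterSnapshotBlocks(snapshot);
{code}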

> Missing null check in INodeFile#getBlocks()
> ---
>
> Key: HDFS-7606
> URL: https://issues.apache.org/jira/browse/HDFS-7606
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Assignee: Byron Wong
>Priority: Minor
> Attachments: HDFS-7606.patch
>
>
> {code}
> BlockInfo[] snapshotBlocks = diff == null ? getBlocks() : 
> diff.getBlocks();
> if(snapshotBlocks != null)
>   return snapshotBlocks;
> // Blocks are not in the current snapshot
> // Find next snapshot with blocks present or return current file blocks
> snapshotBlocks = getDiffs().findLaterSnapshotBlocks(diff.getSnapshotId());
> {code}
> If diff is null and snapshotBlocks is null, NullPointerException would result 
> from the call to diff.getSnapshotId().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7057) Expose truncate API via FileSystem and shell command

2015-01-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14278317#comment-14278317
 ] 

Hadoop QA commented on HDFS-7057:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12692416/HDFS-7057.patch
  against trunk revision 5805dc0.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 5 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 5 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-hdfs-project/hadoop-hdfs-httpfs hadoop-tools/hadoop-aws 
hadoop-tools/hadoop-azure hadoop-tools/hadoop-distcp 
hadoop-tools/hadoop-gridmix hadoop-tools/hadoop-openstack:

  org.apache.hadoop.hdfs.server.balancer.TestBalancer

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9220//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9220//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-httpfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9220//console

This message is automatically generated.

> Expose truncate API via FileSystem and shell command
> 
>
> Key: HDFS-7057
> URL: https://issues.apache.org/jira/browse/HDFS-7057
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Konstantin Shvachko
>Assignee: Milan Desai
> Attachments: HDFS-7057.patch
>
>
> Add truncate operation to FileSystem and expose it to users via shell command.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7613) Block placement policy for erasure coding groups

2015-01-14 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-7613:
-
Assignee: Zhe Zhang  (was: Fei Hu)

> Block placement policy for erasure coding groups
> 
>
> Key: HDFS-7613
> URL: https://issues.apache.org/jira/browse/HDFS-7613
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>
> Blocks in an erasure coding group should be placed in different failure 
> domains -- different DataNodes at the minimum, and different racks ideally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7285) Erasure Coding Support inside HDFS

2015-01-14 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-7285:
-
Assignee: Zhe Zhang  (was: Fei Hu)

> Erasure Coding Support inside HDFS
> --
>
> Key: HDFS-7285
> URL: https://issues.apache.org/jira/browse/HDFS-7285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Weihua Jiang
>Assignee: Zhe Zhang
> Attachments: HDFSErasureCodingDesign-20141028.pdf, 
> HDFSErasureCodingDesign-20141217.pdf, fsimage-analysis-20150105.pdf
>
>
> Erasure Coding (EC) can greatly reduce storage overhead without sacrificing 
> data reliability, compared to the existing HDFS 3-replica approach. For 
> example, with a 10+4 Reed-Solomon code we can tolerate the loss of any 4 
> blocks, with a storage overhead of only 40%. This makes EC a very attractive 
> alternative for big data storage, particularly for cold data. 
> Facebook had a related open source project called HDFS-RAID. It used to be 
> one of the contrib packages in HDFS but was removed in Hadoop 2.0 for 
> maintenance reasons. Its drawbacks are: 1) it sits on top of HDFS and depends 
> on MapReduce to run encoding and decoding tasks; 2) it can only be used for 
> cold files that are not intended to be appended to anymore; 3) its pure-Java 
> EC implementation is extremely slow in practice. Because of this, it might 
> not be a good idea to simply bring HDFS-RAID back.
> We (Intel and Cloudera) are working on a design that builds EC into HDFS 
> without any external dependencies, so that it is self-contained and 
> independently maintained. The design layers the EC feature on top of storage 
> type support and keeps it compatible with existing HDFS features such as 
> caching, snapshots, encryption, and high availability. It will also support 
> different EC coding schemes, implementations, and policies for different 
> deployment scenarios. By utilizing advanced libraries (e.g. the Intel ISA-L 
> library), an implementation can greatly improve the performance of EC 
> encoding/decoding, making the EC solution even more attractive. We will 
> post the design document soon. 
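
As a quick sanity check of the overhead figures quoted above (a 
back-of-the-envelope sketch, not taken from the design document):

{code}
public class EcOverheadCheck {
  public static void main(String[] args) {
    // 10+4 Reed-Solomon: 4 parity blocks are stored for every 10 data blocks.
    double ecOverhead = 4.0 / 10.0;     // 0.40 -> 40% extra storage
    // 3-replica HDFS: 2 extra copies are stored for every data block.
    double replicaOverhead = 2.0 / 1.0; // 2.00 -> 200% extra storage
    System.out.printf("EC 10+4 overhead: %.0f%%, 3-replica overhead: %.0f%%%n",
        ecOverhead * 100, replicaOverhead * 100);
  }
}
{code}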



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7189) Add trace spans for DFSClient metadata operations

2015-01-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14278264#comment-14278264
 ] 

Hadoop QA commented on HDFS-7189:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12692406/HDFS-7189.006.patch
  against trunk revision 6464a89.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

  {color:red}-1 javac{color}.  The applied patch generated 1210 javac 
compiler warnings (more than the trunk's current 1206 warnings).

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate
  org.apache.hadoop.hdfs.qjournal.TestSecureNNWithQJM

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9219//testReport/
Javac warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9219//artifact/patchprocess/diffJavacWarnings.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9219//console

This message is automatically generated.

> Add trace spans for DFSClient metadata operations
> -
>
> Key: HDFS-7189
> URL: https://issues.apache.org/jira/browse/HDFS-7189
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 2.7.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-7189.001.patch, HDFS-7189.003.patch, 
> HDFS-7189.004.patch, HDFS-7189.005.patch, HDFS-7189.006.patch
>
>
> We should add trace spans for DFSClient metadata operations.  For example, 
> {{DFSClient#rename}} should have a trace span, etc. etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7608) hdfs dfsclient newConnectedPeer has no read or write timeout

2015-01-14 Thread zhangshilong (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14278254#comment-14278254
 ] 

zhangshilong commented on HDFS-7608:


Sorry, my network was very slow.

> hdfs dfsclient  newConnectedPeer has no read or write timeout
> -
>
> Key: HDFS-7608
> URL: https://issues.apache.org/jira/browse/HDFS-7608
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: dfsclient, fuse-dfs
>Affects Versions: 2.3.0, 2.6.0
> Environment: hdfs 2.3.0  hbase 0.98.6
>Reporter: zhangshilong
>  Labels: patch
> Fix For: 2.6.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Problem:
> HBase's CompactSplitThread may lock forever while reading DataNode blocks.
> Debugging found that the epoll wait timeout is set to 0, so the epoll wait 
> never expires.
> Cause: in HDFS 2.3.0, HBase uses DFSClient to read and write blocks.
> DFSClient creates a socket using newConnectedPeer(addr), but sets no read 
> or write timeout. 
> In 2.6.0, newConnectedPeer added a readTimeout to deal with this 
> problem, but did not add a writeTimeout. Why was the write timeout not added?
> I think NioInetPeer needs a default socket timeout, so applications will not 
> need to set timeouts themselves. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7608) hdfs dfsclient newConnectedPeer has no read or write timeout

2015-01-14 Thread zhangshilong (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14278239#comment-14278239
 ] 

zhangshilong commented on HDFS-7608:


There is no write timeout. Why not add a writeTimeout?
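
For illustration, the change being asked for amounts to something like the 
following inside {{DFSClient#newConnectedPeer}} (a sketch only; it assumes 
{{Peer}} exposes a {{setWriteTimeout}} counterpart to {{setReadTimeout}}, as 
{{NioInetPeer}} does):

{code}
// after the socket has been connected and wrapped into a Peer:
peer.setReadTimeout(dfsClientConf.socketTimeout);   // added for reads in 2.6.0
peer.setWriteTimeout(dfsClientConf.socketTimeout);  // the missing write timeout
{code}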

> hdfs dfsclient  newConnectedPeer has no read or write timeout
> -
>
> Key: HDFS-7608
> URL: https://issues.apache.org/jira/browse/HDFS-7608
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: dfsclient, fuse-dfs
>Affects Versions: 2.3.0, 2.6.0
> Environment: hdfs 2.3.0  hbase 0.98.6
>Reporter: zhangshilong
>  Labels: patch
> Fix For: 2.6.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Problem:
> HBase's CompactSplitThread may lock forever while reading DataNode blocks.
> Debugging found that the epoll wait timeout is set to 0, so the epoll wait 
> never expires.
> Cause: in HDFS 2.3.0, HBase uses DFSClient to read and write blocks.
> DFSClient creates a socket using newConnectedPeer(addr), but sets no read 
> or write timeout. 
> In 2.6.0, newConnectedPeer added a readTimeout to deal with this 
> problem, but did not add a writeTimeout. Why was the write timeout not added?
> I think NioInetPeer needs a default socket timeout, so applications will not 
> need to set timeouts themselves. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7005) DFS input streams do not timeout

2015-01-14 Thread zhangshilong (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14278234#comment-14278234
 ] 

zhangshilong commented on HDFS-7005:


There is no write timeout. Why not add a writeTimeout?

> DFS input streams do not timeout
> 
>
> Key: HDFS-7005
> URL: https://issues.apache.org/jira/browse/HDFS-7005
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0, 2.5.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Fix For: 2.6.0
>
> Attachments: HDFS-7005.patch
>
>
> Input streams lost their timeout.  The problem appears to be that 
> {{DFSClient#newConnectedPeer}} does not set the read timeout.  During a 
> temporary network interruption the server will close the socket, unbeknownst 
> to the client host, which blocks on a read forever.
> The results are dire.  Services such as the RM, JHS, NMs, oozie servers, etc 
> all need to be restarted to recover - unless you want to wait many hours for 
> the tcp stack keepalive to detect the broken socket.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-7285) Erasure Coding Support inside HDFS

2015-01-14 Thread Fei Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hu reassigned HDFS-7285:


Assignee: Fei Hu  (was: Zhe Zhang)

> Erasure Coding Support inside HDFS
> --
>
> Key: HDFS-7285
> URL: https://issues.apache.org/jira/browse/HDFS-7285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Weihua Jiang
>Assignee: Fei Hu
> Attachments: HDFSErasureCodingDesign-20141028.pdf, 
> HDFSErasureCodingDesign-20141217.pdf, fsimage-analysis-20150105.pdf
>
>
> Erasure Coding (EC) can greatly reduce storage overhead without sacrificing 
> data reliability, compared to the existing HDFS 3-replica approach. For 
> example, with a 10+4 Reed-Solomon code we can tolerate the loss of any 4 
> blocks, with a storage overhead of only 40%. This makes EC a very attractive 
> alternative for big data storage, particularly for cold data. 
> Facebook had a related open source project called HDFS-RAID. It used to be 
> one of the contrib packages in HDFS but was removed in Hadoop 2.0 for 
> maintenance reasons. Its drawbacks are: 1) it sits on top of HDFS and depends 
> on MapReduce to run encoding and decoding tasks; 2) it can only be used for 
> cold files that are not intended to be appended to anymore; 3) its pure-Java 
> EC implementation is extremely slow in practice. Because of this, it might 
> not be a good idea to simply bring HDFS-RAID back.
> We (Intel and Cloudera) are working on a design that builds EC into HDFS 
> without any external dependencies, so that it is self-contained and 
> independently maintained. The design layers the EC feature on top of storage 
> type support and keeps it compatible with existing HDFS features such as 
> caching, snapshots, encryption, and high availability. It will also support 
> different EC coding schemes, implementations, and policies for different 
> deployment scenarios. By utilizing advanced libraries (e.g. the Intel ISA-L 
> library), an implementation can greatly improve the performance of EC 
> encoding/decoding, making the EC solution even more attractive. We will 
> post the design document soon. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7606) Missing null check in INodeFile#getBlocks()

2015-01-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14278221#comment-14278221
 ] 

Hadoop QA commented on HDFS-7606:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12692408/HDFS-7606.patch
  against trunk revision 6464a89.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestDatanodeReport

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9218//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9218//console

This message is automatically generated.

> Missing null check in INodeFile#getBlocks()
> ---
>
> Key: HDFS-7606
> URL: https://issues.apache.org/jira/browse/HDFS-7606
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Assignee: Byron Wong
>Priority: Minor
> Attachments: HDFS-7606.patch
>
>
> {code}
> BlockInfo[] snapshotBlocks = diff == null ? getBlocks() : 
> diff.getBlocks();
> if(snapshotBlocks != null)
>   return snapshotBlocks;
> // Blocks are not in the current snapshot
> // Find next snapshot with blocks present or return current file blocks
> snapshotBlocks = getDiffs().findLaterSnapshotBlocks(diff.getSnapshotId());
> {code}
> If diff is null and snapshotBlocks is null, NullPointerException would result 
> from the call to diff.getSnapshotId().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7496) Fix FsVolume removal race conditions on the DataNode

2015-01-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14278145#comment-14278145
 ] 

Hadoop QA commented on HDFS-7496:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12692374/HDFS-7496.005.patch
  against trunk revision 6464a89.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The test build failed in 
hadoop-hdfs-project/hadoop-hdfs 

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9217//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9217//console

This message is automatically generated.

> Fix FsVolume removal race conditions on the DataNode 
> -
>
> Key: HDFS-7496
> URL: https://issues.apache.org/jira/browse/HDFS-7496
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Colin Patrick McCabe
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-7496.000.patch, HDFS-7496.001.patch, 
> HDFS-7496.002.patch, HDFS-7496.003.patch, HDFS-7496.003.patch, 
> HDFS-7496.004.patch, HDFS-7496.005.patch
>
>
> We discussed a few FsVolume removal race conditions on the DataNode in 
> HDFS-7489.  We should figure out a way to make removing an FsVolume safe.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3689) Add support for variable length block

2015-01-14 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14278136#comment-14278136
 ] 

Tsz Wo Nicholas Sze commented on HDFS-3689:
---

Thanks for working on this!  Some comments so far:
- Instead of adding append2, how about adding another append method with a 
boolean appendToNewBlock parameter?  The original append could just call it 
with appendToNewBlock=false.  We also don't need 
Append2RequestProto/Append2ResponseProto.  Just add an optional field to 
AppendRequestProto/AppendResponseProto.
- We could also add appendToNewBlock to DFSOutputStream constructor to reduce 
code duplication.
- Typo in the code below?  Should it be  flushBuffer(!endBlock, true)?
{code}
//DFSOutputStream.flushOrSync
-// flush checksum buffer, but keep checksum buffer intact
-int numKept = flushBuffer(true, true);
+// flush checksum buffer, but keep checksum buffer intact if we do not
+// need to end the current block
+int numKept = flushBuffer(true, !endBlock);
{code}
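
For illustration, the overload-with-a-flag shape suggested in the first point 
(a toy sketch, not DFSClient's real method signatures):

{code}
import java.io.IOException;
import java.io.OutputStream;

abstract class AppendApiSketch {
  // The original append keeps its contract by delegating with the flag off,
  // so no separate append2() and no Append2RequestProto/Append2ResponseProto
  // are needed -- just an optional flag on the existing request.
  public final OutputStream append(String src) throws IOException {
    return append(src, /* appendToNewBlock = */ false);
  }

  // A single implementation serves both behaviours: when appendToNewBlock is
  // true, the append starts a new block instead of filling the last,
  // possibly partial, block of the file.
  public abstract OutputStream append(String src, boolean appendToNewBlock)
      throws IOException;
}
{code}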

> Add support for variable length block
> -
>
> Key: HDFS-3689
> URL: https://issues.apache.org/jira/browse/HDFS-3689
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs-client, namenode
>Affects Versions: 3.0.0
>Reporter: Suresh Srinivas
>Assignee: Jing Zhao
> Attachments: HDFS-3689.000.patch, HDFS-3689.001.patch, 
> HDFS-3689.002.patch, HDFS-3689.003.patch, HDFS-3689.003.patch, 
> HDFS-3689.004.patch, HDFS-3689.005.patch
>
>
> Currently HDFS supports fixed length blocks. Supporting variable length block 
> will allow new use cases and features to be built on top of HDFS. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7057) Expose truncate API via FileSystem and shell command

2015-01-14 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14278134#comment-14278134
 ] 

Allen Wittenauer commented on HDFS-7057:


It seems like the API has a huge problem if all of these other file systems 
require overrides to throw an unsupported exception.  It also makes me very 
concerned that we've broken 3rd party file systems.

> Expose truncate API via FileSystem and shell command
> 
>
> Key: HDFS-7057
> URL: https://issues.apache.org/jira/browse/HDFS-7057
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Konstantin Shvachko
>Assignee: Milan Desai
> Attachments: HDFS-7057.patch
>
>
> Add truncate operation to FileSystem and expose it to users via shell command.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7496) Fix FsVolume removal race conditions on the DataNode

2015-01-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14278120#comment-14278120
 ] 

Colin Patrick McCabe commented on HDFS-7496:


+1 pending jenkins

> Fix FsVolume removal race conditions on the DataNode 
> -
>
> Key: HDFS-7496
> URL: https://issues.apache.org/jira/browse/HDFS-7496
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Colin Patrick McCabe
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-7496.000.patch, HDFS-7496.001.patch, 
> HDFS-7496.002.patch, HDFS-7496.003.patch, HDFS-7496.003.patch, 
> HDFS-7496.004.patch, HDFS-7496.005.patch
>
>
> We discussed a few FsVolume removal race conditions on the DataNode in 
> HDFS-7489.  We should figure out a way to make removing an FsVolume safe.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7411) Refactor and improve decommissioning logic into DecommissionManager

2015-01-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14278114#comment-14278114
 ] 

Colin Patrick McCabe commented on HDFS-7411:


You can compute blocks per interval from blocks per second and interval length, 
both of which are specified.  And in any case, we may eventually need to start 
releasing the lock occasionally during "decom intervals" as the number of 
blocks continues to double year-over-year.  The fact that we hold the lock 
throughout the whole interval is an implementation detail.  Anyway, I don't 
feel that strongly about this so if you want to keep it as a per-interval 
config, that's probably ok.
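
For illustration, the conversion being described (hypothetical names, not the 
patch's actual configuration keys):

{code}
// The per-interval budget is just the product of the two configured values,
// e.g. 500 blocks/s * 30 s per interval = 15,000 blocks per interval.
static long blocksPerInterval(long blocksPerSecond, long intervalSeconds) {
  return blocksPerSecond * intervalSeconds;
}
{code}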

> Refactor and improve decommissioning logic into DecommissionManager
> ---
>
> Key: HDFS-7411
> URL: https://issues.apache.org/jira/browse/HDFS-7411
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.5.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hdfs-7411.001.patch, hdfs-7411.002.patch, 
> hdfs-7411.003.patch, hdfs-7411.004.patch, hdfs-7411.005.patch, 
> hdfs-7411.006.patch
>
>
> Would be nice to split out decommission logic from DatanodeManager to 
> DecommissionManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7609) startup used too much time to load edits

2015-01-14 Thread Carrey Zhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carrey Zhan updated HDFS-7609:
--
Attachment: recovery_do_not_use_retrycache.patch

Attaching a small patch for 2.2.0 that simply disables the retry cache during 
the recovery process.

> startup used too much time to load edits
> 
>
> Key: HDFS-7609
> URL: https://issues.apache.org/jira/browse/HDFS-7609
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.2.0
>Reporter: Carrey Zhan
> Attachments: recovery_do_not_use_retrycache.patch
>
>
> One day my namenode crashed because two journal nodes timed out at the same 
> time under very high load, leaving behind about 100 million transactions in 
> the edits log. (I still have no idea why they were not rolled into the fsimage.)
> I tried to restart the namenode, but it showed that almost 20 hours would be 
> needed to finish, and it was loading the edits most of the time. I also 
> tried to restart the namenode in recovery mode; the loading speed was no different.
> I looked into the stack trace and judged that this is caused by the retry cache. 
> So I set dfs.namenode.enable.retrycache to false, and the restart process 
> finished in half an hour.
> I think the retry cache is useless during startup, at least during the recovery 
> process.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7057) Expose truncate API via FileSystem and shell command

2015-01-14 Thread Milan Desai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Milan Desai updated HDFS-7057:
--
Attachment: HDFS-7057.patch

Attaching a patch that exposes truncate in FileSystem and adds a shell command. 
The following FileSystem implementations support truncate: 
DistributedFileSystem, RawLocalFileSystem, and FilterFileSystem. 
HttpFSFileSystem and WebHdfsFileSystem have TODO comments for their 
implementations.

The shell command usage for truncate is "truncate [-w] <length> <path ...>", 
where the -w option requests that the command wait for block recovery, if 
necessary. I tested the shell command on a standalone cluster.
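
For illustration, a programmatic use of the proposed API (a sketch, assuming the 
patch adds {{FileSystem#truncate(Path, long)}} returning whether the truncate 
completed immediately, i.e. without requiring block recovery):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TruncateExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path file = new Path("/tmp/data.log");  // hypothetical path
    long newLength = 1024L;                 // keep only the first 1 KB
    // true: truncated in place immediately; false: block recovery is still in
    // progress (the shell's -w option waits for it to finish).
    boolean done = fs.truncate(file, newLength);
    System.out.println("truncated immediately: " + done);
  }
}
{code}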

> Expose truncate API via FileSystem and shell command
> 
>
> Key: HDFS-7057
> URL: https://issues.apache.org/jira/browse/HDFS-7057
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Konstantin Shvachko
>Assignee: Milan Desai
> Attachments: HDFS-7057.patch
>
>
> Add truncate operation to FileSystem and expose it to users via shell command.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7057) Expose truncate API via FileSystem and shell command

2015-01-14 Thread Milan Desai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Milan Desai updated HDFS-7057:
--
Status: Patch Available  (was: In Progress)

> Expose truncate API via FileSystem and shell command
> 
>
> Key: HDFS-7057
> URL: https://issues.apache.org/jira/browse/HDFS-7057
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Konstantin Shvachko
>Assignee: Milan Desai
> Attachments: HDFS-7057.patch
>
>
> Add truncate operation to FileSystem and expose it to users via shell command.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7606) Missing null check in INodeFile#getBlocks()

2015-01-14 Thread Chen He (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14278090#comment-14278090
 ] 

Chen He commented on HDFS-7606:
---

Hi [~Byron Wong], thank you for your work. It would be great if you could add a 
unit test that reproduces the NPE.

> Missing null check in INodeFile#getBlocks()
> ---
>
> Key: HDFS-7606
> URL: https://issues.apache.org/jira/browse/HDFS-7606
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Assignee: Byron Wong
>Priority: Minor
> Attachments: HDFS-7606.patch
>
>
> {code}
> BlockInfo[] snapshotBlocks = diff == null ? getBlocks() : 
> diff.getBlocks();
> if(snapshotBlocks != null)
>   return snapshotBlocks;
> // Blocks are not in the current snapshot
> // Find next snapshot with blocks present or return current file blocks
> snapshotBlocks = getDiffs().findLaterSnapshotBlocks(diff.getSnapshotId());
> {code}
> If diff is null and snapshotBlocks is null, NullPointerException would result 
> from the call to diff.getSnapshotId().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7609) startup used too much time to load edits

2015-01-14 Thread Carrey Zhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14278085#comment-14278085
 ] 

Carrey Zhan commented on HDFS-7609:
---

Yes, you are right. I understand the purpose of the retry cache for restart and 
failover. But what about recovery? How about disabling it during recovery, so 
that the namenode can return to work quickly?
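
For reference, the workaround described in the issue boils down to flipping the 
retry-cache switch before restarting (a sketch; the programmatic equivalent of 
setting the property in hdfs-site.xml):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class DisableRetryCache {
  public static void main(String[] args) {
    // Equivalent of dfs.namenode.enable.retrycache=false in hdfs-site.xml,
    // the setting the reporter used to cut edit-log replay to half an hour.
    Configuration conf = new HdfsConfiguration();
    conf.setBoolean("dfs.namenode.enable.retrycache", false);
    System.out.println("retry cache enabled: "
        + conf.getBoolean("dfs.namenode.enable.retrycache", true));
  }
}
{code}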

> startup used too much time to load edits
> 
>
> Key: HDFS-7609
> URL: https://issues.apache.org/jira/browse/HDFS-7609
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.2.0
>Reporter: Carrey Zhan
>
> One day my namenode crashed because two journal nodes timed out at the same 
> time under very high load, leaving behind about 100 million transactions in 
> the edits log. (I still have no idea why they were not rolled into the fsimage.)
> I tried to restart the namenode, but it showed that almost 20 hours would be 
> needed to finish, and it was loading the edits most of the time. I also 
> tried to restart the namenode in recovery mode; the loading speed was no different.
> I looked into the stack trace and judged that this is caused by the retry cache. 
> So I set dfs.namenode.enable.retrycache to false, and the restart process 
> finished in half an hour.
> I think the retry cache is useless during startup, at least during the recovery 
> process.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HDFS-7057) Expose truncate API via FileSystem and shell command

2015-01-14 Thread Milan Desai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-7057 started by Milan Desai.
-
> Expose truncate API via FileSystem and shell command
> 
>
> Key: HDFS-7057
> URL: https://issues.apache.org/jira/browse/HDFS-7057
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Konstantin Shvachko
>Assignee: Milan Desai
>
> Add truncate operation to FileSystem and expose it to users via shell command.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7606) Missing null check in INodeFile#getBlocks()

2015-01-14 Thread Byron Wong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Byron Wong updated HDFS-7606:
-
Status: Patch Available  (was: Open)

> Missing null check in INodeFile#getBlocks()
> ---
>
> Key: HDFS-7606
> URL: https://issues.apache.org/jira/browse/HDFS-7606
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Assignee: Byron Wong
>Priority: Minor
> Attachments: HDFS-7606.patch
>
>
> {code}
> BlockInfo[] snapshotBlocks = diff == null ? getBlocks() : 
> diff.getBlocks();
> if(snapshotBlocks != null)
>   return snapshotBlocks;
> // Blocks are not in the current snapshot
> // Find next snapshot with blocks present or return current file blocks
> snapshotBlocks = getDiffs().findLaterSnapshotBlocks(diff.getSnapshotId());
> {code}
> If diff is null and snapshotBlocks is null, NullPointerException would result 
> from the call to diff.getSnapshotId().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7606) Missing null check in INodeFile#getBlocks()

2015-01-14 Thread Byron Wong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Byron Wong updated HDFS-7606:
-
Attachment: HDFS-7606.patch

Attached patch.
Addresses the two spots that [~tedyu] found.

> Missing null check in INodeFile#getBlocks()
> ---
>
> Key: HDFS-7606
> URL: https://issues.apache.org/jira/browse/HDFS-7606
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Assignee: Byron Wong
>Priority: Minor
> Attachments: HDFS-7606.patch
>
>
> {code}
> BlockInfo[] snapshotBlocks = diff == null ? getBlocks() : 
> diff.getBlocks();
> if(snapshotBlocks != null)
>   return snapshotBlocks;
> // Blocks are not in the current snapshot
> // Find next snapshot with blocks present or return current file blocks
> snapshotBlocks = getDiffs().findLaterSnapshotBlocks(diff.getSnapshotId());
> {code}
> If diff is null and snapshotBlocks is null, NullPointerException would result 
> from the call to diff.getSnapshotId().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7189) Add trace spans for DFSClient metadata operations

2015-01-14 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-7189:
---
Attachment: HDFS-7189.006.patch

Fix findbugs warnings

> Add trace spans for DFSClient metadata operations
> -
>
> Key: HDFS-7189
> URL: https://issues.apache.org/jira/browse/HDFS-7189
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 2.7.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-7189.001.patch, HDFS-7189.003.patch, 
> HDFS-7189.004.patch, HDFS-7189.005.patch, HDFS-7189.006.patch
>
>
> We should add trace spans for DFSClient metadata operations.  For example, 
> {{DFSClient#rename}} should have a trace span, etc. etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3689) Add support for variable length block

2015-01-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14278050#comment-14278050
 ] 

Hadoop QA commented on HDFS-3689:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12692346/HDFS-3689.005.patch
  against trunk revision 6464a89.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 8 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-hdfs-project/hadoop-hdfs-nfs:

  
org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer

  The test build failed in 
hadoop-hdfs-project/hadoop-hdfs-nfs 

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9214//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9214//console

This message is automatically generated.

> Add support for variable length block
> -
>
> Key: HDFS-3689
> URL: https://issues.apache.org/jira/browse/HDFS-3689
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs-client, namenode
>Affects Versions: 3.0.0
>Reporter: Suresh Srinivas
>Assignee: Jing Zhao
> Attachments: HDFS-3689.000.patch, HDFS-3689.001.patch, 
> HDFS-3689.002.patch, HDFS-3689.003.patch, HDFS-3689.003.patch, 
> HDFS-3689.004.patch, HDFS-3689.005.patch
>
>
> Currently HDFS supports fixed length blocks. Supporting variable length block 
> will allow new use cases and features to be built on top of HDFS. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7575) NameNode not handling heartbeats properly after HDFS-2832

2015-01-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14278041#comment-14278041
 ] 

Hadoop QA commented on HDFS-7575:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12692337/HDFS-7575.05.patch
  against trunk revision 7fe0f25.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 8 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-hdfs-project/hadoop-hdfs-nfs:

  org.apache.hadoop.ha.TestZKFailoverControllerStress
  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate
  
org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer

  The following test timeouts occurred in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-hdfs-project/hadoop-hdfs-nfs:

org.apache.hadoop.ha.TestZKFailoverControllerStress
org.apache.hadoop.hdfs.server.mover.TestStorageMover

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9213//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9213//console

This message is automatically generated.

> NameNode not handling heartbeats properly after HDFS-2832
> -
>
> Key: HDFS-7575
> URL: https://issues.apache.org/jira/browse/HDFS-7575
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.4.0, 2.5.0, 2.6.0
>Reporter: Lars Francke
>Assignee: Arpit Agarwal
>Priority: Critical
> Attachments: HDFS-7575.01.patch, HDFS-7575.02.patch, 
> HDFS-7575.03.binary.patch, HDFS-7575.03.patch, HDFS-7575.04.binary.patch, 
> HDFS-7575.04.patch, HDFS-7575.05.binary.patch, HDFS-7575.05.patch, 
> testUpgrade22via24GeneratesStorageIDs.tgz, 
> testUpgradeFrom22GeneratesStorageIDs.tgz, 
> testUpgradeFrom24PreservesStorageId.tgz
>
>
> Before HDFS-2832 each DataNode would have a unique storageId which included 
> its IP address. Since HDFS-2832 the DataNodes have a unique storageId per 
> storage directory which is just a random UUID.
> They send reports per storage directory in their heartbeats. This heartbeat 
> is processed on the NameNode in the 
> {{DatanodeDescriptor#updateHeartbeatState}} method. Pre HDFS-2832 this would 
> just store the information per Datanode. After the patch though each DataNode 
> can have multiple different storages so it's stored in a map keyed by the 
> storage Id.
> This works fine for all clusters that have been installed post HDFS-2832 as 
> they get a UUID for their storage Id. So a DN with 8 drives has a map with 8 
> different keys. On each Heartbeat the Map is searched and updated 
> ({{DatanodeStorageInfo storage = storageMap.get(s.getStorageID());}}):
> {code:title=DatanodeStorageInfo}
>   void updateState(StorageReport r) {
> capacity = r.getCapacity();
> dfsUsed = r.getDfsUsed();
> remaining = r.getRemaining();
> blockPoolUsed = r.getBlockPoolUsed();
>   }
> {code}
> On clusters that were upgraded from a pre HDFS-2832 version though the 
> storage Id has not been rewritten (at least not on the four clusters I 
> checked) so each directory will have the exact same storageId. That means 
> there'll be only a single entry in the {{storageMap}} and it'll be 
> overwritten by a random {{StorageReport}} from the DataNode. This can be seen 
> in the {{updateState}} method above. This just assigns the capacity from the 
> received report, instead it should probably sum it up per received heartbeat.
> The Balancer seems to be one of the only things that actually uses this 
> information so it now considers the utilization of a random drive per 
> DataNode for balancing purposes.
> Things get even worse when a drive has been added or replaced as this will 
> now get a new storage Id so there'll be two entries in the storageMap. As new 
> drives are usually empty it skews the balancer's decision in a way that this 
> node will never be considered over-utilized.
> Another problem is that 

[jira] [Commented] (HDFS-7613) Block placement policy for erasure coding groups

2015-01-14 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14278016#comment-14278016
 ] 

Zhe Zhang commented on HDFS-7613:
-

[~Fei Hu] Thank you for being interested in working on this JIRA. I think the 
new placement policy should extend {{BlockPlacementPolicy}} just like 
{{BlockPlacementPolicyWithNodeGroup}}.
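
For illustration only, a self-contained sketch of the rack-then-node spreading this JIRA asks for; it deliberately does not extend the real {{BlockPlacementPolicy}} class (its method signatures are omitted here), and the {{Node}} model and {{chooseTargets}} helper are hypothetical:

{code:title=ErasureCodingPlacementSketch.java}
// Conceptual sketch only: spread the blocks of one erasure coding group
// across distinct racks first, then across distinct nodes. The Node type
// and chooseTargets() helper are illustrative, not HDFS classes.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ErasureCodingPlacementSketch {
  static final class Node {
    final String name;
    final String rack;
    Node(String name, String rack) { this.name = name; this.rack = rack; }
    @Override public String toString() { return name + "@" + rack; }
  }

  /** Pick up to 'needed' distinct nodes, preferring unused racks. */
  static List<Node> chooseTargets(List<Node> candidates, int needed) {
    List<Node> chosen = new ArrayList<>();
    Set<String> usedRacks = new HashSet<>();
    // First pass: at most one node per rack.
    for (Node n : candidates) {
      if (chosen.size() == needed) break;
      if (usedRacks.add(n.rack)) chosen.add(n);
    }
    // Second pass: fill remaining slots with distinct nodes; racks may repeat.
    for (Node n : candidates) {
      if (chosen.size() == needed) break;
      if (!chosen.contains(n)) chosen.add(n);
    }
    return chosen;
  }

  public static void main(String[] args) {
    List<Node> cluster = Arrays.asList(new Node("dn1", "r1"),
        new Node("dn2", "r1"), new Node("dn3", "r2"), new Node("dn4", "r3"));
    System.out.println(chooseTargets(cluster, 3)); // dn1@r1, dn3@r2, dn4@r3
  }
}
{code}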

> Block placement policy for erasure coding groups
> 
>
> Key: HDFS-7613
> URL: https://issues.apache.org/jira/browse/HDFS-7613
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: Fei Hu
>
> Blocks in an erasure coding group should be placed in different failure 
> domains -- different DataNodes at the minimum, and different racks ideally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-7613) Block placement policy for erasure coding groups

2015-01-14 Thread Fei Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hu reassigned HDFS-7613:


Assignee: Fei Hu  (was: Zhe Zhang)

> Block placement policy for erasure coding groups
> 
>
> Key: HDFS-7613
> URL: https://issues.apache.org/jira/browse/HDFS-7613
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: Fei Hu
>
> Blocks in an erasure coding group should be placed in different failure 
> domains -- different DataNodes at the minimum, and different racks ideally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2015-01-14 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277967#comment-14277967
 ] 

Tsz Wo Nicholas Sze commented on HDFS-3443:
---

> ... All remaining requests will anyway be rejected since the initial 
> state will be STANDBY. ...

Some methods such as saveNamespace() and refreshNodes() are 
OperationCategory.UNCHECKED operations, so the standby NN should serve them.

Some other methods such as blockReceivedAndDeleted(), 
refreshUserToGroupsMappings() and addSpanReceiver() do not check 
OperationCategory.  Some of these are probably bugs.
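
To make the point concrete, here is a small standalone sketch of the category gate; it is not the real {{NameNodeRpcServer}} code, and the class, enums and exception below are simplified stand-ins that only mirror the HDFS names:

{code:title=OperationCategoryGateSketch.java}
// Minimal, self-contained sketch of the category check discussed above.
// It is NOT the real NameNodeRpcServer; names mirror HDFS concepts only.
public class OperationCategoryGateSketch {
  enum HAServiceState { ACTIVE, STANDBY }
  enum OperationCategory { READ, WRITE, CHECKPOINT, JOURNAL, UNCHECKED }

  private final HAServiceState state;

  OperationCategoryGateSketch(HAServiceState state) {
    this.state = state;
  }

  /** Reject READ/WRITE on a standby; UNCHECKED is always served. */
  void checkOperation(OperationCategory op) {
    if (op == OperationCategory.UNCHECKED) {
      return; // e.g. saveNamespace(), refreshNodes()
    }
    if (state == HAServiceState.STANDBY) {
      throw new IllegalStateException("Operation category " + op
          + " is not supported in state " + state);
    }
  }

  public static void main(String[] args) {
    OperationCategoryGateSketch standby =
        new OperationCategoryGateSketch(HAServiceState.STANDBY);
    standby.checkOperation(OperationCategory.UNCHECKED); // served
    try {
      standby.checkOperation(OperationCategory.WRITE);   // rejected
    } catch (IllegalStateException e) {
      System.out.println(e.getMessage());
    }
  }
}
{code}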

> Unable to catch up edits during standby to active switch due to NPE
> ---
>
> Key: HDFS-3443
> URL: https://issues.apache.org/jira/browse/HDFS-3443
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: auto-failover
>Reporter: suja s
>Assignee: amith
> Attachments: HDFS-3443-003.patch, HDFS-3443_1.patch, HDFS-3443_1.patch
>
>
> Start NN
> Let NN standby services be started.
> Before the editLogTailer is initialised start ZKFC and allow the 
> activeservices start to proceed further.
> Here editLogTailer.catchupDuringFailover() will throw NPE.
> void startActiveServices() throws IOException {
> LOG.info("Starting services required for active state");
> writeLock();
> try {
>   FSEditLog editLog = dir.fsImage.getEditLog();
>   
>   if (!editLog.isOpenForWrite()) {
> // During startup, we're already open for write during initialization.
> editLog.initJournalsForWrite();
> // May need to recover
> editLog.recoverUnclosedStreams();
> 
> LOG.info("Catching up to latest edits from old active before " +
> "taking over writer role in edits logs.");
> editLogTailer.catchupDuringFailover();
> {noformat}
> 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
> Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
> XX.XX.XX.55:58003: output error
> 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
> from XX.XX.XX.55:58004: error: java.lang.NullPointerException
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
>   at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
>   at 
> org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
> 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 9 on 8020 caught an exception
> java.nio.channels.ClosedChannelException
>   at 
> sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>   at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
>   at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
>   at 
> org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
>   at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:994)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1738)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7614) Implement COMPLETE state of erasure coding block groups

2015-01-14 Thread Zhe Zhang (JIRA)
Zhe Zhang created HDFS-7614:
---

 Summary: Implement COMPLETE state of erasure coding block groups
 Key: HDFS-7614
 URL: https://issues.apache.org/jira/browse/HDFS-7614
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang


HDFS-7339 implements 2 states of an under-construction block group: 
{{UNDER_CONSTRUCTION}} and {{COMMITTED}}. The {{COMPLETE}} state requires 
DataNodes to report stored replicas, so it will be implemented separately in 
this JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7587) Edit log corruption can happen if append fails with a quota violation

2015-01-14 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277943#comment-14277943
 ] 

Tsz Wo Nicholas Sze commented on HDFS-7587:
---

{code}
+// MUST attempt quota update before changing in-memory states
+updateQuotaForAppend(iip, file);
...
+// may fail if block token creation fails, but we're still in a
+// consistent state if the edit is logged first
+return blockManager.convertLastBlockToUnderConstruction(file, 0);
{code}
I think we should use FSDirectory.verifyQuota(..) (instead of updating the 
quota) at the beginning and then update the quota at the end.  Otherwise, the 
quota counts will be incorrect if an exception is thrown later on.
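
For illustration, a self-contained sketch of the verify-first/update-last ordering suggested above; the quota fields, the exception type and the {{appendToFile}} helper are simplified stand-ins, not the actual {{FSDirectory}} API:

{code:title=VerifyThenUpdateQuotaSketch.java}
// Self-contained sketch of "verify first, update last": nothing is mutated
// before the check, and the counts are applied only after the fallible work
// succeeds, so an exception in the middle leaves the counts consistent.
public class VerifyThenUpdateQuotaSketch {
  static class QuotaExceededException extends Exception {
    QuotaExceededException(String msg) { super(msg); }
  }

  private final long spaceQuota;
  private long spaceConsumed;

  VerifyThenUpdateQuotaSketch(long quota, long consumed) {
    this.spaceQuota = quota;
    this.spaceConsumed = consumed;
  }

  /** Step 1: check only, no in-memory state is changed. */
  void verifyQuota(long delta) throws QuotaExceededException {
    if (spaceConsumed + delta > spaceQuota) {
      throw new QuotaExceededException("quota=" + spaceQuota
          + " consumed=" + spaceConsumed + " delta=" + delta);
    }
  }

  /** Step 3: apply the delta only after the fallible work succeeded. */
  void updateCount(long delta) {
    spaceConsumed += delta;
  }

  void appendToFile(long estimatedDelta) throws QuotaExceededException {
    verifyQuota(estimatedDelta);           // may throw, nothing mutated yet
    convertLastBlockToUnderConstruction(); // fallible work in the middle
    updateCount(estimatedDelta);           // counts stay consistent on failure
  }

  private void convertLastBlockToUnderConstruction() {
    // placeholder for the state change that may itself fail
  }

  public static void main(String[] args) throws Exception {
    new VerifyThenUpdateQuotaSketch(100, 90).appendToFile(5);
  }
}
{code}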

> Edit log corruption can happen if append fails with a quota violation
> -
>
> Key: HDFS-7587
> URL: https://issues.apache.org/jira/browse/HDFS-7587
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Kihwal Lee
>Assignee: Daryn Sharp
>Priority: Blocker
> Attachments: HDFS-7587.patch
>
>
> We have seen a standby namenode crashing due to edit log corruption. It was 
> complaining that {{OP_CLOSE}} cannot be applied because the file is not 
> under-construction.
> When a client was trying to append to the file, the remaining space quota was 
> very small. This caused a failure in {{prepareFileForWrite()}}, but after the 
> inode was already converted for writing and a lease added. Since these were 
> not undone when the quota violation was detected, the file was left in 
> under-construction with an active lease without edit logging {{OP_ADD}}.
> A subsequent {{append()}} eventually caused a lease recovery after the soft 
> limit period. This resulted in {{commitBlockSynchronization()}}, which closed 
> the file with {{OP_CLOSE}} being logged.  Since there was no corresponding 
> {{OP_ADD}}, edit replaying could not apply this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7606) Missing null check in INodeFile#getBlocks()

2015-01-14 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-7606:
--
 Target Version/s: 3.0.0
Affects Version/s: 3.0.0
 Assignee: Byron Wong

Ted, thanks for reporting. Sounds like a potential problem indeed.

> Missing null check in INodeFile#getBlocks()
> ---
>
> Key: HDFS-7606
> URL: https://issues.apache.org/jira/browse/HDFS-7606
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Assignee: Byron Wong
>Priority: Minor
>
> {code}
> BlockInfo[] snapshotBlocks = diff == null ? getBlocks() : 
> diff.getBlocks();
> if(snapshotBlocks != null)
>   return snapshotBlocks;
> // Blocks are not in the current snapshot
> // Find next snapshot with blocks present or return current file blocks
> snapshotBlocks = getDiffs().findLaterSnapshotBlocks(diff.getSnapshotId());
> {code}
> If diff is null and snapshotBlocks is null, NullPointerException would result 
> from the call to diff.getSnapshotId().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7411) Refactor and improve decommissioning logic into DecommissionManager

2015-01-14 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277909#comment-14277909
 ] 

Andrew Wang commented on HDFS-7411:
---

bq. Why is blocks.per.interval "more powerful" than blocks per minute?

I don't think the end goal is to achieve a certain rate per minute. Rather, 
it's how long the pause is when the DecomManager wakes up, and how often it 
wakes up. This tunes latency vs. throughput; short pause is better latency, 
long run is better throughput. This can't be expressed by just 
blocks.per.minute, since a high blocks.per.minute might mean to wake up very 
often to do a little work, or very occasionally to do a lot of work.

It also fixes the timescale to "per minute". This, naively, implies that it'd 
be okay to wake up once a minute to do a minute's worth of work. But maybe the 
user wants to see something happen within a few seconds, rather than a minute. 
Without being able to tune the interval, this flexibility is gone.

The event triggered idea is also something I considered, but even then we'd 
still need to do the full scan at the start of decom, which means some kind of 
limiting scheme.
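
As a rough illustration of the two knobs above, here is a minimal monitor loop where the wake-up interval and the per-wake-up work limit are tuned independently; the field names are placeholders, not the actual configuration key names:

{code:title=DecomThrottleSketch.java}
// Illustrative throttling loop for the latency-vs-throughput point above.
// "intervalSecs" and "blocksPerInterval" are placeholders for the settings
// being discussed, not the actual HDFS configuration keys.
import java.util.ArrayDeque;
import java.util.Queue;

public class DecomThrottleSketch implements Runnable {
  private final int intervalSecs;       // how often the monitor wakes up
  private final int blocksPerInterval;  // how much work it does per wake-up
  private final Queue<Long> pendingBlocks = new ArrayDeque<>();

  DecomThrottleSketch(int intervalSecs, int blocksPerInterval) {
    this.intervalSecs = intervalSecs;
    this.blocksPerInterval = blocksPerInterval;
  }

  synchronized void addPending(long blockId) {
    pendingBlocks.add(blockId);
  }

  @Override
  public void run() {
    while (!Thread.currentThread().isInterrupted()) {
      int processed = 0;
      // The same blocks/minute rate can mean "wake often, do little"
      // (low latency) or "wake rarely, do a lot" (high throughput);
      // the two knobs below make that choice explicit.
      synchronized (this) {
        while (processed < blocksPerInterval && !pendingBlocks.isEmpty()) {
          pendingBlocks.poll(); // check/replicate one block here
          processed++;
        }
      }
      try {
        Thread.sleep(intervalSecs * 1000L);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    }
  }
}
{code}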

> Refactor and improve decommissioning logic into DecommissionManager
> ---
>
> Key: HDFS-7411
> URL: https://issues.apache.org/jira/browse/HDFS-7411
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.5.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hdfs-7411.001.patch, hdfs-7411.002.patch, 
> hdfs-7411.003.patch, hdfs-7411.004.patch, hdfs-7411.005.patch, 
> hdfs-7411.006.patch
>
>
> Would be nice to split out decommission logic from DatanodeManager to 
> DecommissionManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7610) Should use StorageDirectory.getCurrentDIr() to construct FsVolumeImpl

2015-01-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277903#comment-14277903
 ] 

Hadoop QA commented on HDFS-7610:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12692341/HDFS-7610.000.patch
  against trunk revision 6464a89.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.qjournal.TestSecureNNWithQJM

  The following test timeouts occurred in 
hadoop-hdfs-project/hadoop-hdfs:

org.apache.hadoop.hdfs.TestFileCreation

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9215//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9215//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9215//console

This message is automatically generated.

> Should use StorageDirectory.getCurrentDIr() to construct FsVolumeImpl
> -
>
> Key: HDFS-7610
> URL: https://issues.apache.org/jira/browse/HDFS-7610
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.6.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-7610.000.patch
>
>
> In the hot swap feature, {{FsDatasetImpl#addVolume}} uses the base volume dir 
> (e.g. "{{/foo/data0}}") instead of the volume's current dir 
> (e.g. "{{/foo/data0/current}}") to construct {{FsVolumeImpl}}. As a result, the 
> DataNode cannot remove this newly added volume, because its 
> {{FsVolumeImpl#getBasePath}} returns "{{/foo}}".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7575) NameNode not handling heartbeats properly after HDFS-2832

2015-01-14 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277895#comment-14277895
 ] 

Daryn Sharp commented on HDFS-7575:
---

bq. I think it's frustrating for storage IDs to change without warning just 
because HDFS was restarted.  It will make diagnosing problems by reading log 
files harder because storageIDs might morph at any time. It also sets a bad 
precedent of not allowing downgrade and modifying VERSION files "on the fly" 
during startup.

I'm confused.  StorageIDs aren't going to repeatedly morph - unless there's a 
UUID collision that you argue can't happen.  The important part is you always 
want unique storage ids.  It's an internal default of hdfs that is not up to 
the user to assign.  Succinctly stated, what I'd like is for storage ids to be 
generated if missing, re-generated if incorrectly formatted, or if there are 
dups.  I think the latest patch actually does the first two, just not the dup 
check.

bq.  I'm surprised to hear you say that rollback should not be an option. It 
seems like the conservative thing to do here is to allow the user to restore to 
the VERSION file. Obviously we believe there will be no problems. But we always 
believe that, or else we wouldn't have made the change. Sometimes there are 
problems.

I didn't say that.  Rollback is for reverting an incompatible change.  Changing 
the storage id is not incompatible.  Unique ids are the default for newly 
formatted nodes.  If you think unique storage ids may have subtle bugs 
(different than shared storage ids), then new clusters or newly formatted nodes 
are buggy.
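
A minimal sketch of the generate-if-missing / re-generate-if-malformed-or-duplicate rule described above, written against plain Java collections; the {{DS-}} prefix simply mimics the usual storage id format, and the fixup helper is hypothetical, not the patch itself:

{code:title=StorageIdFixupSketch.java}
// Sketch only: walk the per-volume ids, keep well-formed ones the first time
// they are seen, and draw a fresh random id for anything missing, malformed,
// or duplicated. The id list stands in for the per-volume VERSION entries.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.UUID;

public class StorageIdFixupSketch {
  private static final String PREFIX = "DS-";

  static boolean wellFormed(String id) {
    if (id == null || !id.startsWith(PREFIX)) {
      return false;
    }
    try {
      UUID.fromString(id.substring(PREFIX.length()));
      return true;
    } catch (IllegalArgumentException e) {
      return false;
    }
  }

  static List<String> fixup(List<String> storageIds) {
    Set<String> seen = new HashSet<>();
    List<String> result = new ArrayList<>();
    for (String id : storageIds) {
      if (!wellFormed(id) || !seen.add(id)) {
        // missing, malformed, or a duplicate of an earlier volume
        id = PREFIX + UUID.randomUUID();
        seen.add(id);
      }
      result.add(id);
    }
    return result;
  }

  public static void main(String[] args) {
    System.out.println(fixup(Arrays.asList("", "DS-not-a-uuid",
        "DS-123e4567-e89b-12d3-a456-426614174000",
        "DS-123e4567-e89b-12d3-a456-426614174000")));
  }
}
{code}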

> NameNode not handling heartbeats properly after HDFS-2832
> -
>
> Key: HDFS-7575
> URL: https://issues.apache.org/jira/browse/HDFS-7575
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.4.0, 2.5.0, 2.6.0
>Reporter: Lars Francke
>Assignee: Arpit Agarwal
>Priority: Critical
> Attachments: HDFS-7575.01.patch, HDFS-7575.02.patch, 
> HDFS-7575.03.binary.patch, HDFS-7575.03.patch, HDFS-7575.04.binary.patch, 
> HDFS-7575.04.patch, HDFS-7575.05.binary.patch, HDFS-7575.05.patch, 
> testUpgrade22via24GeneratesStorageIDs.tgz, 
> testUpgradeFrom22GeneratesStorageIDs.tgz, 
> testUpgradeFrom24PreservesStorageId.tgz
>
>
> Before HDFS-2832 each DataNode would have a unique storageId which included 
> its IP address. Since HDFS-2832 the DataNodes have a unique storageId per 
> storage directory which is just a random UUID.
> They send reports per storage directory in their heartbeats. This heartbeat 
> is processed on the NameNode in the 
> {{DatanodeDescriptor#updateHeartbeatState}} method. Pre HDFS-2832 this would 
> just store the information per Datanode. After the patch though each DataNode 
> can have multiple different storages so it's stored in a map keyed by the 
> storage Id.
> This works fine for all clusters that have been installed post HDFS-2832 as 
> they get a UUID for their storage Id. So a DN with 8 drives has a map with 8 
> different keys. On each Heartbeat the Map is searched and updated 
> ({{DatanodeStorageInfo storage = storageMap.get(s.getStorageID());}}):
> {code:title=DatanodeStorageInfo}
>   void updateState(StorageReport r) {
> capacity = r.getCapacity();
> dfsUsed = r.getDfsUsed();
> remaining = r.getRemaining();
> blockPoolUsed = r.getBlockPoolUsed();
>   }
> {code}
> On clusters that were upgraded from a pre HDFS-2832 version though the 
> storage Id has not been rewritten (at least not on the four clusters I 
> checked) so each directory will have the exact same storageId. That means 
> there'll be only a single entry in the {{storageMap}} and it'll be 
> overwritten by a random {{StorageReport}} from the DataNode. This can be seen 
> in the {{updateState}} method above. This just assigns the capacity from the 
> received report, instead it should probably sum it up per received heartbeat.
> The Balancer seems to be one of the only things that actually uses this 
> information so it now considers the utilization of a random drive per 
> DataNode for balancing purposes.
> Things get even worse when a drive has been added or replaced as this will 
> now get a new storage Id so there'll be two entries in the storageMap. As new 
> drives are usually empty it skews the balancer's decision in a way that this 
> node will never be considered over-utilized.
> Another problem is that old StorageReports are never removed from the 
> storageMap. So if I replace a drive and it gets a new storage Id the old one 
> will still be in place and used for all calculations by the Balancer until a 
> restart of the NameNode.
> I can try providing a patch that does the following:
> * Instead of using a Map I could just store the array we receive or instead 

[jira] [Updated] (HDFS-7496) Fix FsVolume removal race conditions on the DataNode

2015-01-14 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-7496:

Attachment: HDFS-7496.005.patch

Thanks for the quick review, [~cmccabe]

I have renamed {{ReplicaInPipelineWithVolumeReference}} to {{ReplicaHandler}} 
as you suggested. 



> Fix FsVolume removal race conditions on the DataNode 
> -
>
> Key: HDFS-7496
> URL: https://issues.apache.org/jira/browse/HDFS-7496
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Colin Patrick McCabe
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-7496.000.patch, HDFS-7496.001.patch, 
> HDFS-7496.002.patch, HDFS-7496.003.patch, HDFS-7496.003.patch, 
> HDFS-7496.004.patch, HDFS-7496.005.patch
>
>
> We discussed a few FsVolume removal race conditions on the DataNode in 
> HDFS-7489.  We should figure out a way to make removing an FsVolume safe.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2015-01-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277867#comment-14277867
 ] 

Hadoop QA commented on HDFS-3443:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12692158/HDFS-3443-003.patch
  against trunk revision 6464a89.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestParallelShortCircuitRead
  org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
  org.apache.hadoop.hdfs.server.namenode.TestAllowFormat
  
org.apache.hadoop.hdfs.server.namenode.TestCheckPointForSecurityTokens
  org.apache.hadoop.hdfs.TestBlockStoragePolicy
  org.apache.hadoop.hdfs.server.datanode.TestRefreshNamenodes
  
org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithQJM
  org.apache.hadoop.hdfs.TestEncryptedTransfer
  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotMetrics
  org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
  org.apache.hadoop.hdfs.TestSnapshotCommands
  org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshottableDirListing
  org.apache.hadoop.hdfs.TestRead
  
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestUpdatePipelineWithSnapshots
  org.apache.hadoop.hdfs.TestBlocksScheduledCounter
  
org.apache.hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate
  org.apache.hadoop.cli.TestHDFSCLI
  org.apache.hadoop.hdfs.TestDFSPermission
  org.apache.hadoop.hdfs.server.namenode.TestCheckpoint
  org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
  org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure
  org.apache.hadoop.hdfs.tools.TestGetGroups
  org.apache.hadoop.hdfs.server.namenode.TestStartup
  
org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics
  org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
  org.apache.hadoop.hdfs.server.namenode.TestFSImageWithXAttr
  org.apache.hadoop.hdfs.TestMultiThreadedHflush
  org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
  org.apache.hadoop.hdfs.TestDFSClientFailover
  org.apache.hadoop.hdfs.TestBlockReaderLocal
  org.apache.hadoop.cli.TestCacheAdminCLI
  org.apache.hadoop.hdfs.server.mover.TestMover
  
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestRbwSpaceReservation
  
org.apache.hadoop.hdfs.server.blockmanagement.TestOverReplicatedBlocks
  
org.apache.hadoop.hdfs.server.namenode.ha.TestInitializeSharedEdits
  
org.apache.hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality
  org.apache.hadoop.hdfs.server.namenode.TestNameNodeRecovery
  
org.apache.hadoop.hdfs.server.namenode.ha.TestFailureOfSharedDir
  org.apache.hadoop.hdfs.TestLeaseRecovery2
  org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
  org.apache.hadoop.hdfs.server.namenode.TestFSImageWithAcl
  
org.apache.hadoop.hdfs.server.namenode.TestLargeDirectoryDelete
  org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
  org.apache.hadoop.fs.TestFcHdfsSetUMask
  org.apache.hadoop.hdfs.TestPread
  org.apache.hadoop.hdfs.server.namenode.TestFSEditLogLoader
  
org.apache

[jira] [Commented] (HDFS-7496) Fix FsVolume removal race conditions on the DataNode

2015-01-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277851#comment-14277851
 ] 

Colin Patrick McCabe commented on HDFS-7496:


Thanks, [~eddyxu].  This looks very good.

My only comment is that {{ReplicaInPipelineWithVolumeReference}} is a long 
name.  What if it were called {{ReplicaHandle}} or something?  Also, if this 
object implemented {{Closeable}}, this might help us avoid leaks.  Then in the 
{{BlockReceiver}}, we could hold on to the {{ReplicaHandle}} and call 
{{ReplicaHandle#close}} when necessary.

+1 once that's addressed
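
A hedged sketch of the {{Closeable}} handle idea: the handle pairs the replica with the reference that pins its volume, and closing the handle releases that reference. The class and method names below are illustrative, not the committed HDFS API:

{code:title=ReplicaHandleSketch.java}
// Illustrative only: a handle that owns a volume reference and releases it
// on close(), so try-with-resources prevents reference leaks.
import java.io.Closeable;

public class ReplicaHandleSketch implements Closeable {
  /** Stand-in for the reference that keeps an FsVolume from being removed. */
  interface VolumeReference extends Closeable {
    @Override void close();
  }

  private final Object replicaInPipeline; // stand-in for the replica object
  private final VolumeReference volumeRef;

  ReplicaHandleSketch(Object replicaInPipeline, VolumeReference volumeRef) {
    this.replicaInPipeline = replicaInPipeline;
    this.volumeRef = volumeRef;
  }

  Object getReplica() {
    return replicaInPipeline;
  }

  @Override
  public void close() {
    volumeRef.close(); // lets a pending volume removal proceed
  }

  /** How a receiver might hold the handle for the lifetime of the write. */
  static void receiveBlock(ReplicaHandleSketch handle) {
    try (ReplicaHandleSketch h = handle) {
      // write packets against h.getReplica() here
      h.getReplica();
    }
  }
}
{code}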

> Fix FsVolume removal race conditions on the DataNode 
> -
>
> Key: HDFS-7496
> URL: https://issues.apache.org/jira/browse/HDFS-7496
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Colin Patrick McCabe
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-7496.000.patch, HDFS-7496.001.patch, 
> HDFS-7496.002.patch, HDFS-7496.003.patch, HDFS-7496.003.patch, 
> HDFS-7496.004.patch
>
>
> We discussed a few FsVolume removal race conditions on the DataNode in 
> HDFS-7489.  We should figure out a way to make removing an FsVolume safe.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7339) Allocating and persisting block groups in NameNode

2015-01-14 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-7339:

Summary: Allocating and persisting block groups in NameNode  (was: NameNode 
support for erasure coding block groups)

> Allocating and persisting block groups in NameNode
> --
>
> Key: HDFS-7339
> URL: https://issues.apache.org/jira/browse/HDFS-7339
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-7339-001.patch, HDFS-7339-002.patch, 
> Meta-striping.jpg, NN-stripping.jpg
>
>
> All erasure codec operations center around the concept of _block group_; they 
> are formed in initial encoding and looked up in recoveries and conversions. A 
> lightweight class {{BlockGroup}} is created to record the original and parity 
> blocks in a coding group, as well as a pointer to the codec schema (pluggable 
> codec schemas will be supported in HDFS-7337). With the striping layout, the 
> HDFS client needs to operate on all blocks in a {{BlockGroup}} concurrently. 
> Therefore we propose to extend a file’s inode to switch between _contiguous_ 
> and _striping_ modes, with the current mode recorded in a binary flag. An 
> array of BlockGroups (or BlockGroup IDs) is added, which remains empty for 
> “traditional” HDFS files with contiguous block layout.
> The NameNode creates and maintains {{BlockGroup}} instances through the new 
> {{ECManager}} component; the attached figure has an illustration of the 
> architecture. As a simple example, when a {_Striping+EC_} file is created and 
> written to, it will serve requests from the client to allocate new 
> {{BlockGroups}} and store them under the {{INodeFile}}. In the current phase, 
> {{BlockGroups}} are allocated both in initial online encoding and in the 
> conversion from replication to EC. {{ECManager}} also facilitates the lookup 
> of {{BlockGroup}} information for block recovery work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3689) Add support for variable length block

2015-01-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277832#comment-14277832
 ] 

Hadoop QA commented on HDFS-3689:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12692294/HDFS-3689.004.patch
  against trunk revision d336d13.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-hdfs-project/hadoop-hdfs-nfs:

  org.apache.hadoop.ha.TestZKFailoverControllerStress
  
org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9210//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9210//console

This message is automatically generated.

> Add support for variable length block
> -
>
> Key: HDFS-3689
> URL: https://issues.apache.org/jira/browse/HDFS-3689
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs-client, namenode
>Affects Versions: 3.0.0
>Reporter: Suresh Srinivas
>Assignee: Jing Zhao
> Attachments: HDFS-3689.000.patch, HDFS-3689.001.patch, 
> HDFS-3689.002.patch, HDFS-3689.003.patch, HDFS-3689.003.patch, 
> HDFS-3689.004.patch, HDFS-3689.005.patch
>
>
> Currently HDFS supports fixed length blocks. Supporting variable length block 
> will allow new use cases and features to be built on top of HDFS. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7189) Add trace spans for DFSClient metadata operations

2015-01-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277826#comment-14277826
 ] 

Hadoop QA commented on HDFS-7189:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12692303/HDFS-7189.005.patch
  against trunk revision 446545c.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

  {color:red}-1 javac{color}.  The applied patch generated 1220 javac 
compiler warnings (more than the trunk's current 1206 warnings).

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9211//testReport/
Javac warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9211//artifact/patchprocess/diffJavacWarnings.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9211//console

This message is automatically generated.

> Add trace spans for DFSClient metadata operations
> -
>
> Key: HDFS-7189
> URL: https://issues.apache.org/jira/browse/HDFS-7189
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 2.7.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-7189.001.patch, HDFS-7189.003.patch, 
> HDFS-7189.004.patch, HDFS-7189.005.patch
>
>
> We should add trace spans for DFSClient metadata operations.  For example, 
> {{DFSClient#rename}} should have a trace span, etc. etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7496) Fix FsVolume removal race conditions on the DataNode

2015-01-14 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277820#comment-14277820
 ] 

Lei (Eddy) Xu commented on HDFS-7496:
-

This test failure ({{TestPipelinesFailover}}) is not related, as it also fails 
in trunk.

> Fix FsVolume removal race conditions on the DataNode 
> -
>
> Key: HDFS-7496
> URL: https://issues.apache.org/jira/browse/HDFS-7496
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Colin Patrick McCabe
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-7496.000.patch, HDFS-7496.001.patch, 
> HDFS-7496.002.patch, HDFS-7496.003.patch, HDFS-7496.003.patch, 
> HDFS-7496.004.patch
>
>
> We discussed a few FsVolume removal race conditions on the DataNode in 
> HDFS-7489.  We should figure out a way to make removing an FsVolume safe.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7613) Block placement policy for erasure coding groups

2015-01-14 Thread Zhe Zhang (JIRA)
Zhe Zhang created HDFS-7613:
---

 Summary: Block placement policy for erasure coding groups
 Key: HDFS-7613
 URL: https://issues.apache.org/jira/browse/HDFS-7613
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang


Blocks in an erasure coding group should be placed in different failure domains 
-- different DataNodes at the minimum, and different racks ideally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7575) NameNode not handling heartbeats properly after HDFS-2832

2015-01-14 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277794#comment-14277794
 ] 

Arpit Agarwal commented on HDFS-7575:
-

I prefer a layout version bump per my original patch, if for no other reason 
than the fact that the DataNode upgrade path is complicated enough without 
having to think about OOB metadata changes. In this case the metadata change is 
limited so I'd be okay with making the exception.

> NameNode not handling heartbeats properly after HDFS-2832
> -
>
> Key: HDFS-7575
> URL: https://issues.apache.org/jira/browse/HDFS-7575
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.4.0, 2.5.0, 2.6.0
>Reporter: Lars Francke
>Assignee: Arpit Agarwal
>Priority: Critical
> Attachments: HDFS-7575.01.patch, HDFS-7575.02.patch, 
> HDFS-7575.03.binary.patch, HDFS-7575.03.patch, HDFS-7575.04.binary.patch, 
> HDFS-7575.04.patch, HDFS-7575.05.binary.patch, HDFS-7575.05.patch, 
> testUpgrade22via24GeneratesStorageIDs.tgz, 
> testUpgradeFrom22GeneratesStorageIDs.tgz, 
> testUpgradeFrom24PreservesStorageId.tgz
>
>
> Before HDFS-2832 each DataNode would have a unique storageId which included 
> its IP address. Since HDFS-2832 the DataNodes have a unique storageId per 
> storage directory which is just a random UUID.
> They send reports per storage directory in their heartbeats. This heartbeat 
> is processed on the NameNode in the 
> {{DatanodeDescriptor#updateHeartbeatState}} method. Pre HDFS-2832 this would 
> just store the information per Datanode. After the patch though each DataNode 
> can have multiple different storages so it's stored in a map keyed by the 
> storage Id.
> This works fine for all clusters that have been installed post HDFS-2832 as 
> they get a UUID for their storage Id. So a DN with 8 drives has a map with 8 
> different keys. On each Heartbeat the Map is searched and updated 
> ({{DatanodeStorageInfo storage = storageMap.get(s.getStorageID());}}):
> {code:title=DatanodeStorageInfo}
>   void updateState(StorageReport r) {
> capacity = r.getCapacity();
> dfsUsed = r.getDfsUsed();
> remaining = r.getRemaining();
> blockPoolUsed = r.getBlockPoolUsed();
>   }
> {code}
> On clusters that were upgraded from a pre HDFS-2832 version though the 
> storage Id has not been rewritten (at least not on the four clusters I 
> checked) so each directory will have the exact same storageId. That means 
> there'll be only a single entry in the {{storageMap}} and it'll be 
> overwritten by a random {{StorageReport}} from the DataNode. This can be seen 
> in the {{updateState}} method above. This just assigns the capacity from the 
> received report, instead it should probably sum it up per received heartbeat.
> The Balancer seems to be one of the only things that actually uses this 
> information so it now considers the utilization of a random drive per 
> DataNode for balancing purposes.
> Things get even worse when a drive has been added or replaced as this will 
> now get a new storage Id so there'll be two entries in the storageMap. As new 
> drives are usually empty it skews the balancer's decision in a way that this 
> node will never be considered over-utilized.
> Another problem is that old StorageReports are never removed from the 
> storageMap. So if I replace a drive and it gets a new storage Id the old one 
> will still be in place and used for all calculations by the Balancer until a 
> restart of the NameNode.
> I can try providing a patch that does the following:
> * Instead of using a Map I could just store the array we receive or instead 
> of storing an array sum up the values for reports with the same Id
> * On each heartbeat clear the map (so we know we have up to date information)
> Does that sound sensible?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7575) NameNode not handling heartbeats properly after HDFS-2832

2015-01-14 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277795#comment-14277795
 ] 

Tsz Wo Nicholas Sze commented on HDFS-7575:
---

{quote}
>BTW, UUID.randomUUID isn't guaranteed to return a unique id. It's highly 
> improbable, but possible, although more likely due to older storages, user 
> copying a storage, etc. Although the storage ids are unique after the 
> "upgrade", if a disk is moved from one node to another, then a collision is 
> possible. Hence another reason why I feel explicitly checking for collisions 
> at startup should always be done.

UUIDs are designed to be globally unique with a high probability when generated 
by trusted processes, even when the volume of generated UUIDs is very high 
(which is certainly not the case for storage IDs). The probability of a storageID 
collision in normal operation is vanishingly small.
https://en.wikipedia.org/wiki/Universally_unique_identifier#Random_UUID_probability_of_duplicates
{quote}
We usually compare the probability of collision with hardware failure 
probability, or using the famous cosmic ray argument 
(http://stackoverflow.com/questions/2580933/cosmic-rays-what-is-the-probability-they-will-affect-a-program),
 since we can never do better than that.

{quote}
... Up until HDFS-4645, HDFS used randomly generated block IDs drawn from a far 
smaller space -- 2^64 -- and we never had a problem. ...
{quote}
We did have a collision check for random block IDs.
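
For reference, a tiny standalone sketch of that kind of collision check at generation time; the {{isInUse}} predicate stands in for a lookup against the ids already known to the NameNode:

{code:title=CollisionCheckSketch.java}
// Small sketch of "check for collisions when generating random ids":
// keep drawing ids until one is not already in use.
import java.util.HashSet;
import java.util.Set;
import java.util.UUID;
import java.util.function.Predicate;

public class CollisionCheckSketch {
  /** Keep drawing random ids until one is not already in use. */
  static String newUniqueId(Predicate<String> isInUse) {
    String id;
    do {
      id = UUID.randomUUID().toString();
    } while (isInUse.test(id));
    return id;
  }

  public static void main(String[] args) {
    Set<String> existing = new HashSet<>();
    for (int i = 0; i < 3; i++) {
      existing.add(newUniqueId(existing::contains));
    }
    System.out.println(existing);
  }
}
{code}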

> NameNode not handling heartbeats properly after HDFS-2832
> -
>
> Key: HDFS-7575
> URL: https://issues.apache.org/jira/browse/HDFS-7575
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.4.0, 2.5.0, 2.6.0
>Reporter: Lars Francke
>Assignee: Arpit Agarwal
>Priority: Critical
> Attachments: HDFS-7575.01.patch, HDFS-7575.02.patch, 
> HDFS-7575.03.binary.patch, HDFS-7575.03.patch, HDFS-7575.04.binary.patch, 
> HDFS-7575.04.patch, HDFS-7575.05.binary.patch, HDFS-7575.05.patch, 
> testUpgrade22via24GeneratesStorageIDs.tgz, 
> testUpgradeFrom22GeneratesStorageIDs.tgz, 
> testUpgradeFrom24PreservesStorageId.tgz
>
>
> Before HDFS-2832 each DataNode would have a unique storageId which included 
> its IP address. Since HDFS-2832 the DataNodes have a unique storageId per 
> storage directory which is just a random UUID.
> They send reports per storage directory in their heartbeats. This heartbeat 
> is processed on the NameNode in the 
> {{DatanodeDescriptor#updateHeartbeatState}} method. Pre HDFS-2832 this would 
> just store the information per Datanode. After the patch though each DataNode 
> can have multiple different storages so it's stored in a map keyed by the 
> storage Id.
> This works fine for all clusters that have been installed post HDFS-2832 as 
> they get a UUID for their storage Id. So a DN with 8 drives has a map with 8 
> different keys. On each Heartbeat the Map is searched and updated 
> ({{DatanodeStorageInfo storage = storageMap.get(s.getStorageID());}}):
> {code:title=DatanodeStorageInfo}
>   void updateState(StorageReport r) {
> capacity = r.getCapacity();
> dfsUsed = r.getDfsUsed();
> remaining = r.getRemaining();
> blockPoolUsed = r.getBlockPoolUsed();
>   }
> {code}
> On clusters that were upgraded from a pre HDFS-2832 version though the 
> storage Id has not been rewritten (at least not on the four clusters I 
> checked) so each directory will have the exact same storageId. That means 
> there'll be only a single entry in the {{storageMap}} and it'll be 
> overwritten by a random {{StorageReport}} from the DataNode. This can be seen 
> in the {{updateState}} method above. This just assigns the capacity from the 
> received report, instead it should probably sum it up per received heartbeat.
> The Balancer seems to be one of the only things that actually uses this 
> information so it now considers the utilization of a random drive per 
> DataNode for balancing purposes.
> Things get even worse when a drive has been added or replaced as this will 
> now get a new storage Id so there'll be two entries in the storageMap. As new 
> drives are usually empty it skews the balancer's decision in a way that this 
> node will never be considered over-utilized.
> Another problem is that old StorageReports are never removed from the 
> storageMap. So if I replace a drive and it gets a new storage Id the old one 
> will still be in place and used for all calculations by the Balancer until a 
> restart of the NameNode.
> I can try providing a patch that does the following:
> * Instead of using a Map I could just store the array we receive or instead 
> of storing an array sum up the values for reports with the same Id
> * On each heartbeat clear the map (so we know we have up to 

[jira] [Commented] (HDFS-7581) HDFS documentation needs updating post-shell rewrite

2015-01-14 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277790#comment-14277790
 ] 

Steve Loughran commented on HDFS-7581:
--

+1

> HDFS documentation needs updating post-shell rewrite
> 
>
> Key: HDFS-7581
> URL: https://issues.apache.org/jira/browse/HDFS-7581
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: HDFS-7581-01.patch, HDFS-7581-02.patch, HDFS-7581.patch
>
>
> After HADOOP-9902, some of the HDFS documentation is out of date.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7612) TestOfflineEditsViewer.testStored() uses incorrect default value for cacheDir

2015-01-14 Thread Konstantin Shvachko (JIRA)
Konstantin Shvachko created HDFS-7612:
-

 Summary: TestOfflineEditsViewer.testStored() uses incorrect 
default value for cacheDir
 Key: HDFS-7612
 URL: https://issues.apache.org/jira/browse/HDFS-7612
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.6.0
Reporter: Konstantin Shvachko


{code}
final String cacheDir = System.getProperty("test.cache.data",
"build/test/cache");
{code}
results in
{{FileNotFoundException: build/test/cache/editsStoredParsed.xml (No such file 
or directory)}}
when {{test.cache.data}} is not set.
I can see this failing while running in Eclipse.
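
For illustration only (not the fix eventually committed), one way to make the default robust when the property is unset is to resolve it to a directory that is created on demand; the fallback path below is arbitrary:

{code:title=CacheDirDefaultSketch.java}
// Hedged sketch: fall back to a temp-dir based cache directory and create it
// if absent, so running without test.cache.data (e.g. in Eclipse) still works.
import java.io.File;

public class CacheDirDefaultSketch {
  static File resolveCacheDir() {
    String cacheDir = System.getProperty("test.cache.data",
        new File(System.getProperty("java.io.tmpdir"), "test/cache").getPath());
    File dir = new File(cacheDir);
    if (!dir.isDirectory() && !dir.mkdirs()) {
      throw new IllegalStateException("Cannot create cache dir " + dir);
    }
    return dir;
  }

  public static void main(String[] args) {
    System.out.println(resolveCacheDir());
  }
}
{code}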



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5782) BlockListAsLongs should take lists of Replicas rather than concrete classes

2015-01-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277759#comment-14277759
 ] 

Hudson commented on HDFS-5782:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6862 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6862/])
HDFS-5782. Change BlockListAsLongs constructor to take Replica as parameter 
type instead of concrete classes Block and ReplicaInfo.  Contributed by David 
Powell and Joe Pallas (szetszwo: rev 6464a8929a3623e49155febf2f9817253f9a1de3)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockHasMultipleReplicasOnSameDN.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/BlockListAsLongs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java


> BlockListAsLongs should take lists of Replicas rather than concrete classes
> ---
>
> Key: HDFS-5782
> URL: https://issues.apache.org/jira/browse/HDFS-5782
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: 3.0.0
>Reporter: David Powell
>Assignee: Joe Pallas
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-5782.patch, HDFS-5782.patch
>
>
> From HDFS-5194:
> {quote}
> BlockListAsLongs's constructor takes a list of Blocks and a list of 
> ReplicaInfos.  On the surface, the former is mildly irritating because it is 
> a concrete class, while the latter is a greater concern due to being a 
> File-based implementation of Replica.
> On deeper inspection, BlockListAsLongs passes members of both to an internal 
> method that accepts just Blocks, which conditionally casts them *back* to 
> ReplicaInfos (this cast only happens to the latter, though this isn't 
> immediately obvious to the reader).
> Conveniently, all methods called on these objects are found in the Replica 
> interface, and all functional (i.e. non-test) consumers of this interface 
> pass in Replica subclasses.  If this constructor took Lists of Replicas 
> instead, it would be more generally useful and its implementation would be 
> cleaner as well.
> {quote}
> Fixing this indeed makes the business end of BlockListAsLongs cleaner while 
> requiring no changes to FsDatasetImpl.  As suggested by the above 
> description, though, the HDFS tests use BlockListAsLongs differently from the 
> production code -- they pretty much universally provide a list of actual 
> Blocks.  To handle this:
> - In the case of SimulatedFSDataset, providing a list of Replicas is actually 
> less work.
> - In the case of NNThroughputBenchmark, rewriting to use Replicas is fairly 
> invasive.  Instead, the patch creates a second constructor in 
> BlockListAsLongs specifically for the use of NNThroughputBenchmark.  It turns 
> the stomach a little, but is clearer and requires less code than the 
> alternatives (and isn't without precedent).  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2015-01-14 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-3443:
--
Status: Patch Available  (was: Open)

> Unable to catch up edits during standby to active switch due to NPE
> ---
>
> Key: HDFS-3443
> URL: https://issues.apache.org/jira/browse/HDFS-3443
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: auto-failover
>Reporter: suja s
>Assignee: amith
> Attachments: HDFS-3443-003.patch, HDFS-3443_1.patch, HDFS-3443_1.patch
>
>
> Start NN
> Let NN standby services be started.
> Before the editLogTailer is initialised start ZKFC and allow the 
> activeservices start to proceed further.
> Here editLogTailer.catchupDuringFailover() will throw NPE.
> void startActiveServices() throws IOException {
> LOG.info("Starting services required for active state");
> writeLock();
> try {
>   FSEditLog editLog = dir.fsImage.getEditLog();
>   
>   if (!editLog.isOpenForWrite()) {
> // During startup, we're already open for write during initialization.
> editLog.initJournalsForWrite();
> // May need to recover
> editLog.recoverUnclosedStreams();
> 
> LOG.info("Catching up to latest edits from old active before " +
> "taking over writer role in edits logs.");
> editLogTailer.catchupDuringFailover();
> {noformat}
> 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
> Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
> XX.XX.XX.55:58003: output error
> 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
> from XX.XX.XX.55:58004: error: java.lang.NullPointerException
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
>   at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
>   at 
> org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
> 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 9 on 8020 caught an exception
> java.nio.channels.ClosedChannelException
>   at 
> sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>   at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
>   at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
>   at 
> org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
>   at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:994)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1738)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7496) Fix FsVolume removal race conditions on the DataNode

2015-01-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277755#comment-14277755
 ] 

Hadoop QA commented on HDFS-7496:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12692285/HDFS-7496.004.patch
  against trunk revision d336d13.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9209//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9209//console

This message is automatically generated.

> Fix FsVolume removal race conditions on the DataNode 
> -
>
> Key: HDFS-7496
> URL: https://issues.apache.org/jira/browse/HDFS-7496
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Colin Patrick McCabe
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-7496.000.patch, HDFS-7496.001.patch, 
> HDFS-7496.002.patch, HDFS-7496.003.patch, HDFS-7496.003.patch, 
> HDFS-7496.004.patch
>
>
> We discussed a few FsVolume removal race conditions on the DataNode in 
> HDFS-7489.  We should figure out a way to make removing an FsVolume safe.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-5782) BlockListAsLongs should take lists of Replicas rather than concrete classes

2015-01-14 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-5782:
--
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

I have committed this.  Thanks, David and Joe!

> BlockListAsLongs should take lists of Replicas rather than concrete classes
> ---
>
> Key: HDFS-5782
> URL: https://issues.apache.org/jira/browse/HDFS-5782
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: 3.0.0
>Reporter: David Powell
>Assignee: Joe Pallas
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-5782.patch, HDFS-5782.patch
>
>
> From HDFS-5194:
> {quote}
> BlockListAsLongs's constructor takes a list of Blocks and a list of 
> ReplicaInfos.  On the surface, the former is mildly irritating because it is 
> a concrete class, while the latter is a greater concern due to being a 
> File-based implementation of Replica.
> On deeper inspection, BlockListAsLongs passes members of both to an internal 
> method that accepts just Blocks, which conditionally casts them *back* to 
> ReplicaInfos (this cast only happens to the latter, though this isn't 
> immediately obvious to the reader).
> Conveniently, all methods called on these objects are found in the Replica 
> interface, and all functional (i.e. non-test) consumers of this interface 
> pass in Replica subclasses.  If this constructor took Lists of Replicas 
> instead, it would be more generally useful and its implementation would be 
> cleaner as well.
> {quote}
> Fixing this indeed makes the business end of BlockListAsLongs cleaner while 
> requiring no changes to FsDatasetImpl.  As suggested by the above 
> description, though, the HDFS tests use BlockListAsLongs differently from the 
> production code -- they pretty much universally provide a list of actual 
> Blocks.  To handle this:
> - In the case of SimulatedFSDataset, providing a list of Replicas is actually 
> less work.
> - In the case of NNThroughputBenchmark, rewriting to use Replicas is fairly 
> invasive.  Instead, the patch creates a second constructor in 
> BlockListAsLongs specifically for the use of NNThroughputBenchmark.  It turns 
> the stomach a little, but is clearer and requires less code than the 
> alternatives (and isn't without precedent).  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-3689) Add support for variable length block

2015-01-14 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-3689:

Attachment: HDFS-3689.005.patch

After an offline discussion with [~sanjay.radia] and [~szetszwo], the 005 patch 
still keeps the restriction that the source files and the target file should be 
in the same directory. In this way we do not need to update the quota.
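
Since the restriction applies to the concat path in this feature, here is a minimal client-side sketch of a same-directory concat; the paths and class name below are made up for illustration, not taken from the patch:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ConcatSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Target and sources live in the same directory, per the restriction kept
    // by the 005 patch, so no quota update is needed.
    Path target = new Path("/data/logs/part-merged");
    Path[] sources = {
        new Path("/data/logs/part-00001"),
        new Path("/data/logs/part-00002")
    };
    fs.concat(target, sources);  // sources are appended to target and removed
  }
}
{code}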

> Add support for variable length block
> -
>
> Key: HDFS-3689
> URL: https://issues.apache.org/jira/browse/HDFS-3689
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs-client, namenode
>Affects Versions: 3.0.0
>Reporter: Suresh Srinivas
>Assignee: Jing Zhao
> Attachments: HDFS-3689.000.patch, HDFS-3689.001.patch, 
> HDFS-3689.002.patch, HDFS-3689.003.patch, HDFS-3689.003.patch, 
> HDFS-3689.004.patch, HDFS-3689.005.patch
>
>
> Currently HDFS supports fixed length blocks. Supporting variable length block 
> will allow new use cases and features to be built on top of HDFS. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7575) NameNode not handling heartbeats properly after HDFS-2832

2015-01-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277749#comment-14277749
 ] 

Colin Patrick McCabe commented on HDFS-7575:


bq. Suresh wrote: I agree with Daryn Sharp that there is no need to change the 
layout here. Layout change is only necessary if the two layouts are not 
compatible and the downgrade does not work from newer release to older. Is that 
the case here?

The new layout used in HDFS-6482 is backwards compatible, in the sense that 
older versions of hadoop can run with it.  HDFS-6482 just added the invariant 
that block ID uniquely determines which subdir a block is in, but subdirs 
already existed.  Does that mean we shouldn't have changed the layout version 
for HDFS-6482?  I think the answer is clear.

bq. Daryn wrote: Since we know duplicate storage ids are bad, I think the 
correct logic is to always sanity check the storage ids at startup. If there 
are collisions, then the storage should be updated. Rollback should not restore 
a bug by reverting the storage id to a dup.

I'm surprised to hear you say that rollback should not be an option.  It seems 
like the conservative thing to do here is to allow the user to restore to the 
VERSION file.  Obviously we believe there will be no problems.  But we always 
believe that, or else we wouldn't have made the change.  Sometimes there are 
problems.

bq. BTW, UUID.randomUUID isn't guaranteed to return a unique id. It's highly 
improbable, but possible, although more likely due to older storages, user 
copying a storage, etc.

This is really not a good argument.  Collisions in 128-bit space are extremely 
unlikely.  You will never see one in your lifetime.  Up until HDFS-4645, HDFS 
used randomly generated block IDs drawn from a far smaller space-- 2^64 -- and 
we never had a problem.  Phrases like "billions and billions" and "total number 
of grains of sand in the world" don't begin to approach the size of 2^128.
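
For a rough sense of the numbers behind this argument, a back-of-envelope sketch of the birthday bound for random 128-bit IDs; the ID count is an arbitrary, deliberately huge assumption:
{code}
public class CollisionBoundSketch {
  public static void main(String[] args) {
    // Birthday approximation: for n random 128-bit IDs, P(collision) ~= n^2 / 2^129.
    double n = 1e9;  // far more storage IDs than any real deployment
    double p = (n * n) / Math.pow(2, 129);
    System.out.printf("approx collision probability for %.0e IDs: %.2e%n", n, p);
    // prints roughly 1.5e-21
  }
}
{code}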

I think it's frustrating for storage IDs to change without warning just because 
HDFS was restarted.  It will make diagnosing problems by reading log files 
harder because storageIDs might morph at any time.  It also sets a bad 
precedent of not allowing downgrade and modifying VERSION files "on the fly" 
during startup.

> NameNode not handling heartbeats properly after HDFS-2832
> -
>
> Key: HDFS-7575
> URL: https://issues.apache.org/jira/browse/HDFS-7575
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.4.0, 2.5.0, 2.6.0
>Reporter: Lars Francke
>Assignee: Arpit Agarwal
>Priority: Critical
> Attachments: HDFS-7575.01.patch, HDFS-7575.02.patch, 
> HDFS-7575.03.binary.patch, HDFS-7575.03.patch, HDFS-7575.04.binary.patch, 
> HDFS-7575.04.patch, HDFS-7575.05.binary.patch, HDFS-7575.05.patch, 
> testUpgrade22via24GeneratesStorageIDs.tgz, 
> testUpgradeFrom22GeneratesStorageIDs.tgz, 
> testUpgradeFrom24PreservesStorageId.tgz
>
>
> Before HDFS-2832 each DataNode would have a unique storageId which included 
> its IP address. Since HDFS-2832 the DataNodes have a unique storageId per 
> storage directory which is just a random UUID.
> They send reports per storage directory in their heartbeats. This heartbeat 
> is processed on the NameNode in the 
> {{DatanodeDescriptor#updateHeartbeatState}} method. Pre HDFS-2832 this would 
> just store the information per Datanode. After the patch though each DataNode 
> can have multiple different storages so it's stored in a map keyed by the 
> storage Id.
> This works fine for all clusters that have been installed post HDFS-2832 as 
> they get a UUID for their storage Id. So a DN with 8 drives has a map with 8 
> different keys. On each Heartbeat the Map is searched and updated 
> ({{DatanodeStorageInfo storage = storageMap.get(s.getStorageID());}}):
> {code:title=DatanodeStorageInfo}
>   void updateState(StorageReport r) {
> capacity = r.getCapacity();
> dfsUsed = r.getDfsUsed();
> remaining = r.getRemaining();
> blockPoolUsed = r.getBlockPoolUsed();
>   }
> {code}
> On clusters that were upgraded from a pre HDFS-2832 version though the 
> storage Id has not been rewritten (at least not on the four clusters I 
> checked) so each directory will have the exact same storageId. That means 
> there'll be only a single entry in the {{storageMap}} and it'll be 
> overwritten by a random {{StorageReport}} from the DataNode. This can be seen 
> in the {{updateState}} method above. This just assigns the capacity from the 
> received report, instead it should probably sum it up per received heartbeat.
> The Balancer seems to be one of the only things that actually uses this 
> information so it now considers the utilization of a random drive per 
> DataNode for balancing purposes.
> Things 

[jira] [Updated] (HDFS-7610) Should use StorageDirectory.getCurrentDIr() to construct FsVolumeImpl

2015-01-14 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-7610:

Status: Patch Available  (was: Open)

> Should use StorageDirectory.getCurrentDIr() to construct FsVolumeImpl
> -
>
> Key: HDFS-7610
> URL: https://issues.apache.org/jira/browse/HDFS-7610
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.6.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-7610.000.patch
>
>
> In the hot swap feature, {{FsDatasetImpl#addVolume}} uses the base volume dir 
> (e.g. "{{/foo/data0}}", instead of volume's current dir 
> "{{/foo/data/current}}" to construct {{FsVolumeImpl}}. As a result, DataNode 
> can not remove this newly added volume, because its 
> {{FsVolumeImpl#getBasePath}} returns "{{/foo}}".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7610) Should use StorageDirectory.getCurrentDIr() to construct FsVolumeImpl

2015-01-14 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-7610:

Attachment: HDFS-7610.000.patch

This patch changes:

1. Use {{StorageDirectory.getCurrentDir}} to construct {{FsVolumeImpl}} when 
adding the volume dynamically.
2. Changed removeVolume() to compare volumes using {{File#getCanonicalFile()}} (see the sketch below). 
3. Added a test case to verify that a newly added volume can be located and 
removed.
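
A minimal sketch of the comparison style described in item 2, outside of any DataNode code; the paths and helper below are hypothetical:
{code}
import java.io.File;
import java.io.IOException;

public class VolumePathSketch {
  // Resolve both paths to canonical form before comparing, so different
  // spellings of the same directory are treated as the same volume.
  static boolean sameVolume(String configuredDir, String reportedDir) throws IOException {
    return new File(configuredDir).getCanonicalFile()
        .equals(new File(reportedDir).getCanonicalFile());
  }

  public static void main(String[] args) throws IOException {
    System.out.println(sameVolume("/foo/data0", "/foo/./data0"));  // true
  }
}
{code}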

> Should use StorageDirectory.getCurrentDIr() to construct FsVolumeImpl
> -
>
> Key: HDFS-7610
> URL: https://issues.apache.org/jira/browse/HDFS-7610
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.6.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-7610.000.patch
>
>
> In the hot swap feature, {{FsDatasetImpl#addVolume}} uses the base volume dir 
> (e.g. "{{/foo/data0}}", instead of volume's current dir 
> "{{/foo/data/current}}" to construct {{FsVolumeImpl}}. As a result, DataNode 
> can not remove this newly added volume, because its 
> {{FsVolumeImpl#getBasePath}} returns "{{/foo}}".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7611) TestFileTruncate.testTruncateEditLogLoad times out waiting for Mini HDFS Cluster to start

2015-01-14 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-7611:
--
Attachment: testTruncateEditLogLoad.log

Attaching the log from the failed run, since it is not easy to catch.
I see only one suspicious thing: an EOFException while sending a heartbeat, but I 
didn't look deeper than that.

> TestFileTruncate.testTruncateEditLogLoad times out waiting for Mini HDFS 
> Cluster to start
> -
>
> Key: HDFS-7611
> URL: https://issues.apache.org/jira/browse/HDFS-7611
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Konstantin Shvachko
> Attachments: testTruncateEditLogLoad.log
>
>
> I've seen it failing on Jenkins a couple of times. Somehow the cluster is not 
> coming ready after the NN restart.
> Not sure if it is truncate-specific, as I've seen the same behaviour with other 
> tests that restart the NameNode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7575) NameNode not handling heartbeats properly after HDFS-2832

2015-01-14 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-7575:

Attachment: HDFS-7575.05.patch
HDFS-7575.05.binary.patch

OK, the latest patch does not require a layout version upgrade.

bq. BTW, UUID.randomUUID isn't guaranteed to return a unique id. It's highly 
improbable, but possible, although more likely due to older storages, user 
copying a storage, etc. Although the storage ids are unique after the 
"upgrade", if a disk is moved from one node to another, then a collision is 
possible. Hence another reason why I feel explicitly checking for collisions at 
startup should always be done.
UUIDs are designed to be globally unique with high probability when generated 
by trusted processes, even when the volume of generated UUIDs is very high, 
which is certainly not the case for storage IDs. The probability of a storageID 
collision in normal operation is vanishingly small.
https://en.wikipedia.org/wiki/Universally_unique_identifier#Random_UUID_probability_of_duplicates
If you are still concerned about UUID collisions, we can handle that in a separate 
Jira. This fix is specific to clusters previously upgraded from Hadoop 2.2, so that 
they behave like clusters freshly installed from 2.4 or later.
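
For completeness, a generic sketch of the kind of startup sanity check being discussed; the input list and the regeneration step are illustrative assumptions, not DataNode code:
{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.UUID;

public class StorageIdCheckSketch {
  // Keep the first occurrence of each storage ID; replace any duplicate with a
  // freshly generated UUID-based ID.
  static List<String> dedupe(List<String> storageIds) {
    Set<String> seen = new HashSet<>();
    List<String> result = new ArrayList<>();
    for (String id : storageIds) {
      result.add(seen.add(id) ? id : "DS-" + UUID.randomUUID());
    }
    return result;
  }

  public static void main(String[] args) {
    System.out.println(dedupe(Arrays.asList("DS-1", "DS-1", "DS-2")));
  }
}
{code}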

> NameNode not handling heartbeats properly after HDFS-2832
> -
>
> Key: HDFS-7575
> URL: https://issues.apache.org/jira/browse/HDFS-7575
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.4.0, 2.5.0, 2.6.0
>Reporter: Lars Francke
>Assignee: Arpit Agarwal
>Priority: Critical
> Attachments: HDFS-7575.01.patch, HDFS-7575.02.patch, 
> HDFS-7575.03.binary.patch, HDFS-7575.03.patch, HDFS-7575.04.binary.patch, 
> HDFS-7575.04.patch, HDFS-7575.05.binary.patch, HDFS-7575.05.patch, 
> testUpgrade22via24GeneratesStorageIDs.tgz, 
> testUpgradeFrom22GeneratesStorageIDs.tgz, 
> testUpgradeFrom24PreservesStorageId.tgz
>
>
> Before HDFS-2832 each DataNode would have a unique storageId which included 
> its IP address. Since HDFS-2832 the DataNodes have a unique storageId per 
> storage directory which is just a random UUID.
> They send reports per storage directory in their heartbeats. This heartbeat 
> is processed on the NameNode in the 
> {{DatanodeDescriptor#updateHeartbeatState}} method. Pre HDFS-2832 this would 
> just store the information per Datanode. After the patch though each DataNode 
> can have multiple different storages so it's stored in a map keyed by the 
> storage Id.
> This works fine for all clusters that have been installed post HDFS-2832 as 
> they get a UUID for their storage Id. So a DN with 8 drives has a map with 8 
> different keys. On each Heartbeat the Map is searched and updated 
> ({{DatanodeStorageInfo storage = storageMap.get(s.getStorageID());}}):
> {code:title=DatanodeStorageInfo}
>   void updateState(StorageReport r) {
> capacity = r.getCapacity();
> dfsUsed = r.getDfsUsed();
> remaining = r.getRemaining();
> blockPoolUsed = r.getBlockPoolUsed();
>   }
> {code}
> On clusters that were upgraded from a pre HDFS-2832 version though the 
> storage Id has not been rewritten (at least not on the four clusters I 
> checked) so each directory will have the exact same storageId. That means 
> there'll be only a single entry in the {{storageMap}} and it'll be 
> overwritten by a random {{StorageReport}} from the DataNode. This can be seen 
> in the {{updateState}} method above. This just assigns the capacity from the 
> received report, instead it should probably sum it up per received heartbeat.
> The Balancer seems to be one of the only things that actually uses this 
> information so it now considers the utilization of a random drive per 
> DataNode for balancing purposes.
> Things get even worse when a drive has been added or replaced as this will 
> now get a new storage Id so there'll be two entries in the storageMap. As new 
> drives are usually empty, it skews the balancer's decision in a way that this 
> node will never be considered over-utilized.
> Another problem is that old StorageReports are never removed from the 
> storageMap. So if I replace a drive and it gets a new storage Id the old one 
> will still be in place and used for all calculations by the Balancer until a 
> restart of the NameNode.
> I can try providing a patch that does the following:
> * Instead of using a Map I could just store the array we receive or instead 
> of storing an array sum up the values for reports with the same Id
> * On each heartbeat clear the map (so we know we have up to date information)
> Does that sound sensible?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7611) TestFileTruncate.testTruncateEditLogLoad times out waiting for Mini HDFS Cluster to start

2015-01-14 Thread Konstantin Shvachko (JIRA)
Konstantin Shvachko created HDFS-7611:
-

 Summary: TestFileTruncate.testTruncateEditLogLoad times out 
waiting for Mini HDFS Cluster to start
 Key: HDFS-7611
 URL: https://issues.apache.org/jira/browse/HDFS-7611
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Konstantin Shvachko


I've seen it failing on Jenkins a couple of times. Somehow the cluster is not 
coming ready after the NN restart.
Not sure if it is truncate-specific, as I've seen the same behaviour with other 
tests that restart the NameNode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7606) Missing null check in INodeFile#getBlocks()

2015-01-14 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277705#comment-14277705
 ] 

Ted Yu commented on HDFS-7606:
--

I was looking at the getBlocks() method. In particular, line 435:
{code}
snapshotBlocks = getDiffs().findLaterSnapshotBlocks(diff.getSnapshotId());
{code}

> Missing null check in INodeFile#getBlocks()
> ---
>
> Key: HDFS-7606
> URL: https://issues.apache.org/jira/browse/HDFS-7606
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
> BlockInfo[] snapshotBlocks = diff == null ? getBlocks() : 
> diff.getBlocks();
> if(snapshotBlocks != null)
>   return snapshotBlocks;
> // Blocks are not in the current snapshot
> // Find next snapshot with blocks present or return current file blocks
> snapshotBlocks = getDiffs().findLaterSnapshotBlocks(diff.getSnapshotId());
> {code}
> If diff is null and snapshotBlocks is null, NullPointerException would result 
> from the call to diff.getSnapshotId().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7606) Missing null check in INodeFile#getBlocks()

2015-01-14 Thread Chen He (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277697#comment-14277697
 ] 

Chen He commented on HDFS-7606:
---

Do you mean that the last element itself, returned by diffs.getLast(), is null?

> Missing null check in INodeFile#getBlocks()
> ---
>
> Key: HDFS-7606
> URL: https://issues.apache.org/jira/browse/HDFS-7606
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
> BlockInfo[] snapshotBlocks = diff == null ? getBlocks() : 
> diff.getBlocks();
> if(snapshotBlocks != null)
>   return snapshotBlocks;
> // Blocks are not in the current snapshot
> // Find next snapshot with blocks present or return current file blocks
> snapshotBlocks = getDiffs().findLaterSnapshotBlocks(diff.getSnapshotId());
> {code}
> If diff is null and snapshotBlocks is null, NullPointerException would result 
> from the call to diff.getSnapshotId().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7067) ClassCastException while using a key created by keytool to create encryption zone.

2015-01-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277696#comment-14277696
 ] 

Colin Patrick McCabe commented on HDFS-7067:


We should now be able to attach git diffs with binaries and have them work.  
Anyway, +1, I will commit shortly.

> ClassCastException while using a key created by keytool to create encryption 
> zone. 
> ---
>
> Key: HDFS-7067
> URL: https://issues.apache.org/jira/browse/HDFS-7067
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.6.0
>Reporter: Yi Yao
>Assignee: Charles Lamb
>Priority: Minor
> Attachments: HDFS-7067.001.patch, HDFS-7067.002.patch, 
> hdfs7067.keystore
>
>
> I'm using transparent encryption. If I create a key for the KMS keystore via 
> keytool and use the key to create an encryption zone, I get a 
> ClassCastException rather than an exception with a decent error message. I know 
> we should use 'hadoop key create' to create a key. It's better to provide a 
> decent error message to remind the user to use the right way to create a KMS key.
> [LOG]
> ERROR[user=hdfs] Method:'GET' Exception:'java.lang.ClassCastException: 
> javax.crypto.spec.SecretKeySpec cannot be cast to 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider$KeyMetadata'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7411) Refactor and improve decommissioning logic into DecommissionManager

2015-01-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277694#comment-14277694
 ] 

Colin Patrick McCabe commented on HDFS-7411:


bq. This is actually a feature, not a bug :) Having our own datastructure lets us 
speed up decom by only checking blocks that are still insufficiently 
replicated. We prune out the sufficient ones each iteration. The memory 
overhead here should be pretty small since it's just an 8B reference per block, 
so with 1 million blocks this will be 8MB for a single node, or maybe 160MB for 
a full rack. Nodes are typically smaller than this, so these are 
conservative estimates, and large decoms aren't that common.

That's a fair point.  It's too bad we can't use the existing list for this, but 
it's already being re-ordered in the block report processing code, for a 
different purpose.  I agree that it's fine as-is.

bq. On thinking about it I agree that just using a new config option is fine, 
but I'd prefer to define the DecomManager in terms of both an interval and an 
amount of work, rather than a rate. This is more powerful, and more in-line 
with the existing config. Are you okay with a new blocks.per.interval config?

Why is blocks.per.interval "more powerful" than blocks per minute?  It just 
seems annoying to have to do the mental math to figure out what to configure to 
get a certain blocks per minute going.  Also, the fact that "intervals" even 
exist is an implementation detail... you could easily imagine an 
event-triggered version that didn't do periodic polling.  I guess I don't feel 
strongly about this, but I'd like to understand the rationale more.

bq. I agree that it can lead to hangs. At a minimum, I'll add a "0 means no 
limit" config, and maybe we can set that by default. I think that NNs should 
really have enough heap headroom to handle the 10-100 MB of memory for 
this, it's peanuts compared to the 10s of GBs that are quite typical.

Makes sense.

> Refactor and improve decommissioning logic into DecommissionManager
> ---
>
> Key: HDFS-7411
> URL: https://issues.apache.org/jira/browse/HDFS-7411
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.5.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hdfs-7411.001.patch, hdfs-7411.002.patch, 
> hdfs-7411.003.patch, hdfs-7411.004.patch, hdfs-7411.005.patch, 
> hdfs-7411.006.patch
>
>
> Would be nice to split out decommission logic from DatanodeManager to 
> DecommissionManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7606) Missing null check in INodeFile#getBlocks()

2015-01-14 Thread Chen He (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277682#comment-14277682
 ] 

Chen He commented on HDFS-7606:
---

Hi [~te...@apache.org], please correct me if I am wrong. There is a condition 
check in INodeFile.computeContentSummary() before "diffs.getLast()" is called:
{code}
final int n = diffs.asList().size();
if (n > 0 && sf.isCurrentFileDeleted()) {
  counts.add(Content.LENGTH, diffs.getLast().getFileSize());
}
{code}



> Missing null check in INodeFile#getBlocks()
> ---
>
> Key: HDFS-7606
> URL: https://issues.apache.org/jira/browse/HDFS-7606
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
> BlockInfo[] snapshotBlocks = diff == null ? getBlocks() : 
> diff.getBlocks();
> if(snapshotBlocks != null)
>   return snapshotBlocks;
> // Blocks are not in the current snapshot
> // Find next snapshot with blocks present or return current file blocks
> snapshotBlocks = getDiffs().findLaterSnapshotBlocks(diff.getSnapshotId());
> {code}
> If diff is null and snapshotBlocks is null, NullPointerException would result 
> from the call to diff.getSnapshotId().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-5782) BlockListAsLongs should take lists of Replicas rather than concrete classes

2015-01-14 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-5782:
--
Hadoop Flags: Reviewed

+1 the new patch looks good.

> BlockListAsLongs should take lists of Replicas rather than concrete classes
> ---
>
> Key: HDFS-5782
> URL: https://issues.apache.org/jira/browse/HDFS-5782
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: 3.0.0
>Reporter: David Powell
>Assignee: Joe Pallas
>Priority: Minor
> Attachments: HDFS-5782.patch, HDFS-5782.patch
>
>
> From HDFS-5194:
> {quote}
> BlockListAsLongs's constructor takes a list of Blocks and a list of 
> ReplicaInfos.  On the surface, the former is mildly irritating because it is 
> a concrete class, while the latter is a greater concern due to being a 
> File-based implementation of Replica.
> On deeper inspection, BlockListAsLongs passes members of both to an internal 
> method that accepts just Blocks, which conditionally casts them *back* to 
> ReplicaInfos (this cast only happens to the latter, though this isn't 
> immediately obvious to the reader).
> Conveniently, all methods called on these objects are found in the Replica 
> interface, and all functional (i.e. non-test) consumers of this interface 
> pass in Replica subclasses.  If this constructor took Lists of Replicas 
> instead, it would be more generally useful and its implementation would be 
> cleaner as well.
> {quote}
> Fixing this indeed makes the business end of BlockListAsLongs cleaner while 
> requiring no changes to FsDatasetImpl.  As suggested by the above 
> description, though, the HDFS tests use BlockListAsLongs differently from the 
> production code -- they pretty much universally provide a list of actual 
> Blocks.  To handle this:
> - In the case of SimulatedFSDataset, providing a list of Replicas is actually 
> less work.
> - In the case of NNThroughputBenchmark, rewriting to use Replicas is fairly 
> invasive.  Instead, the patch creates a second constructor in 
> BlockListAsLongs specifically for the use of NNThroughputBenchmark.  It turns 
> the stomach a little, but is clearer and requires less code than the 
> alternatives (and isn't without precedent).  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7411) Refactor and improve decommissioning logic into DecommissionManager

2015-01-14 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277677#comment-14277677
 ] 

Andrew Wang commented on HDFS-7411:
---

Hi Colin, thanks for reviewing. I'll rework the patch after we settle on the 
details. A few replies:

bq. Shouldn't we be using that here, rather than creating our own list in 
decomNodeBlocks?

This is actually a feature, not a bug :) Having our own datastructure lets us 
speed up decom by only checking blocks that are still insufficiently 
replicated. We prune out the sufficient ones each iteration. The memory 
overhead here should be pretty small since it's just an 8B reference per block, 
so with 1 million blocks this will be 8MB for a single node, or maybe 160MB for 
a full rack. Nodes are typically smaller than this, so these are 
conservative estimates, and large decoms aren't that common.
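
The arithmetic behind those numbers, spelled out; the 20-node rack size is an assumption chosen to match the 160MB figure:
{code}
public class DecomMemorySketch {
  public static void main(String[] args) {
    long blocksPerNode = 1_000_000L;     // blocks tracked for one decommissioning node
    long bytesPerReference = 8L;         // one 8B reference per block
    long nodesPerRack = 20L;             // assumed rack size
    long perNodeBytes = blocksPerNode * bytesPerReference;
    long perRackBytes = perNodeBytes * nodesPerRack;
    System.out.printf("per node: %d MB, per rack: %d MB%n",
        perNodeBytes / 1_000_000, perRackBytes / 1_000_000);  // 8 MB, 160 MB
  }
}
{code}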

The one thing I could see as a nice improvement is that we could skip the final 
full scan at the end of decom if we immediately propagate block map changes to 
decomNodeBlocks, but that seems like more trouble than it's worth.

bq. have a configuration key like dfs.namenode.decommission.blocks.per.minute 
that expresses directly what we want.

On thinking about it I agree that just using a new config option is fine, but 
I'd prefer to define the DecomManager in terms of both an interval and an 
amount of work, rather than a rate. This is more powerful, and more in-line 
with the existing config. Are you okay with a new {{blocks.per.interval}} 
config?

bq. dfs.namenode.decommission.max.concurrent.tracked.nodes

I agree that it can lead to hangs. At a minimum, I'll add a "0 means no limit" 
config, and maybe we can set that by default. I think that NNs should really 
have enough heap headroom to handle the 10-100 MB of memory for this; it's 
peanuts compared to the 10s of GBs that are quite typical.

> Refactor and improve decommissioning logic into DecommissionManager
> ---
>
> Key: HDFS-7411
> URL: https://issues.apache.org/jira/browse/HDFS-7411
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.5.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hdfs-7411.001.patch, hdfs-7411.002.patch, 
> hdfs-7411.003.patch, hdfs-7411.004.patch, hdfs-7411.005.patch, 
> hdfs-7411.006.patch
>
>
> Would be nice to split out decommission logic from DatanodeManager to 
> DecommissionManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7610) Should use StorageDirectory.getCurrentDIr() to construct FsVolumeImpl

2015-01-14 Thread Lei (Eddy) Xu (JIRA)
Lei (Eddy) Xu created HDFS-7610:
---

 Summary: Should use StorageDirectory.getCurrentDIr() to construct 
FsVolumeImpl
 Key: HDFS-7610
 URL: https://issues.apache.org/jira/browse/HDFS-7610
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu


In the hot swap feature, {{FsDatasetImpl#addVolume}} uses the base volume dir 
(e.g. "{{/foo/data0}}", instead of volume's current dir "{{/foo/data/current}}" 
to construct {{FsVolumeImpl}}. As a result, DataNode can not remove this newly 
added volume, because its {{FsVolumeImpl#getBasePath}} returns "{{/foo}}".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7433) DatanodeManager#datanodeMap should be a HashMap, not a TreeMap, to optimize lookup performance

2015-01-14 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277639#comment-14277639
 ] 

Ming Ma commented on HDFS-7433:
---

Daryn, I just reread the patch. You are right, that is not an issue.

There is one minor case in which the call to check() could be wasted. Say 
"dfs.namenode.decommission.nodes.per.interval" is set to 1 and you decommission 
only one node. The first check() will call checkDecommissionState(), set the 
DN's scan number to the global scan number, and then exit. The second check() 
will skip the checkDecommissionState() call, given that the DN's scan number is 
the same as the global scan number. The third check() will do the actual check 
again, and so on.

> DatanodeManager#datanodeMap should be a HashMap, not a TreeMap, to optimize 
> lookup performance
> --
>
> Key: HDFS-7433
> URL: https://issues.apache.org/jira/browse/HDFS-7433
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-7433.patch, HDFS-7433.patch
>
>
> The datanode map is currently a {{TreeMap}}.  For many thousands of 
> datanodes, tree lookups are ~10X more expensive than a {{HashMap}}.  
> Insertions and removals are up to 100X more expensive.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7597) Clients seeking over webhdfs may crash the NN

2015-01-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277623#comment-14277623
 ] 

Colin Patrick McCabe commented on HDFS-7597:


The cache builder code is only used once at startup, though, to build the cache 
object.  Being readable and developer-friendly is clearly the right thing to do 
for code that only runs once at startup.  If there are examples of 
inefficiencies in the code that will actually be used at runtime, that would be 
more interesting.

bq. A CHM is neither useful nor performant unless you intend to cache many 
multiples of the number of accessing threads. Probably on the order of 
thousands which is overkill.

Can you go into more detail about when the performance of a 
{{ConcurrentHashMap}} would be worse than a regular one?  The last time I 
looked at it, CHM was just using lock striping.  So basically each "get" or 
"put" takes a single lock, does its business, and then releases.  This seems 
like the same level of overhead as a normal hash map.  I don't think using 
multiple locks will be slower than one.  By definition, interlocked 
instructions bypass CPU caches... that's what they're designed to do and must 
do.
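
As an aside, a self-contained illustration of the striping behavior being described: concurrent writers never take an external lock, and each update is atomic per key. This is generic JDK usage, not HDFS code:
{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ChmSketch {
  public static void main(String[] args) throws InterruptedException {
    ConcurrentMap<String, Integer> counts = new ConcurrentHashMap<>();
    Runnable writer = () -> {
      for (int i = 0; i < 100_000; i++) {
        counts.merge("hits", 1, Integer::sum);  // atomic per key, no caller-side lock
      }
    };
    Thread t1 = new Thread(writer);
    Thread t2 = new Thread(writer);
    t1.start(); t2.start();
    t1.join(); t2.join();
    System.out.println(counts.get("hits"));     // 200000
  }
}
{code}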

Like I said earlier, I am fine with this patch going in as-is (assuming the 
test failure is unrelated).  But I'd like to get more understanding of the 
performance issues here so we can optimize in the future.

> Clients seeking over webhdfs may crash the NN
> -
>
> Key: HDFS-7597
> URL: https://issues.apache.org/jira/browse/HDFS-7597
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-7597.patch
>
>
> Webhdfs seeks involve closing the current connection, and reissuing a new 
> open request with the new offset.  The RPC layer caches connections so the DN 
> keeps a lingering connection open to the NN.  Connection caching is in part 
> based on UGI.  Although the client used the same token for the new offset 
> request, the UGI is different which forces the DN to open another unnecessary 
> connection to the NN.
> A job that performs many seeks will easily crash the NN due to fd exhaustion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7067) ClassCastException while using a key created by keytool to create encryption zone.

2015-01-14 Thread Charles Lamb (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277619#comment-14277619
 ] 

Charles Lamb commented on HDFS-7067:


[~cmccabe],

bq. Charles, is the TestKeyProviderFactory failure due to this patch?

Correct. test-patch.sh doesn't apply the hdfs7067.keystore file to 
hadoop-common/hadoop-common/src/test/resources and so the new test (which 
depends on it) will fail. The test passes when I apply the patch and the 
.keystore file in a fresh clone.


> ClassCastException while using a key created by keytool to create encryption 
> zone. 
> ---
>
> Key: HDFS-7067
> URL: https://issues.apache.org/jira/browse/HDFS-7067
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.6.0
>Reporter: Yi Yao
>Assignee: Charles Lamb
>Priority: Minor
> Attachments: HDFS-7067.001.patch, HDFS-7067.002.patch, 
> hdfs7067.keystore
>
>
> I'm using transparent encryption. If I create a key for the KMS keystore via 
> keytool and use the key to create an encryption zone, I get a 
> ClassCastException rather than an exception with a decent error message. I know 
> we should use 'hadoop key create' to create a key. It's better to provide a 
> decent error message to remind the user to use the right way to create a KMS key.
> [LOG]
> ERROR[user=hdfs] Method:'GET' Exception:'java.lang.ClassCastException: 
> javax.crypto.spec.SecretKeySpec cannot be cast to 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider$KeyMetadata'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7411) Refactor and improve decommissioning logic into DecommissionManager

2015-01-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277593#comment-14277593
 ] 

Colin Patrick McCabe commented on HDFS-7411:


DecommissionManager:

Let's document the locking here.  I believe all of these functions are designed 
to be run with the FSNamesystem write lock held, correct?

A question about the data structures here.  We already have a way of iterating 
through all the blocks on a given {{DataNode}} via the implicit linked lists in 
the {{BlockInfo}} objects.  Shouldn't we be using that here, rather than 
creating our own list in {{decomNodeBlocks}}?  This could save a lot of memory, 
since datanodes might have anywhere between 200k and a million 
blocks in the next few years.  (I am fine with doing this in a follow-on, of 
course.)

{{DecommissionManager#startDecommission}}: let's have a debug log message here 
for the case where the node is already decommissioned... it might help with 
debugging.  Similarly in {{stopDecommission}}... if we're not doing anything, 
let's log why we're not doing anything at debug or trace level.

Config Keys:

We talked about this a bit earlier but didn't really come to any consensus.  I 
think we should just get rid of 
{{dfs.namenode.decommission.nodes.per.interval}} and 
{{dfs.namenode.decommission.blocks.per.node}}, and just have a configuration 
key like {{dfs.namenode.decommission.blocks.per.minute}} that expresses 
directly what we want.

If we set a reasonable default for 
{{dfs.namenode.decommission.blocks.per.minute}}, the impact on users will be 
minimal.  The old rate-limiting process was broken anyway... that's a big part 
of what this patch is designed to fix, as I understand.  So we shouldn't need 
to lug around these old config keys that don't really express what we're trying 
to configure. Let's just add a new configuration key that is easy to 
understand and maintain in the future.  Decom is a manual process anywhere 
admins get involved.

{{dfs.namenode.decommission.max.concurrent.tracked.nodes}}: I have mixed 
feelings about this configuration key.  It seems like the reason you want to 
add it is to limit NN memory consumption (but you can do that by not 
duplicating data structures-- see above).  However, it may have some value for 
users who would rather finish decomissioning 100 nodes in an hour, than have 
1000 nodes 10% decomissioned in that time.  So I guess it is a good addition, 
maybe?  The only problem is that if some of those first 100 nodes get stuck 
because there is an open-for-write file or something, then the decom process 
will start to slow down and perhaps eventually hang.

On a related note, I feel like we should have a follow-up change to make the 
decom parameters reconfigurable via the configuration reloading interface we 
added recently.  I will file a follow-on JIRA for that.

{code}
if (LOG.isDebugEnabled()) {
  StringBuilder b = new StringBuilder("Node {} ");
  if (isHealthy) {
b.append("is ");
  } else {
b.append("isn't ");
  }
  b.append("and still needs to replicate {} more blocks, " +
  "decommissioning is still in progress.");
{code}
Missing "healthy " in the printout.

Can we log this at info level like every half hour or something, so that people 
can see issues with nodes getting "stuck"?  As it is, it seems like they'll get 
no output unless they monkey with log4j manually.

{code}
   * Note also that the reference to the list of under-replicated blocks 
   * will be null on initial addx
{code}
Spelling?

{code}
  final Iterator>>
  it =
  new CyclicIteration>(
  decomNodeBlocks, iterkey).iterator();
{code}
Well, this is creative, but I think I'd rather have the standard indentation :)

> Refactor and improve decommissioning logic into DecommissionManager
> ---
>
> Key: HDFS-7411
> URL: https://issues.apache.org/jira/browse/HDFS-7411
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.5.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hdfs-7411.001.patch, hdfs-7411.002.patch, 
> hdfs-7411.003.patch, hdfs-7411.004.patch, hdfs-7411.005.patch, 
> hdfs-7411.006.patch
>
>
> Would be nice to split out decommission logic from DatanodeManager to 
> DecommissionManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3107) HDFS truncate

2015-01-14 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277547#comment-14277547
 ] 

Konstantin Boudnik commented on HDFS-3107:
--

I think it is a reasonable expectation to do the merge in a few days or a 
week. Most importantly, the merge might require certain changes resulting from 
conflict resolution, so it'd be some dev effort anyway.

> HDFS truncate
> -
>
> Key: HDFS-3107
> URL: https://issues.apache.org/jira/browse/HDFS-3107
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Reporter: Lei Chang
>Assignee: Plamen Jeliazkov
> Fix For: 3.0.0
>
> Attachments: HDFS-3107-13.patch, HDFS-3107-14.patch, 
> HDFS-3107-15.patch, HDFS-3107-HDFS-7056-combined.patch, HDFS-3107.008.patch, 
> HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
> HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
> HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS_truncate.pdf, 
> HDFS_truncate.pdf, HDFS_truncate.pdf, HDFS_truncate_semantics_Mar15.pdf, 
> HDFS_truncate_semantics_Mar21.pdf, editsStored, editsStored.xml
>
>   Original Estimate: 1,344h
>  Remaining Estimate: 1,344h
>
> Systems with transaction support often need to undo changes made to the 
> underlying storage when a transaction is aborted. Currently HDFS does not 
> support truncate (a standard Posix operation) which is a reverse operation of 
> append, which makes upper layer applications use ugly workarounds (such as 
> keeping track of the discarded byte range per file in a separate metadata 
> store, and periodically running a vacuum process to rewrite compacted files) 
> to overcome this limitation of HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3689) Add support for variable length block

2015-01-14 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277548#comment-14277548
 ] 

Jing Zhao commented on HDFS-3689:
-

Sure. Currently the patch includes the following changes:
# Add a new append2 API which always appends new data to a new block, and thus 
the previous last block becomes a block with variable length. Some unit tests 
are added to make sure the appended file can still be read/pread by the current 
DFSInputStream.
# Add support in DFSOutputStream to let clients specify when to allocate a new 
block. The patch simply adds a new SyncFlag named END_BLOCK. Clients can call 
hsync(END_BLOCK) to force the completion of the current block (see the usage 
sketch after this list).
# Loosen the restriction on concat. The current concat has the following 
restrictions:
- The src files and the target file must be in the same directory and cannot be 
empty
- The target file and all but the last src files cannot have partial block
- The src files and the target file must share the same replication factor and 
preferred block size
- The src files and the target file cannot be in any snapshot

The current patch makes the following changes, which I think need further 
discussion and confirmation:
- The src files and the target file do not need to be in the same directory
- The src files and the target file can have partial blocks
- The src/target files may have different preferred block size and replication 
factor, and after the concat the target file keeps its original setting
- The src files still cannot be included in any snapshot (see HDFS-4529 for 
details), but the target file can be in a snapshot
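
To make item 2 above concrete, a hedged client-side sketch; the END_BLOCK flag and the cast to HdfsDataOutputStream follow the proposal and may differ in the committed patch:
{code}
import java.util.EnumSet;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.client.HdfsDataOutputStream;
import org.apache.hadoop.hdfs.client.HdfsDataOutputStream.SyncFlag;

public class EndBlockSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    try (FSDataOutputStream out = fs.create(new Path("/tmp/varlen-demo"))) {
      out.write("data for the first block".getBytes("UTF-8"));
      // Force completion of the current (possibly partial) block; the next
      // write starts a new block, giving the file a variable-length block.
      ((HdfsDataOutputStream) out).hsync(EnumSet.of(SyncFlag.END_BLOCK));
      out.write("data that lands in a new block".getBytes("UTF-8"));
    }
  }
}
{code}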

> Add support for variable length block
> -
>
> Key: HDFS-3689
> URL: https://issues.apache.org/jira/browse/HDFS-3689
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs-client, namenode
>Affects Versions: 3.0.0
>Reporter: Suresh Srinivas
>Assignee: Jing Zhao
> Attachments: HDFS-3689.000.patch, HDFS-3689.001.patch, 
> HDFS-3689.002.patch, HDFS-3689.003.patch, HDFS-3689.003.patch, 
> HDFS-3689.004.patch
>
>
> Currently HDFS supports fixed length blocks. Supporting variable length block 
> will allow new use cases and features to be built on top of HDFS. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-2219) Fsck should work with fully qualified file paths.

2015-01-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277545#comment-14277545
 ] 

Hudson commented on HDFS-2219:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6861 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6861/])
HDFS-2219. Change fsck to support fully qualified paths so that a particular 
namenode in a federated cluster with multiple namenodes can be specified in the 
path parameter. (szetszwo: rev 7fe0f25ad21f006eb41b832a181eb2a812a6f7b7)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsckWithMultipleNameNodes.java


> Fsck should work with fully qualified file paths.
> -
>
> Key: HDFS-2219
> URL: https://issues.apache.org/jira/browse/HDFS-2219
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 0.23.0
>Reporter: Jitendra Nath Pandey
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: h2219_20150113.patch
>
>
> Fsck takes absolute paths, but doesn't work with fully qualified file path 
> URIs. In a federated cluster with multiple namenodes, it will be useful to be 
> able to specify a file path for any namenode using its fully qualified path. 
> Currently, a non-default file system can be specified using -fs option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-3818) Allow fsck to accept URIs as paths

2015-01-14 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-3818:
--
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

Resolving this as a duplicate of HDFS-2219.

> Allow fsck to accept URIs as paths
> --
>
> Key: HDFS-3818
> URL: https://issues.apache.org/jira/browse/HDFS-3818
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.0.0-alpha
>Reporter: Stephen Chu
> Attachments: HDFS-3818.patch, HDFS-3818.patch
>
>
> Currently, fsck does not accept URIs as paths. 
> {noformat}
> [hdfs@cs-10-20-192-187 ~]# hdfs fsck 
> hdfs://cs-10-20-192-187.cloud.cloudera.com:8020/user/
> Connecting to namenode via http://cs-10-20-192-187.cloud.cloudera.com:50070
> FSCK started by hdfs (auth:KERBEROS_SSL) from /10.20.192.187 for path 
> hdfs://cs-10-20-192-187.cloud.cloudera.com:8020/user/ at Thu Aug 16 15:48:42 
> PDT 2012
> FSCK ended at Thu Aug 16 15:48:42 PDT 2012 in 1 milliseconds
> Invalid path name Invalid file name: 
> hdfs://cs-10-20-192-187.cloud.cloudera.com:8020/user/
> Fsck on path 'hdfs://cs-10-20-192-187.cloud.cloudera.com:8020/user/' FAILED
> {noformat}
> It'd be useful for fsck to accept URIs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-2219) Fsck should work with fully qualified file paths.

2015-01-14 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-2219:
--
   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks Jing for reviewing the patch.

I have committed this.

> Fsck should work with fully qualified file paths.
> -
>
> Key: HDFS-2219
> URL: https://issues.apache.org/jira/browse/HDFS-2219
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 0.23.0
>Reporter: Jitendra Nath Pandey
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: h2219_20150113.patch
>
>
> Fsck takes absolute paths, but doesn't work with fully qualified file path 
> URIs. In a federated cluster with multiple namenodes, it will be useful to be 
> able to specify a file path for any namenode using its fully qualified path. 
> Currently, a non-default file system can be specified using -fs option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3107) HDFS truncate

2015-01-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277534#comment-14277534
 ] 

Colin Patrick McCabe commented on HDFS-3107:


I can't speak for anyone else on this thread, but I personally think we should 
give it a week or two and then merge to branch-2.  I hope anyone with 
unresolved concerns will post follow-ups by then.

In the future we should probably do this kind of work in a branch.  It would 
have sped up the work, because we could have committed things quickly to the 
branch.  git has made branches easier than ever.  The requirement to get three 
+1s in a branch might seem onerous, but a feature of this size needs several 
eyes on it anyway, so the review requirement ended up being about the same (or 
even more).

> HDFS truncate
> -
>
> Key: HDFS-3107
> URL: https://issues.apache.org/jira/browse/HDFS-3107
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Reporter: Lei Chang
>Assignee: Plamen Jeliazkov
> Fix For: 3.0.0
>
> Attachments: HDFS-3107-13.patch, HDFS-3107-14.patch, 
> HDFS-3107-15.patch, HDFS-3107-HDFS-7056-combined.patch, HDFS-3107.008.patch, 
> HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
> HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
> HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS_truncate.pdf, 
> HDFS_truncate.pdf, HDFS_truncate.pdf, HDFS_truncate_semantics_Mar15.pdf, 
> HDFS_truncate_semantics_Mar21.pdf, editsStored, editsStored.xml
>
>   Original Estimate: 1,344h
>  Remaining Estimate: 1,344h
>
> Systems with transaction support often need to undo changes made to the 
> underlying storage when a transaction is aborted. Currently HDFS does not 
> support truncate (a standard Posix operation) which is a reverse operation of 
> append, which makes upper layer applications use ugly workarounds (such as 
> keeping track of the discarded byte range per file in a separate metadata 
> store, and periodically running a vacuum process to rewrite compacted files) 
> to overcome this limitation of HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7585) Get TestEnhancedByteBufferAccess working on CPU architectures with page sizes other than 4096

2015-01-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277506#comment-14277506
 ] 

Hudson commented on HDFS-7585:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6860 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6860/])
HDFS-7585. Get TestEnhancedByteBufferAccess working on CPU architectures with 
page sizes other than 4096 (Sam Liu via Colin P. McCabe) (cmccabe: rev 
446545c496fdab75e76c8124c98324e37150b5dc)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestEnhancedByteBufferAccess.java


> Get TestEnhancedByteBufferAccess working on CPU architectures with page sizes 
> other than 4096
> -
>
> Key: HDFS-7585
> URL: https://issues.apache.org/jira/browse/HDFS-7585
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.6.0
>Reporter: sam liu
>Assignee: sam liu
> Fix For: 2.7.0
>
> Attachments: HDFS-7585.001.patch, HDFS-7585.002.patch
>
>
> The test TestEnhancedByteBufferAccess hard-codes the block size, and it fails 
> with exceptions on Power Linux.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7609) startup used too much time to load edits

2015-01-14 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277507#comment-14277507
 ] 

Daryn Sharp commented on HDFS-7609:
---

We've also noticed a huge performance degradation (10X+) in 2.x edit 
processing.  The retry cache is a large part of it.

The retry cache isn't "useless" during startup: the concept is that a returning 
client can later receive the response to an operation that completed but whose 
answer the client never received (a transient network issue, restart, failover, 
etc.).  While processing edits, whether at startup or on the standby, the NN 
needs to maintain the cache.  The retry cache is useful, but it needs to be 
optimized.
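
A generic sketch of the retry-cache idea just described: key responses by (clientId, callId) so a retried call gets the stored answer instead of re-executing. This is an illustration, not the NameNode's actual RetryCache implementation:
{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

public class RetryCacheSketch {
  private final Map<String, Object> responses = new ConcurrentHashMap<>();

  // Execute the operation once per (clientId, callId); retries get the cached response.
  Object invoke(String clientId, int callId, Supplier<Object> operation) {
    return responses.computeIfAbsent(clientId + ":" + callId, key -> operation.get());
  }

  public static void main(String[] args) {
    RetryCacheSketch cache = new RetryCacheSketch();
    System.out.println(cache.invoke("client-1", 7, () -> "created /foo"));
    System.out.println(cache.invoke("client-1", 7, () -> "re-executed!"));  // still "created /foo"
  }
}
{code}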

> startup used too much time to load edits
> 
>
> Key: HDFS-7609
> URL: https://issues.apache.org/jira/browse/HDFS-7609
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.2.0
>Reporter: Carrey Zhan
>
> One day my namenode crashed because two journal nodes timed out at the same 
> time under very high load, leaving behind about 100 million transactions in 
> the edits log. (I still have no idea why they were not rolled into the fsimage.)
> I tried to restart the namenode, but it showed that almost 20 hours would be 
> needed to finish, and it was loading fsedits most of the time. I also 
> tried to restart the namenode in recovery mode; the loading speed was no different.
> I looked into the stack trace and judged that it was caused by the retry cache. 
> So I set dfs.namenode.enable.retrycache to false, and the restart process 
> finished in half an hour.
> I think the retry cache is useless during startup, at least during the recovery 
> process.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-2219) Fsck should work with fully qualified file paths.

2015-01-14 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277495#comment-14277495
 ] 

Jing Zhao commented on HDFS-2219:
-

Thanks for working on this, Nicholas! The patch looks good to me. +1

> Fsck should work with fully qualified file paths.
> -
>
> Key: HDFS-2219
> URL: https://issues.apache.org/jira/browse/HDFS-2219
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 0.23.0
>Reporter: Jitendra Nath Pandey
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: h2219_20150113.patch
>
>
> Fsck takes absolute paths, but doesn't work with fully qualified file path 
> URIs. In a federated cluster with multiple namenodes, it will be useful to be 
> able to specify a file path for any namenode using its fully qualified path. 
> Currently, a non-default file system can be specified using the -fs option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7189) Add trace spans for DFSClient metadata operations

2015-01-14 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-7189:
---
Attachment: HDFS-7189.005.patch

> Add trace spans for DFSClient metadata operations
> -
>
> Key: HDFS-7189
> URL: https://issues.apache.org/jira/browse/HDFS-7189
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 2.7.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-7189.001.patch, HDFS-7189.003.patch, 
> HDFS-7189.004.patch, HDFS-7189.005.patch
>
>
> We should add trace spans for DFSClient metadata operations.  For example, 
> {{DFSClient#rename}} should have a trace span, and so on.
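A rough sketch of the pattern being added follows; the span type and the RPC 
stub are stand-ins defined in the sketch itself, not the tracing API Hadoop 
actually uses.

{code}
// Each client metadata operation opens a named span and closes it when the
// call returns. TraceSpan is a local stand-in, not the real tracing API.
public class TraceSpanSketch {
  static class TraceSpan implements AutoCloseable {
    private final String name;
    TraceSpan(String name) {
      this.name = name;
      System.out.println("span start: " + name);
    }
    @Override
    public void close() {
      System.out.println("span end: " + name);
    }
  }

  // Hypothetical stand-in for a DFSClient metadata RPC such as rename.
  static boolean renameRpc(String src, String dst) {
    return true;
  }

  public static void main(String[] args) {
    // try-with-resources guarantees the span is closed even if the RPC throws.
    try (TraceSpan scope = new TraceSpan("rename")) {
      renameRpc("/src", "/dst");
    }
  }
}
{code}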



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3689) Add support for variable length block

2015-01-14 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277490#comment-14277490
 ] 

Daryn Sharp commented on HDFS-3689:
---

To simplify review, would you please provide a bullet-point summary of exactly 
which features the latest patch includes, as well as any incompatibilities?

> Add support for variable length block
> -
>
> Key: HDFS-3689
> URL: https://issues.apache.org/jira/browse/HDFS-3689
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs-client, namenode
>Affects Versions: 3.0.0
>Reporter: Suresh Srinivas
>Assignee: Jing Zhao
> Attachments: HDFS-3689.000.patch, HDFS-3689.001.patch, 
> HDFS-3689.002.patch, HDFS-3689.003.patch, HDFS-3689.003.patch, 
> HDFS-3689.004.patch
>
>
> Currently HDFS supports fixed-length blocks. Supporting variable-length blocks 
> will allow new use cases and features to be built on top of HDFS. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7585) Get TestEnhancedByteBufferAccess working on CPU architectures with page sizes other than 4096

2015-01-14 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-7585:
---
      Resolution: Fixed
   Fix Version/s: 2.7.0
Target Version/s: 2.7.0
          Status: Resolved  (was: Patch Available)

> Get TestEnhancedByteBufferAccess working on CPU architectures with page sizes 
> other than 4096
> -
>
> Key: HDFS-7585
> URL: https://issues.apache.org/jira/browse/HDFS-7585
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.6.0
>Reporter: sam liu
>Assignee: sam liu
> Fix For: 2.7.0
>
> Attachments: HDFS-7585.001.patch, HDFS-7585.002.patch
>
>
> The test TestEnhancedByteBufferAccess hard-codes the block size, and it fails 
> with exceptions on Power Linux.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7585) Get TestEnhancedByteBufferAccess working on CPU architectures with page sizes other than 4096

2015-01-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277484#comment-14277484
 ] 

Colin Patrick McCabe commented on HDFS-7585:


+1.  Thanks, Sam.

> Get TestEnhancedByteBufferAccess working on CPU architectures with page sizes 
> other than 4096
> -
>
> Key: HDFS-7585
> URL: https://issues.apache.org/jira/browse/HDFS-7585
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.6.0
>Reporter: sam liu
>Assignee: sam liu
> Attachments: HDFS-7585.001.patch, HDFS-7585.002.patch
>
>
> The test TestEnhancedByteBufferAccess hard-codes the block size, and it fails 
> with exceptions on Power Linux.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7585) Get TestEnhancedByteBufferAccess working on CPU architectures with page sizes other than 4096

2015-01-14 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-7585:
---
Summary: Get TestEnhancedByteBufferAccess working on CPU architectures with 
page sizes other than 4096  (was: TestEnhancedByteBufferAccess hard code the 
block size)

> Get TestEnhancedByteBufferAccess working on CPU architectures with page sizes 
> other than 4096
> -
>
> Key: HDFS-7585
> URL: https://issues.apache.org/jira/browse/HDFS-7585
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.6.0
>Reporter: sam liu
>Assignee: sam liu
> Attachments: HDFS-7585.001.patch, HDFS-7585.002.patch
>
>
> The test TestEnhancedByteBufferAccess hard-codes the block size, and it fails 
> with exceptions on Power Linux.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7067) ClassCastException while using a key created by keytool to create encryption zone.

2015-01-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277442#comment-14277442
 ] 

Colin Patrick McCabe commented on HDFS-7067:


Charles, is the TestKeyProviderFactory failure due to this patch?

> ClassCastException while using a key created by keytool to create encryption 
> zone. 
> ---
>
> Key: HDFS-7067
> URL: https://issues.apache.org/jira/browse/HDFS-7067
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.6.0
>Reporter: Yi Yao
>Assignee: Charles Lamb
>Priority: Minor
> Attachments: HDFS-7067.001.patch, HDFS-7067.002.patch, 
> hdfs7067.keystore
>
>
> I'm using transparent encryption. If I create a key for the KMS keystore via 
> keytool and use that key to create an encryption zone, I get a 
> ClassCastException rather than an exception with a decent error message. I know 
> we should use 'hadoop key create' to create a key, but it would be better to 
> provide a decent error message reminding the user of the right way to create a 
> KMS key.
> [LOG]
> ERROR[user=hdfs] Method:'GET' Exception:'java.lang.ClassCastException: 
> javax.crypto.spec.SecretKeySpec cannot be cast to 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider$KeyMetadata'
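What the reporter asks for amounts to a type check with a readable message 
before the cast. A minimal self-contained sketch, using a stand-in for the 
private KeyMetadata class, might look like this:

{code}
import java.io.IOException;
import java.security.Key;
import javax.crypto.spec.SecretKeySpec;

public class KeyTypeCheckSketch {
  // Stand-in for JavaKeyStoreProvider.KeyMetadata, which is a private inner
  // class in Hadoop; defined here only so the sketch compiles on its own.
  static class KeyMetadata implements Key {
    public String getAlgorithm() { return "AES"; }
    public String getFormat() { return null; }
    public byte[] getEncoded() { return new byte[0]; }
  }

  // The defensive check the report asks for: fail with a readable message
  // instead of a raw ClassCastException.
  static KeyMetadata checkKeyType(String alias, Key key) throws IOException {
    if (!(key instanceof KeyMetadata)) {
      throw new IOException("Key " + alias + " is a " + key.getClass().getName()
          + "; it was probably created with keytool rather than 'hadoop key create'");
    }
    return (KeyMetadata) key;
  }

  public static void main(String[] args) {
    try {
      // A keytool-created secret key typically comes back as a SecretKeySpec.
      checkKeyType("mykey", new SecretKeySpec(new byte[16], "AES"));
    } catch (IOException e) {
      System.out.println(e.getMessage());
    }
  }
}
{code}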



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-2219) Fsck should work with fully qualified file paths.

2015-01-14 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277440#comment-14277440
 ] 

Tsz Wo Nicholas Sze commented on HDFS-2219:
---

The timeout of TestDatanodeManager is unrelated since the patch does not change 
anything related to it.

> Fsck should work with fully qualified file paths.
> -
>
> Key: HDFS-2219
> URL: https://issues.apache.org/jira/browse/HDFS-2219
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 0.23.0
>Reporter: Jitendra Nath Pandey
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: h2219_20150113.patch
>
>
> Fsck takes absolute paths, but doesn't work with fully qualified file path 
> URIs. In a federated cluster with multiple namenodes, it will be useful to be 
> able to specify a file path for any namenode using its fully qualified path. 
> Currently, a non-default file system can be specified using the -fs option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-3689) Add support for variable length block

2015-01-14 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-3689:

Attachment: HDFS-3689.004.patch

> Add support for variable length block
> -
>
> Key: HDFS-3689
> URL: https://issues.apache.org/jira/browse/HDFS-3689
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs-client, namenode
>Affects Versions: 3.0.0
>Reporter: Suresh Srinivas
>Assignee: Jing Zhao
> Attachments: HDFS-3689.000.patch, HDFS-3689.001.patch, 
> HDFS-3689.002.patch, HDFS-3689.003.patch, HDFS-3689.003.patch, 
> HDFS-3689.004.patch
>
>
> Currently HDFS supports fixed-length blocks. Supporting variable-length blocks 
> will allow new use cases and features to be built on top of HDFS. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7457) DatanodeID generates excessive garbage

2015-01-14 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277432#comment-14277432
 ] 

Daryn Sharp commented on HDFS-7457:
---

It's not the int that is subject to GC but the dynamic construction of the 
xferAddr string, per the description.  The patch caches the string and 
recomputes the hash if/when the address changes.
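A minimal sketch of that caching approach, with names loosely mirroring 
DatanodeID but not taken from the actual patch:

{code}
// Build the transfer address string once and recompute it (and its hash) only
// when the underlying fields change, instead of on every call.
public class CachedXferAddr {
  private String ipAddr;
  private int xferPort;
  private String xferAddr;   // cached "ip:port"
  private int cachedHash;    // cached hash of the transfer address

  public CachedXferAddr(String ipAddr, int xferPort) {
    update(ipAddr, xferPort);
  }

  // Called whenever the address fields change, e.g. on re-registration.
  public void update(String ipAddr, int xferPort) {
    this.ipAddr = ipAddr;
    this.xferPort = xferPort;
    this.xferAddr = ipAddr + ":" + xferPort;
    this.cachedHash = xferAddr.hashCode();
  }

  public String getXferAddr() {
    return xferAddr;           // no per-call string construction
  }

  @Override
  public int hashCode() {
    return cachedHash;
  }

  @Override
  public boolean equals(Object o) {
    return this == o
        || (o instanceof CachedXferAddr
            && xferAddr.equals(((CachedXferAddr) o).xferAddr));
  }
}
{code}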

> DatanodeID generates excessive garbage
> --
>
> Key: HDFS-7457
> URL: https://issues.apache.org/jira/browse/HDFS-7457
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-7457.patch
>
>
> {{DatanodeID#getXferAddr}} returns a dynamically generated string.  This string 
> is repeatedly generated for the hash code, equality, comparisons, and 
> stringification.  Every DN->NN RPC method calls 
> {{DatanodeManager#getDatanode}} to validate that the node is registered, which 
> involves a call to {{getXferAddr}}.
> The dynamic computation generates garbage that puts unnecessary pressure on 
> the GC.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7433) DatanodeManager#datanodeMap should be a HashMap, not a TreeMap, to optimize lookup performance

2015-01-14 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277426#comment-14277426
 ] 

Daryn Sharp commented on HDFS-7433:
---

[~mingma], I'm not sure I understand your scenario.  When a node starts 
decommissioning, the monitor notices that its scan number is _less_ than the 
current scan number, scans the node, and sets its scan number to the current 
one.  When the next cycle starts, it skips that node because its scan number is 
the same as the current one.  Only when the monitor hits the end of the list 
does it bump the current scan number, which triggers a rescan of the nodes 
because they now have a lower scan number.

How would the continuous re-scan case you describe occur?
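For reference, a small sketch of the scan-number bookkeeping described above; 
the names and the per-cycle budget are illustrative, not the actual monitor 
code.

{code}
import java.util.Arrays;
import java.util.List;

public class ScanNumberMonitorSketch {
  static class Node {
    final String name;
    int scanNumber = -1;       // "never scanned": lower than any cycle's number
    Node(String name) { this.name = name; }
  }

  private int currentScanNumber = 0;

  void runCycle(List<Node> nodes, int maxScansPerCycle) {
    int scanned = 0;
    for (Node n : nodes) {
      if (n.scanNumber >= currentScanNumber) {
        continue;              // already scanned at this scan number -- skip
      }
      if (scanned == maxScansPerCycle) {
        return;                // budget used up; resume in a later cycle
      }
      System.out.println("scanning " + n.name);
      n.scanNumber = currentScanNumber;
      scanned++;
    }
    // The monitor reached the end of the list: bump the number so every node
    // is again "lower than current" and gets rescanned in later cycles.
    currentScanNumber++;
  }

  public static void main(String[] args) {
    ScanNumberMonitorSketch monitor = new ScanNumberMonitorSketch();
    List<Node> nodes = Arrays.asList(new Node("dn1"), new Node("dn2"));
    monitor.runCycle(nodes, 1);   // scans dn1; budget exhausted before dn2
    monitor.runCycle(nodes, 1);   // skips dn1, scans dn2, hits the end, bumps
    monitor.runCycle(nodes, 1);   // new scan number: dn1 is scanned again
  }
}
{code}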

> DatanodeManager#datanodeMap should be a HashMap, not a TreeMap, to optimize 
> lookup performance
> --
>
> Key: HDFS-7433
> URL: https://issues.apache.org/jira/browse/HDFS-7433
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-7433.patch, HDFS-7433.patch
>
>
> The datanode map is currently a {{TreeMap}}.  For many thousands of 
> datanodes, tree lookups are ~10X more expensive than a {{HashMap}}.  
> Insertions and removals are up to 100X more expensive.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

