[jira] [Commented] (HDFS-4167) Add support for restoring/rolling back to a snapshot

2015-08-21 Thread Ajith S (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707815#comment-14707815
 ] 

Ajith S commented on HDFS-4167:
---

Hi [~jingzhao]

Can I please work on this patch, in case you are not working on it?

 Add support for restoring/rolling back to a snapshot
 

 Key: HDFS-4167
 URL: https://issues.apache.org/jira/browse/HDFS-4167
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Suresh Srinivas
Assignee: Jing Zhao
 Attachments: HDFS-4167.000.patch, HDFS-4167.001.patch, 
 HDFS-4167.002.patch, HDFS-4167.003.patch, HDFS-4167.004.patch


 This jira tracks work related to restoring a directory/file to a snapshot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8940) Support for large-scale multi-tenant inotify service

2015-08-21 Thread Ming Ma (JIRA)
Ming Ma created HDFS-8940:
-

 Summary: Support for large-scale multi-tenant inotify service
 Key: HDFS-8940
 URL: https://issues.apache.org/jira/browse/HDFS-8940
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma


HDFS-6634 provides the core inotify functionality. We would like to extend that 
to provide a large-scale service that tens of thousands of clients can subscribe 
to.
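
For context, a minimal client subscription loop against the existing inotify 
API from HDFS-6634 might look like the sketch below (the namenode URI is a 
placeholder):
{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSInotifyEventInputStream;
import org.apache.hadoop.hdfs.client.HdfsAdmin;
import org.apache.hadoop.hdfs.inotify.Event;
import org.apache.hadoop.hdfs.inotify.EventBatch;

public class InotifyTail {
  public static void main(String[] args) throws Exception {
    HdfsAdmin admin =
        new HdfsAdmin(URI.create("hdfs://namenode:8020"), new Configuration());
    DFSInotifyEventInputStream stream = admin.getInotifyEventStream();
    while (true) {
      EventBatch batch = stream.take(); // blocks until edits are available
      for (Event event : batch.getEvents()) {
        System.out.println(batch.getTxid() + " " + event.getEventType());
      }
    }
  }
}
{code}
Today every such client pulls events from the namenode directly, which is what 
makes scaling to tens of thousands of subscribers hard.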



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8940) Support for large-scale multi-tenant inotify service

2015-08-21 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-8940:
--
Attachment: Large-Scale-Multi-Tenant-Inotify-Service.pdf

Here is the draft document that outlines the issues we are trying to solve, the 
assumptions, and the design. We would appreciate any input others might have, 
especially on the design choices.

 Support for large-scale multi-tenant inotify service
 

 Key: HDFS-8940
 URL: https://issues.apache.org/jira/browse/HDFS-8940
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma
 Attachments: Large-Scale-Multi-Tenant-Inotify-Service.pdf


 HDFS-6634 provides the core inotify functionality. We would like to extend 
 that to provide a large-scale service that tens of thousands of clients can 
 subscribe to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4167) Add support for restoring/rolling back to a snapshot

2015-08-21 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707824#comment-14707824
 ] 

Jing Zhao commented on HDFS-4167:
-

Sure, please feel free to assign the jira to yourself.

 Add support for restoring/rolling back to a snapshot
 

 Key: HDFS-4167
 URL: https://issues.apache.org/jira/browse/HDFS-4167
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Suresh Srinivas
Assignee: Jing Zhao
 Attachments: HDFS-4167.000.patch, HDFS-4167.001.patch, 
 HDFS-4167.002.patch, HDFS-4167.003.patch, HDFS-4167.004.patch


 This jira tracks work related to restoring a directory/file to a snapshot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8924) Add pluggable interface for reading replicas in DFSClient

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707809#comment-14707809
 ] 

Hudson commented on HDFS-8924:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #1026 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1026/])
HDFS-8924. Add pluggable interface for reading replicas in DFSClient. (Colin 
Patrick McCabe via Lei Xu) (lei: rev 7087e700e032dabc174ecc12b62c12e7d49b995f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/ExternalBlockReader.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ReplicaAccessorBuilder.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/DfsClientConf.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestExternalBlockReader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/datatransfer.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Op.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ReplicaAccessor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Receiver.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java


 Add pluggable interface for reading replicas in DFSClient
 -

 Key: HDFS-8924
 URL: https://issues.apache.org/jira/browse/HDFS-8924
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 2.8.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 3.0.0, 2.8.0

 Attachments: HDFS-8924.001.patch, HDFS-8924.002.patch


 We should add a pluggable interface for reading replicas in the DFSClient.  
 This could be used to implement short-circuit reads on systems without file 
 descriptors, or for other optimizations.
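
 To illustrate the kind of plugin this enables, here is a hypothetical, 
 simplified replica reader. The interface below is an invented stand-in for 
 the committed {{ReplicaAccessor}}/{{ReplicaAccessorBuilder}} API, whose exact 
 signatures differ:
{code}
import java.io.Closeable;
import java.io.IOException;

// Invented stand-in interface, for illustration only.
interface ReplicaReader extends Closeable {
  int read(long pos, byte[] buf, int off, int len) throws IOException;
}

// A toy plugin serving replica bytes from memory: the kind of reader a
// descriptor-less storage system could plug in for short-circuit reads.
class InMemoryReplicaReader implements ReplicaReader {
  private final byte[] data;

  InMemoryReplicaReader(byte[] data) {
    this.data = data;
  }

  @Override
  public int read(long pos, byte[] buf, int off, int len) {
    if (pos >= data.length) {
      return -1; // past end of replica
    }
    int n = Math.min(len, data.length - (int) pos);
    System.arraycopy(data, (int) pos, buf, off, n);
    return n;
  }

  @Override
  public void close() {
    // nothing to release for an in-memory replica
  }
}
{code}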



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8924) Add pluggable interface for reading replicas in DFSClient

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707810#comment-14707810
 ] 

Hudson commented on HDFS-8924:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #285 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/285/])
HDFS-8924. Add pluggable interface for reading replicas in DFSClient. (Colin 
Patrick McCabe via Lei Xu) (lei: rev 7087e700e032dabc174ecc12b62c12e7d49b995f)
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/datatransfer.proto
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/DfsClientConf.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Receiver.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ReplicaAccessor.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestExternalBlockReader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Op.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ReplicaAccessorBuilder.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/ExternalBlockReader.java


 Add pluggable interface for reading replicas in DFSClient
 -

 Key: HDFS-8924
 URL: https://issues.apache.org/jira/browse/HDFS-8924
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 2.8.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 3.0.0, 2.8.0

 Attachments: HDFS-8924.001.patch, HDFS-8924.002.patch


 We should add a pluggable interface for reading replicas in the DFSClient.  
 This could be used to implement short-circuit reads on systems without file 
 descriptors, or for other optimizations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6939) Support path-based filtering of inotify events

2015-08-21 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707828#comment-14707828
 ] 

Ming Ma commented on HDFS-6939:
---

Thanks [~surendrasingh] for working on this.

I had some discussions with [~cmccabe], [~eddyxu], and [~zhz] a couple of weeks 
ago about inotify functionality and how to make it useful for large-scale 
multi-tenant scenarios. I just uploaded the draft design document to HDFS-8940; 
we would appreciate any input you have. For this specific work item, we might 
want to investigate it together with the other issues and understand how it can 
eventually enable more applications.

 Support path-based filtering of inotify events
 --

 Key: HDFS-6939
 URL: https://issues.apache.org/jira/browse/HDFS-6939
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client, namenode, qjm
Reporter: James Thomas
Assignee: Surendra Singh Lilhore
 Attachments: HDFS-6939-001.patch


 Users should be able to specify that they only want events involving 
 particular paths.
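
 Until the namenode can filter, a client has to drop irrelevant events after 
 receiving them. A minimal client-side sketch of the same idea (two event types 
 shown; the others would follow the same pattern):
{code}
import org.apache.hadoop.hdfs.inotify.Event;

// Client-side path filtering over inotify events; server-side filtering, as
// proposed here, would avoid shipping the discarded events at all.
class PathFilterExample {
  static boolean matches(Event e, String prefix) {
    if (e instanceof Event.CreateEvent) {
      return ((Event.CreateEvent) e).getPath().startsWith(prefix);
    }
    if (e instanceof Event.RenameEvent) {
      Event.RenameEvent re = (Event.RenameEvent) e;
      return re.getSrcPath().startsWith(prefix)
          || re.getDstPath().startsWith(prefix);
    }
    return true; // conservatively keep event types not inspected here
  }
}
{code}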



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-4167) Add support for restoring/rolling back to a snapshot

2015-08-21 Thread Ajith S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajith S reassigned HDFS-4167:
-

Assignee: Ajith S  (was: Jing Zhao)

 Add support for restoring/rolling back to a snapshot
 

 Key: HDFS-4167
 URL: https://issues.apache.org/jira/browse/HDFS-4167
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Suresh Srinivas
Assignee: Ajith S
 Attachments: HDFS-4167.000.patch, HDFS-4167.001.patch, 
 HDFS-4167.002.patch, HDFS-4167.003.patch, HDFS-4167.004.patch


 This jira tracks work related to restoring a directory/file to a snapshot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8924) Add pluggable interface for reading replicas in DFSClient

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707844#comment-14707844
 ] 

Hudson commented on HDFS-8924:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2223 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2223/])
HDFS-8924. Add pluggable interface for reading replicas in DFSClient. (Colin 
Patrick McCabe via Lei Xu) (lei: rev 7087e700e032dabc174ecc12b62c12e7d49b995f)
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/datatransfer.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Op.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ReplicaAccessor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ReplicaAccessorBuilder.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestExternalBlockReader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Receiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/ExternalBlockReader.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/DfsClientConf.java


 Add pluggable interface for reading replicas in DFSClient
 -

 Key: HDFS-8924
 URL: https://issues.apache.org/jira/browse/HDFS-8924
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 2.8.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 3.0.0, 2.8.0

 Attachments: HDFS-8924.001.patch, HDFS-8924.002.patch


 We should add a pluggable interface for reading replicas in the DFSClient.  
 This could be used to implement short-circuit reads on systems without file 
 descriptors, or for other optimizations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8938) Refactor BlockManager in blockmanagement

2015-08-21 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-8938:
-
Status: Patch Available  (was: Open)

 Refactor BlockManager in blockmanagement
 

 Key: HDFS-8938
 URL: https://issues.apache.org/jira/browse/HDFS-8938
 Project: Hadoop HDFS
  Issue Type: Task
  Components: build
Reporter: Mingliang Liu
Assignee: Mingliang Liu
 Attachments: HDFS-8938.000.patch


 This jira tracks the effort of refactoring the {{BlockManager}} in the 
 {{hdfs.server.blockmanagement}} package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8935) Erasure Coding: createErasureCodingZone api should accept the policyname as argument instead of ErasureCodingPolicy

2015-08-21 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14706547#comment-14706547
 ] 

Vinayakumar B commented on HDFS-8935:
-

I think this should be implemented only after deciding how users can use custom 
policies.
If a user wants to use a custom policy which is not configured, they can create 
one and pass it as an argument to the current API.

If we need to retain this, then one more overloaded API can be added which 
accepts just the name of an available policy, as sketched below.
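
A rough sketch of such an overload (illustrative only; {{lookupPolicyByName}} is 
an assumed helper, not an existing method):
{code}
// Hypothetical overload accepting just the policy name; not a committed API.
public void createErasureCodingZone(Path path, String ecPolicyName)
    throws IOException {
  ErasureCodingPolicy ecPolicy = lookupPolicyByName(ecPolicyName); // assumed helper
  if (ecPolicy == null) {
    throw new HadoopIllegalArgumentException("Policy '" + ecPolicyName
        + "' does not match any of the supported policies.");
  }
  createErasureCodingZone(path, ecPolicy); // delegate to the existing API
}
{code}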

 Erasure Coding: createErasureCodingZone api should accept the policyname as 
 argument instead of ErasureCodingPolicy
 ---

 Key: HDFS-8935
 URL: https://issues.apache.org/jira/browse/HDFS-8935
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: J.Andreina
Assignee: J.Andreina

 Current behavior: the user has to specify an ErasureCodingPolicy as an argument 
 to the createErasureCodingZone api.
 This can be made consistent with creating an EC zone through the CLI, where the 
 user needs to specify only the policy name.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8937) Fix the exception when set replication to Erasure Coding files

2015-08-21 Thread GAO Rui (JIRA)
GAO Rui created HDFS-8937:
-

 Summary: Fix the exception when set replication to Erasure Coding 
files
 Key: HDFS-8937
 URL: https://issues.apache.org/jira/browse/HDFS-8937
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: GAO Rui
Assignee: GAO Rui


Setting replication on an EC file causes an exception. We should simply ignore 
the request, just as we currently do for a setReplication request against a 
directory.
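
A minimal sketch of the intended behavior (the field and helper names here are 
assumptions, not the actual patch):
{code}
// Inside the namenode's setReplication handling: silently succeed for striped
// (EC) files, mirroring how setReplication on a directory is ignored today.
if (inode.isFile() && inode.asFile().isStriped()) {
  return true; // nothing to change for an EC file
}
{code}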




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8373) Ec files can't be deleted into Trash because of that Trash isn't EC zone.

2015-08-21 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14706551#comment-14706551
 ] 

Brahma Reddy Battula commented on HDFS-8373:


I saw that HDFS-8833 is discussing whether to eliminate EC zones. If so, and the 
ErasureCodingPolicy exists at the file level, then EC files can be moved to 
Trash just like normal files.

 Ec files can't be deleted into Trash because of that Trash isn't EC zone.
 -

 Key: HDFS-8373
 URL: https://issues.apache.org/jira/browse/HDFS-8373
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: GAO Rui
Assignee: Brahma Reddy Battula
  Labels: EC

 When EC files are deleted, they should be moved into the {{Trash}} directory. 
 But EC files can only be placed under an EC zone, so EC files which have been 
 deleted cannot be moved to the {{Trash}} directory.
 The problem could be solved by creating an EC zone (folder) inside {{Trash}} to 
 contain deleted EC files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8936) Simplify Erasure Coding Zone DiskSpace quota exceeded exception error message

2015-08-21 Thread GAO Rui (JIRA)
GAO Rui created HDFS-8936:
-

 Summary: Simplify Erasure Coding Zone DiskSpace quota exceeded 
exception error message
 Key: HDFS-8936
 URL: https://issues.apache.org/jira/browse/HDFS-8936
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: GAO Rui
Assignee: GAO Rui


When an EC directory exceeds its DiskSpace quota, the error message is bundled 
with the DFSStripedOutputStream inner exception message. The error message 
should be as simple and clear as it is for a normal hdfs directory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8853) Erasure Coding: Provide ECSchema validation when creating ECZone

2015-08-21 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14706542#comment-14706542
 ] 

Vinayakumar B commented on HDFS-8853:
-

Thanks for the update [~andreina].

A few more nits.

1. {{+boolean validPolicy = false;}} can be moved inside the else block 
itself.

2. {code}+throw new IllegalArgumentException("Policy [ " + 
ecPolicy.getName()
++ " ] does not match any of the supported policies. " +
+"Please select any one of " + ecPolicyNames);
+  }{code}
Just replace this as below:
{code}+throw new HadoopIllegalArgumentException("Policy '" + 
ecPolicy.getName()
++ "' does not match any of the supported policies. " +
+"Please select any one of " + ecPolicyNames);
+  }{code}
This is just to use {{HadoopIllegalArgumentException}} and to make the message 
consistent with the command line, changing "Policy {color:red}[{color} " + 
ecPolicy.getName() + " {color:red}]{color}" to "Policy {color:green}'{color}" + 
ecPolicy.getName() + "{color:green}'{color}".

 Erasure Coding: Provide ECSchema validation when creating ECZone
 

 Key: HDFS-8853
 URL: https://issues.apache.org/jira/browse/HDFS-8853
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: J.Andreina
 Attachments: HDFS-8853-HDFS-7285-01.patch, 
 HDFS-8853-HDFS-7285-merge-02.patch, HDFS-8853-HDFS-7285-merge-03.patch, 
 HDFS-8853-HDFS-7285-merge-04.patch


 Presently the {{DFS#createErasureCodingZone(path, ecSchema, cellSize)}} 
 doesn't have any validation that the given {{ecSchema}} is available in the 
 {{ErasureCodingSchemaManager#activeSchemas}} list. Now, if it doesn't exist, 
 it will create the ECZone with a {{null}} schema. IMHO we could improve this 
 by doing the necessary basic sanity checks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8936) Simplify Erasure Coding Zone DiskSpace quota exceeded exception error message

2015-08-21 Thread GAO Rui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

GAO Rui updated HDFS-8936:
--
Attachment: None EC(Replication) space quota.log
EC space quota.log

 Simplify Erasure Coding Zone DiskSpace quota exceeded exception error message
 -

 Key: HDFS-8936
 URL: https://issues.apache.org/jira/browse/HDFS-8936
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: GAO Rui
Assignee: GAO Rui
 Attachments: EC space quota.log, None EC(Replication) space quota.log


 When an EC directory exceeds its DiskSpace quota, the error message is bundled 
 with the DFSStripedOutputStream inner exception message. The error message 
 should be as simple and clear as it is for a normal hdfs directory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8900) Optimize XAttr memory footprint.

2015-08-21 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-8900:
-
Attachment: HDFS-8900.002.patch

 Optimize XAttr memory footprint.
 

 Key: HDFS-8900
 URL: https://issues.apache.org/jira/browse/HDFS-8900
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-8900.001.patch, HDFS-8900.002.patch


 {code}
 private final ImmutableList<XAttr> xAttrs;
 {code}
 Currently we use the above in XAttrFeature. It's not efficient from a memory 
 point of view, since {{ImmutableList}} and {{XAttr}} have per-object memory 
 overhead, and each object is subject to memory alignment.
 We can use a {{byte[]}} in XAttrFeature and do some compaction in {{XAttr}}.
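
 For illustration, packing could look roughly like the toy routine below; the 
 byte layout here is invented for the example and is not the format the patch 
 defines:
{code}
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.List;

import org.apache.hadoop.fs.XAttr;

// Toy packer. Layout per xattr:
//   [namespace ordinal: 1][name length: 2][name bytes][value length: 2][value bytes]
class XAttrPacker {
  static byte[] pack(List<XAttr> xAttrs) throws IOException {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    DataOutputStream dos = new DataOutputStream(out);
    for (XAttr a : xAttrs) {
      dos.writeByte(a.getNameSpace().ordinal());
      byte[] name = a.getName().getBytes(StandardCharsets.UTF_8);
      dos.writeShort(name.length);
      dos.write(name);
      byte[] value = (a.getValue() == null) ? new byte[0] : a.getValue();
      dos.writeShort(value.length);
      dos.write(value);
    }
    return out.toByteArray();
  }
}
{code}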



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-8900) Optimize XAttr memory footprint.

2015-08-21 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14706763#comment-14706763
 ] 

Yi Liu edited comment on HDFS-8900 at 8/21/15 2:45 PM:
---

Thanks [~andrew.wang], I updated the patch to address your comments.

*About hard limit.*
{quote}
I talked with Colin Patrick McCabe about compatibility offline 
{quote}
Yes, I agree with you and Colin. In fact, when I added the hard limit, the first 
thing I considered was compatibility. I did it for two reasons: 1) as you also 
said, it's unlikely anyone has ever changed the max size; 2) the max limit only 
restricts xattrs in the user's (user/trusted) namespaces, and it doesn't break 
any existing HDFS features.
I felt it was OK to make this modification on trunk and branch-2 directly 
(certainly we could still use 4 bytes, but I wanted to save 2 bytes :)).
I see you agree with the hard limit generally; one question is whether we need 
the hard limit only in trunk. How about doing it in branch-2 as well if we 
think it's fine?

{quote}
Could we exit on setting an xattr size bigger than the hard limit, rather than 
doing a silent min? We should mention this new hard limit somewhere as well.
{quote}
Sure, updated in the new patch. Users will see the hard limit in the error 
message. I updated the description of the xattr max size in hdfs-default.xml 
and mentioned the hard limit there too.

{quote}
Any comment on the max size supported by other filesystems like ext4 or btrfs, 
for reference as to what's reasonable here?
{quote}
It's been a long time since I last read the xattr code of ext4 and other 
filesystems. I remember some filesystems have a limit on the xattr length and 
some don't. It's a detail; a small difference should be OK.

{quote}
Typo in FSDirectory: "should" - "should be". Also the line below it says 
"unlimited" but that'll never get triggered now.
{quote}
Sure, I forgot to remove the "unlimited"...

{quote}
Regarding configuration, we could simplify by just using the hard limit. Admins 
would still have the option of disabling xattrs entirely; is there really any 
value in being able to set something smaller than 32KB? This would definitely 
make it a trunk change.
{quote}
Agreed. How about doing it in a follow-on if we want it later?

*More review comments:* 
{quote}
FSDirectory#addToInodeMap, do we need that new return?
{quote}
Right, no need. I removed it; maybe I added it accidentally :)

{quote}
A bit out of scope and so optional, but I think everywhere we say "prefixName" 
we really want to say "prefixedName" because "prefixName" sounds more like the 
name of the prefix rather than a name with a prefix.
{quote}
Good idea, {{prefixedName}} is much better. I updated all of those in the new 
patch.

{quote}
Some unrelated import changes in FSNamesystemLock and INodeAttributeProvider
{quote}
OK, I reverted these modifications since they are unrelated (I had seen them 
and was planning to remove them).

{quote}
XAttrFeature has an extra import
{quote}
Fixed.

{quote}
What's the reason for switching from ImmutableList to List in some places? The 
switch is also not complete, since I still see some usages of ImmutableList. I 
remember we liked ImmutableList originally since it made the need to set very 
explicit.
{quote}
I think we originally used {{ImmutableList}} mainly because it's immutable: we 
keep it in {{XAttrFeature}} and don't want outside modifications to affect it. 
Now the data is packed into a {{byte[]}} in {{XAttrFeature}}, so an immutable 
list is no longer needed.
I use ArrayList instead of ImmutableList because building an ImmutableList 
requires an additional list copy (from an internal ArrayList), so the 
performance is a bit better.
I missed switching one {{ImmutableList}} in XAttrStorage and have fixed it now.

{quote}
Mind adding Javadoc for SerialNumberMap, and an interface audience private 
annotation?
XAttrsFormat's class javadoc goes over 80 chars, could use interface audience 
private also.
{quote}
Sure, updated them. One thing: {{XAttrFormat}} is {{package-private}}, so there 
is no need to add an audience-private annotation for it.

{quote}
Can we add an IllegalStateException to SerialNumberMap#get(T) for Integer 
overflow? Also there's the case that the int from the map doesn't fit in the 29 
bits in XAttrFormat; check that in XAttrsFormat#toBytes?
{quote}
Sure, I also added the check in XAttrFormat#toBytes. Actually, I want to create 
a follow-on to restrict the total number of xattr names for users' (user/trusted) 
xattrs. For the HDFS kernel itself, the number of xattr names is currently less 
than 10, but if users create many different xattrs (maybe by mistake), it will 
cause unexpected behavior.
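
For illustration, a serial-number map of roughly the following shape interns 
each distinct xattr name as a small integer, so each name is stored only once 
(a simplified sketch with assumed names, not the patch itself):
{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

// Interns each distinct value of T as a small integer so repeated xattr names
// are stored once. Simplified sketch; the real class is defined by the patch.
class SerialNumberMap<T> {
  private final AtomicInteger max = new AtomicInteger(1);
  private final ConcurrentMap<T, Integer> t2i = new ConcurrentHashMap<>();
  private final ConcurrentMap<Integer, T> i2t = new ConcurrentHashMap<>();

  int get(T t) {
    Integer sn = t2i.get(t);
    if (sn == null) {
      sn = max.getAndIncrement();
      if (sn < 0) {
        // int overflow: fail loudly rather than silently wrapping
        throw new IllegalStateException("Too many distinct entries");
      }
      Integer old = t2i.putIfAbsent(t, sn);
      if (old != null) {
        return old; // another thread interned it first
      }
      i2t.put(sn, t);
    }
    return sn;
  }

  T lookup(int i) {
    return i2t.get(i);
  }
}
{code}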

{quote}
Consider also renaming XAttrsFormat to XAttrFormat, so it's named like 
XAttrStorage
{quote}
Good idea.

{quote}
Is it worthwhile to do the same dictionary encoding for the FSImage as well? If 
the # xattrs is large enough to affect memory footprint, it'd also affect 
loading times 

[jira] [Comment Edited] (HDFS-8900) Optimize XAttr memory footprint.

2015-08-21 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14706763#comment-14706763
 ] 

Yi Liu edited comment on HDFS-8900 at 8/21/15 2:08 PM:
---

Thanks [~andrew.wang], I updated the patch to address your comments.

*About hard limit.*
{quote}
I talked with Colin Patrick McCabe about compatibility offline 
{quote}
Yes, I agree with you and Colin. In fact, when I added the hard limit, the first 
thing I considered was compatibility. I did it for two reasons: 1) as you also 
said, it's unlikely anyone has ever changed the max size; 2) the max limit only 
restricts xattrs in the user's (user/trusted) namespaces, and it doesn't break 
any existing HDFS features.
I felt it was OK to make this modification on trunk and branch-2 directly 
(certainly we could still use 4 bytes, but I wanted to save 2 bytes :)).
I see you agree with the hard limit generally; one question is whether we need 
the hard limit only in trunk. How about doing it in branch-2 as well if we 
think it's fine?

{quote}
Could we exit on setting an xattr size bigger than the hard limit, rather than 
doing a silent min? We should mention this new hard limit somewhere as well.
{quote}
Sure, updated in the new patch. Users will see the hard limit in the error 
message. I updated the description of the xattr max size in hdfs-default.xml 
and mentioned the hard limit there too.

{quote}
Any comment on the max size supported by other filesystems like ext4 or btrfs, 
for reference as to what's reasonable here?
{quote}
It's been a long time since I last read the xattr code of ext4 and other 
filesystems. I remember some filesystems have a limit on the xattr length and 
some don't. It's a detail; a small difference should be OK.

{quote}
Typo in FSDirectory: "should" - "should be". Also the line below it says 
"unlimited" but that'll never get triggered now.
{quote}
Sure, I forgot to remove the "unlimited"...

{quote}
Regarding configuration, we could simplify by just using the hard limit. Admins 
would still have the option of disabling xattrs entirely; is there really any 
value in being able to set something smaller than 32KB? This would definitely 
make it a trunk change.
{quote}
Agreed. How about doing it in a follow-on if we want it later?

*More review comments:* 
{quote}
FSDirectory#addToInodeMap, do we need that new return?
{quote}
Right, no need. I removed it; maybe I added it accidentally :)

{quote}
A bit out of scope and so optional, but I think everywhere we say "prefixName" 
we really want to say "prefixedName" because "prefixName" sounds more like the 
name of the prefix rather than a name with a prefix.
{quote}
Good idea, {{prefixedName}} is much better. I updated all of those in the new 
patch.

{quote}
Some unrelated import changes in FSNamesystemLock and INodeAttributeProvider
{quote}
OK, I reverted these modifications since they are unrelated (I had seen them 
and was planning to remove them).

{quote}
XAttrFeature has an extra import
{quote}
Fixed.

{quote}
What's the reason for switching from ImmutableList to List in some places? The 
switch is also not complete, since I still see some usages of ImmutableList. I 
remember we liked ImmutableList originally since it made the need to set very 
explicit.
{quote}
I think we originally used {{ImmutableList}} mainly because it's immutable: we 
keep it in {{XAttrFeature}} and don't want outside modifications to affect it. 
Now the data is packed into a {{byte[]}} in {{XAttrFeature}}, so an immutable 
list is no longer needed.
I use ArrayList instead of ImmutableList because building an ImmutableList 
requires an additional list copy (from an internal ArrayList), so the 
performance is a bit better.
I missed one place in XAttrStorage and have fixed it now.

{quote}
Mind adding Javadoc for SerialNumberMap, and an interface audience private 
annotation?
XAttrsFormat's class javadoc goes over 80 chars, could use interface audience 
private also.
{quote}
Sure, updated them. One thing: {{XAttrFormat}} is {{package-private}}, so there 
is no need to add an audience-private annotation for it.

{quote}
Can we add an IllegalStateException to SerialNumberMap#get(T) for Integer 
overflow? Also there's the case that the int from the map doesn't fit in the 29 
bits in XAttrFormat; check that in XAttrsFormat#toBytes?
{quote}
Sure, I also added the check in XAttrFormat#toBytes. Actually, I want to create 
a follow-on to restrict the total number of xattr names for users' (user/trusted) 
xattrs. For the HDFS kernel itself, the number of xattr names is currently less 
than 10, but if users create many different xattrs (maybe by mistake), it will 
cause unexpected behavior.

{quote}
Consider also renaming XAttrsFormat to XAttrFormat, so it's named like 
XAttrStorage
{quote}
Good idea.

{quote}
Is it worthwhile to do the same dictionary encoding for the FSImage as well? If 
the # xattrs is large enough to affect memory footprint, it'd also affect 
loading times which can already be 

[jira] [Updated] (HDFS-8900) Optimize XAttr memory footprint.

2015-08-21 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-8900:
-
Attachment: HDFS-8900.002.patch

 Optimize XAttr memory footprint.
 

 Key: HDFS-8900
 URL: https://issues.apache.org/jira/browse/HDFS-8900
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-8900.001.patch, HDFS-8900.002.patch


 {code}
 private final ImmutableList<XAttr> xAttrs;
 {code}
 Currently we use the above in XAttrFeature. It's not efficient from a memory 
 point of view, since {{ImmutableList}} and {{XAttr}} have per-object memory 
 overhead, and each object is subject to memory alignment.
 We can use a {{byte[]}} in XAttrFeature and do some compaction in {{XAttr}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8900) Optimize XAttr memory footprint.

2015-08-21 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14706763#comment-14706763
 ] 

Yi Liu commented on HDFS-8900:
--

Thanks [~andrew.wang], I updated the patch to address your comments.

*About hard limit.*
{quote}
I talked with Colin Patrick McCabe about compatibility offline 
{quote}
Yes, I agree with you and Colin. In fact, when I added the hard limit, the first 
thing I considered was compatibility. I did it for two reasons: 1) as you also 
said, it's unlikely anyone has ever changed the max size; 2) the max limit only 
restricts xattrs in the user's (user/trusted) namespaces, and it doesn't break 
any existing HDFS features.
I felt it was OK to make this modification on trunk and branch-2 directly 
(certainly we could still use 4 bytes, but I wanted to save 2 bytes :)).
I see you agree with the hard limit generally; one question is whether we need 
the hard limit only in trunk. How about doing it in branch-2 as well if we 
think it's fine?

{quote}
Could we exit on setting an xattr size bigger than the hard limit, rather than 
doing a silent min? We should mention this new hard limit somewhere as well.
{quote}
Sure, updated in the new patch. Users will see the hard limit in the error 
message. I updated the description of the xattr max size in hdfs-default.xml 
and mentioned the hard limit there too.

{quote}
Any comment on the max size supported by other filesystems like ext4 or btrfs, 
for reference as to what's reasonable here?
{quote}
It's been a long time since I last read the xattr code of ext4 and other 
filesystems. I remember some filesystems have a limit on the xattr length and 
some don't. It's a detail; a small difference should be OK.

{quote}
Typo in FSDirectory: "should" - "should be". Also the line below it says 
"unlimited" but that'll never get triggered now.
{quote}
Sure, I forgot to remove the "unlimited"...

{quote}
Regarding configuration, we could simplify by just using the hard limit. Admins 
would still have the option of disabling xattrs entirely; is there really any 
value in being able to set something smaller than 32KB? This would definitely 
make it a trunk change.
{quote}
Agreed. How about doing it in a follow-on if we want it later?

*More review comments:* 
{quote}
FSDirectory#addToInodeMap, do we need that new return?
{quote}
Right, no need. I removed it; maybe I added it accidentally :)

{quote}
A bit out of scope and so optional, but I think everywhere we say "prefixName" 
we really want to say "prefixedName" because "prefixName" sounds more like the 
name of the prefix rather than a name with a prefix.
{quote}
Good idea, {{prefixedName}} is much better. I updated all of those in the new 
patch.

{quote}
Some unrelated import changes in FSNamesystemLock and INodeAttributeProvider
{quote}
OK, I reverted these modifications since they are unrelated (I had seen them 
and was planning to remove them).

{quote}
XAttrFeature has an extra import
{quote}
Fixed.

{quote}
What's the reason for switching from ImmutableList to List in some places? The 
switch is also not complete, since I still see some usages of ImmutableList. I 
remember we liked ImmutableList originally since it made the need to set very 
explicit.
{quote}
I think we originally used {{ImmutableList}} mainly because it's immutable: we 
keep it in {{XAttrFeature}} and don't want outside modifications to affect it. 
Now the data is packed into a {{byte[]}} in {{XAttrFeature}}, so an immutable 
list is no longer needed.
I use ArrayList instead of ImmutableList because building an ImmutableList 
requires an additional list copy (from an internal ArrayList), so the 
performance is a bit better.

{quote}
Mind adding Javadoc for SerialNumberMap, and an interface audience private 
annotation?
XAttrsFormat's class javadoc goes over 80 chars, could use interface audience 
private also.
{quote}
Sure, updated them. One thing: {{XAttrFormat}} is {{package-private}}, so there 
is no need to add an audience-private annotation for it.

{quote}
Can we add an IllegalStateException to SerialNumberMap#get(T) for Integer 
overflow? Also there's the case that the int from the map doesn't fit in the 29 
bits in XAttrFormat; check that in XAttrsFormat#toBytes?
{quote}
Sure, I also added the check in XAttrFormat#toBytes. Actually, I want to create 
a follow-on to restrict the total number of xattr names for users' (user/trusted) 
xattrs. For the HDFS kernel itself, the number of xattr names is currently less 
than 10, but if users create many different xattrs (maybe by mistake), it will 
cause unexpected behavior.

{quote}
Consider also renaming XAttrsFormat to XAttrFormat, so it's named like 
XAttrStorage
{quote}
Good idea.

{quote}
Is it worthwhile to do the same dictionary encoding for the FSImage as well? If 
the # xattrs is large enough to affect memory footprint, it'd also affect 
loading times which can already be minutes. Can be a follow-on JIRA for sure.
{quote}
Actually we already have this compaction for XAttr 

[jira] [Updated] (HDFS-8900) Optimize XAttr memory footprint.

2015-08-21 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-8900:
-
Status: Open  (was: Patch Available)

 Optimize XAttr memory footprint.
 

 Key: HDFS-8900
 URL: https://issues.apache.org/jira/browse/HDFS-8900
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-8900.001.patch, HDFS-8900.002.patch


 {code}
 private final ImmutableList<XAttr> xAttrs;
 {code}
 Currently we use the above in XAttrFeature. It's not efficient from a memory 
 point of view, since {{ImmutableList}} and {{XAttr}} have per-object memory 
 overhead, and each object is subject to memory alignment.
 We can use a {{byte[]}} in XAttrFeature and do some compaction in {{XAttr}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8900) Compact XAttrs to optimize memory footprint.

2015-08-21 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14706823#comment-14706823
 ] 

Yi Liu commented on HDFS-8900:
--

(I don't know why Jenkins was not triggered after submitting the patch; I have 
now triggered it manually.)

 Compact XAttrs to optimize memory footprint.
 

 Key: HDFS-8900
 URL: https://issues.apache.org/jira/browse/HDFS-8900
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-8900.001.patch, HDFS-8900.002.patch


 {code}
 private final ImmutableList<XAttr> xAttrs;
 {code}
 Currently we use the above in XAttrFeature. It's not efficient from a memory 
 point of view, since {{ImmutableList}} and {{XAttr}} have per-object memory 
 overhead, and each object is subject to memory alignment.
 We can use a {{byte[]}} in XAttrFeature and do some compaction in {{XAttr}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8388) Time and Date format need to be in sync in Namenode UI page

2015-08-21 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-8388:
-
Attachment: HDFS-8388-005.patch

Attached an updated patch; please review.

 Time and Date format need to be in sync in Namenode UI page
 ---

 Key: HDFS-8388
 URL: https://issues.apache.org/jira/browse/HDFS-8388
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Archana T
Assignee: Surendra Singh Lilhore
Priority: Minor
 Attachments: HDFS-8388-002.patch, HDFS-8388-003.patch, 
 HDFS-8388-004.patch, HDFS-8388-005.patch, HDFS-8388.patch, HDFS-8388_1.patch, 
 ScreenShot-InvalidDate.png


 On the NameNode UI page, the date and time formats displayed are currently not 
 in sync:
 Started: Wed May 13 12:28:02 IST 2015
 Compiled: 23 Apr 2015 12:22:59
 Block Deletion Start Time: 13 May 2015 12:28:02
 We can keep a common format in all of the above places.
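
 One way to do this is to render every timestamp through a single shared 
 formatter. The NameNode web UI actually formats these in its JavaScript 
 templates; the Java sketch below only illustrates the idea, and the format 
 string is just an example:
{code}
import java.text.SimpleDateFormat;
import java.util.Date;

// Illustrative only: one shared format for "Started", "Compiled" and
// "Block Deletion Start Time".
class NNDateFormat {
  private static final SimpleDateFormat FMT =
      new SimpleDateFormat("EEE MMM dd HH:mm:ss zzz yyyy");

  static String render(long timeMillis) {
    return FMT.format(new Date(timeMillis));
  }
}
{code}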



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8900) Optimize XAttr memory footprint.

2015-08-21 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-8900:
-
Status: Patch Available  (was: Open)

 Optimize XAttr memory footprint.
 

 Key: HDFS-8900
 URL: https://issues.apache.org/jira/browse/HDFS-8900
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-8900.001.patch, HDFS-8900.002.patch


 {code}
 private final ImmutableList<XAttr> xAttrs;
 {code}
 Currently we use the above in XAttrFeature. It's not efficient from a memory 
 point of view, since {{ImmutableList}} and {{XAttr}} have per-object memory 
 overhead, and each object is subject to memory alignment.
 We can use a {{byte[]}} in XAttrFeature and do some compaction in {{XAttr}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8900) Optimize XAttr memory footprint.

2015-08-21 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-8900:
-
Attachment: (was: HDFS-8900.002.patch)

 Optimize XAttr memory footprint.
 

 Key: HDFS-8900
 URL: https://issues.apache.org/jira/browse/HDFS-8900
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-8900.001.patch


 {code}
 private final ImmutableList<XAttr> xAttrs;
 {code}
 Currently we use the above in XAttrFeature. It's not efficient from a memory 
 point of view, since {{ImmutableList}} and {{XAttr}} have per-object memory 
 overhead, and each object is subject to memory alignment.
 We can use a {{byte[]}} in XAttrFeature and do some compaction in {{XAttr}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8900) Compact XAttrs to optimize memory footprint.

2015-08-21 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-8900:
-
Summary: Compact XAttrs to optimize memory footprint.  (was: Optimize XAttr 
memory footprint.)

 Compact XAttrs to optimize memory footprint.
 

 Key: HDFS-8900
 URL: https://issues.apache.org/jira/browse/HDFS-8900
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-8900.001.patch, HDFS-8900.002.patch


 {code}
 private final ImmutableList<XAttr> xAttrs;
 {code}
 Currently we use the above in XAttrFeature. It's not efficient from a memory 
 point of view, since {{ImmutableList}} and {{XAttr}} have per-object memory 
 overhead, and each object is subject to memory alignment.
 We can use a {{byte[]}} in XAttrFeature and do some compaction in {{XAttr}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8933) Inotify : Support Event-based filtering

2015-08-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-8933:
---
Issue Type: Improvement  (was: Sub-task)
Parent: (was: HDFS-6634)

 Inotify : Support Event-based filtering
 ---

 Key: HDFS-8933
 URL: https://issues.apache.org/jira/browse/HDFS-8933
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client, namenode, qjm
Reporter: Surendra Singh Lilhore
Assignee: Surendra Singh Lilhore
 Fix For: 2.6.0






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8933) Inotify : Support Event-based filtering

2015-08-21 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14706846#comment-14706846
 ] 

Allen Wittenauer commented on HDFS-8933:


Converting this to an issue since the inotify umbrella jira was 
closed/committed already.

 Inotify : Support Event-based filtering
 ---

 Key: HDFS-8933
 URL: https://issues.apache.org/jira/browse/HDFS-8933
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client, namenode, qjm
Reporter: Surendra Singh Lilhore
Assignee: Surendra Singh Lilhore
 Fix For: 2.6.0






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8933) Inotify : Support Event-based filtering

2015-08-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-8933:
---
Fix Version/s: (was: 2.6.0)

 Inotify : Support Event-based filtering
 ---

 Key: HDFS-8933
 URL: https://issues.apache.org/jira/browse/HDFS-8933
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client, namenode, qjm
Reporter: Surendra Singh Lilhore
Assignee: Surendra Singh Lilhore





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8932) NPE thrown in NameNode when try to get TotalSyncCount metric before editLogStream initialization

2015-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707340#comment-14707340
 ] 

Hadoop QA commented on HDFS-8932:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  21m 13s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   9m  7s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  12m 38s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 28s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 41s | The applied patch generated  1 
new checkstyle issues (total was 383, now 383). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 52s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 42s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   3m 11s | The patch appears to introduce 1 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 42s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  83m 16s | Tests failed in hadoop-hdfs. |
| | | 137m 54s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Failed unit tests | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
| Timed out tests | 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12751660/HDFS-8932.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 22de7c1 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12069/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12069/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12069/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12069/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12069/console |


This message was automatically generated.

 NPE thrown in NameNode when try to get TotalSyncCount metric before 
 editLogStream initialization
 --

 Key: HDFS-8932
 URL: https://issues.apache.org/jira/browse/HDFS-8932
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Surendra Singh Lilhore
Assignee: Surendra Singh Lilhore
 Attachments: HDFS-8932.patch
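
 There is no description here, but the title suggests the metric getter 
 dereferences the edit log stream before it is initialized; the likely shape of 
 the fix is a null guard (assumed names below, not necessarily the actual 
 patch):
{code}
// In FSEditLog (sketch): return a default value instead of dereferencing an
// editLogStream that has not been initialized yet.
public long getTotalSyncCount() {
  if (editLogStream != null) {
    return editLogStream.getNumSync();
  }
  return 0;
}
{code}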






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8900) Compact XAttrs to optimize memory footprint.

2015-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707359#comment-14707359
 ] 

Hadoop QA commented on HDFS-8900:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m  2s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 43s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 48s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 27s | The applied patch generated  7 
new checkstyle issues (total was 516, now 517). |
| {color:red}-1{color} | whitespace |   0m  1s | The patch has 4  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 29s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   2m 34s | The patch appears to introduce 2 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m  8s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 128m 28s | Tests failed in hadoop-hdfs. |
| | | 173m 42s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Failed unit tests | hadoop.hdfs.server.namenode.TestStartup |
| Timed out tests | 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12751731/HDFS-8900.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 22de7c1 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12066/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12066/artifact/patchprocess/whitespace.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12066/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12066/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12066/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12066/console |


This message was automatically generated.

 Compact XAttrs to optimize memory footprint.
 

 Key: HDFS-8900
 URL: https://issues.apache.org/jira/browse/HDFS-8900
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-8900.001.patch, HDFS-8900.002.patch


 {code}
 private final ImmutableList<XAttr> xAttrs;
 {code}
 Currently we use the above in XAttrFeature. It's not efficient from a memory 
 point of view, since {{ImmutableList}} and {{XAttr}} have per-object memory 
 overhead, and each object is subject to memory alignment.
 We can use a {{byte[]}} in XAttrFeature and do some compaction in {{XAttr}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8900) Compact XAttrs to optimize memory footprint.

2015-08-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-8900:
--
Hadoop Flags: Incompatible change
Release Note: The config key dfs.namenode.fs-limits.max-xattr-size can no 
longer be set to a value of 0 (previously used to indicate unlimited) or a 
value greater than 32KB. This is a constraint on xattr size similar to many 
local filesystems.

 Compact XAttrs to optimize memory footprint.
 

 Key: HDFS-8900
 URL: https://issues.apache.org/jira/browse/HDFS-8900
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-8900.001.patch, HDFS-8900.002.patch


 {code}
 private final ImmutableList<XAttr> xAttrs;
 {code}
 Currently we use the above in XAttrFeature. It's not efficient from a memory 
 point of view, since {{ImmutableList}} and {{XAttr}} have per-object memory 
 overhead, and each object is subject to memory alignment.
 We can use a {{byte[]}} in XAttrFeature and do some compaction in {{XAttr}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8853) Erasure Coding: Provide ECSchema validation when creating ECZone

2015-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707460#comment-14707460
 ] 

Hadoop QA commented on HDFS-8853:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  15m 32s | Findbugs (version ) appears to 
be broken on HDFS-7285. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   8m 37s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m  9s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 17s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 43s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 57s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 39s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   3m 53s | The patch appears to introduce 5 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 55s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 126m 48s | Tests failed in hadoop-hdfs. |
| | | 173m 34s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Failed unit tests | hadoop.hdfs.server.namenode.ha.TestDNFencing |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
| Timed out tests | org.apache.hadoop.hdfs.server.namenode.TestFsck |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12751685/HDFS-8853-HDFS-7285-merge-04.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7285 / b57c9a3 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12071/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12071/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12071/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12071/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12071/console |


This message was automatically generated.

 Erasure Coding: Provide ECSchema validation when creating ECZone
 

 Key: HDFS-8853
 URL: https://issues.apache.org/jira/browse/HDFS-8853
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: J.Andreina
 Attachments: HDFS-8853-HDFS-7285-01.patch, 
 HDFS-8853-HDFS-7285-merge-02.patch, HDFS-8853-HDFS-7285-merge-03.patch, 
 HDFS-8853-HDFS-7285-merge-04.patch


 Presently the {{DFS#createErasureCodingZone(path, ecSchema, cellSize)}} 
 doesn't have any validation that the given {{ecSchema}} is available in the 
 {{ErasureCodingSchemaManager#activeSchemas}} list. Now, if it doesn't exist, 
 it will create the ECZone with a {{null}} schema. IMHO we could improve this 
 by doing the necessary basic sanity checks.
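 A minimal sketch of such a check (hedged: the manager lookup method and the 
 schema-name accessor are assumptions for illustration, not the actual 
 HDFS-7285 API):
 {code}
 // Hedged sketch: refuse to create an ECZone whose schema is not registered.
 // ErasureCodingSchemaManager#getSchema(String) is assumed for illustration.
 static ECSchema validateSchema(ECSchema requested,
     ErasureCodingSchemaManager manager) {
   if (requested == null) {
     throw new HadoopIllegalArgumentException("ecSchema must not be null");
   }
   ECSchema active = manager.getSchema(requested.getSchemaName());
   if (active == null) {
     throw new HadoopIllegalArgumentException("Schema "
         + requested.getSchemaName() + " is not in the active schema list");
   }
   return active; // never hand a null schema to the zone creation path
 }
 {code}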



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8922) Link the native_mini_dfs test library with libdl, since IBM Java requires it

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707440#comment-14707440
 ] 

Hudson commented on HDFS-8922:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #294 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/294/])
HDFS-8922. Link the native_mini_dfs test library with libdl, since IBM Java 
requires it (Ayappan via Colin P. McCabe) (cmccabe: rev 
7642f64c24961d2b4772591a0957e2699162a083)
* hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Link the native_mini_dfs test library with libdl, since IBM Java requires it
 

 Key: HDFS-8922
 URL: https://issues.apache.org/jira/browse/HDFS-8922
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Affects Versions: 2.7.1
 Environment: IBM Java RHEL7.1 
Reporter: Ayappan
Assignee: Ayappan
 Fix For: 2.8.0

 Attachments: HDFS-8922.patch


 Building hadoop-hdfs-project with the -Pnative option using IBM Java fails 
 with the following error:
 [exec] Linking C executable test_native_mini_dfs
  [exec] /usr/bin/cmake -E cmake_link_script 
 CMakeFiles/test_native_mini_dfs.dir/link.txt --verbose=1
  [exec] /usr/bin/cc   -g -Wall -O2 -D_REENTRANT -D_GNU_SOURCE 
 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -fvisibility=hidden
 CMakeFiles/test_native_mini_dfs.dir/main/native/libhdfs/test_native_mini_dfs.c.o
   -o test_native_mini_dfs -rdynamic libnative_mini_dfs.a 
 /home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic/libjvm.so -lpthread 
 -Wl,-rpath,/home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic
  [exec] make[2]: Leaving directory 
 `/home/ayappan/hadoop_2.7.1_new/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/native'
  [exec] make[1]: Leaving directory 
 `/home/ayappan/hadoop_2.7.1_new/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/native'
  [exec] 
 /home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic/libjvm.so: 
 undefined reference to `dlopen'
  [exec] 
 /home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic/libjvm.so: 
 undefined reference to `dlclose'
  [exec] 
 /home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic/libjvm.so: 
 undefined reference to `dlerror'
  [exec] 
 /home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic/libjvm.so: 
 undefined reference to `dlsym'
  [exec] 
 /home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic/libjvm.so: 
 undefined reference to `dladdr'
  [exec] collect2: error: ld returned 1 exit status
  [exec] make[2]: *** [test_native_mini_dfs] Error 1
  [exec] make[1]: *** [CMakeFiles/test_native_mini_dfs.dir/all] Error 2
  [exec] make: *** [all] Error 2
 It seems like the IBM jvm requires libdl for linking in native_mini_dfs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8828) Utilize Snapshot diff report to build diff copy list in distcp

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707441#comment-14707441
 ] 

Hudson commented on HDFS-8828:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #294 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/294/])
HDFS-8828. Utilize Snapshot diff report to build diff copy list in distcp. 
(Yufei Gu via Yongjun Zhang) (yzhang: rev 
0bc15cb6e60dc60885234e01dec1c7cb4557a926)
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyListing.java
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java
* 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpSync.java
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
* hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java
* hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DiffInfo.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java
* 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestOptionsParser.java


 Utilize Snapshot diff report to build diff copy list in distcp
 --

 Key: HDFS-8828
 URL: https://issues.apache.org/jira/browse/HDFS-8828
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: distcp, snapshots
Reporter: Yufei Gu
Assignee: Yufei Gu
 Fix For: 2.8.0

 Attachments: HDFS-8828.001.patch, HDFS-8828.002.patch, 
 HDFS-8828.003.patch, HDFS-8828.004.patch, HDFS-8828.005.patch, 
 HDFS-8828.006.patch, HDFS-8828.007.patch, HDFS-8828.008.patch, 
 HDFS-8828.009.patch, HDFS-8828.010.patch, HDFS-8828.011.patch


 Some users reported a huge time cost to build the file copy list in distcp 
 (30 hours for 1.6M files). We can leverage the snapshot diff report to build 
 a copy list containing only the files/dirs that changed between two 
 snapshots (or a snapshot and a normal dir). This speeds up the process in 
 two ways: 1. less copy-list building time. 2. fewer file copy MR jobs.
 The HDFS snapshot diff report provides information about file/directory 
 creation, deletion, rename and modification between two snapshots, or 
 between a snapshot and a normal directory. HDFS-7535 synchronizes deletion 
 and rename, then falls back to the default distcp, so it still relies on the 
 default distcp to build a complete list of files under the source dir. This 
 patch puts only created and modified files into the copy list based on the 
 snapshot diff report, minimizing the number of files to copy.
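 For reference, a minimal sketch of reading the diff report this approach 
 consumes (the snapshot root and snapshot names are illustrative):
 {code}
 // Hedged sketch: creations/modifications between two snapshots are the
 // input from which the diff-based copy list is built.
 import java.io.IOException;
 import java.util.ArrayList;
 import java.util.List;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport;
 import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport.DiffReportEntry;
 import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport.DiffType;

 static List<DiffReportEntry> copyCandidates(DistributedFileSystem dfs)
     throws IOException {
   SnapshotDiffReport report =
       dfs.getSnapshotDiffReport(new Path("/src"), "s1", "s2");
   List<DiffReportEntry> toCopy = new ArrayList<>();
   for (DiffReportEntry e : report.getDiffList()) {
     if (e.getType() == DiffType.CREATE || e.getType() == DiffType.MODIFY) {
       toCopy.add(e); // DELETE/RENAME are synced first per HDFS-7535
     }
   }
   return toCopy;
 }
 {code}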



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8809) HDFS fsck reports under construction blocks as CORRUPT

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707439#comment-14707439
 ] 

Hudson commented on HDFS-8809:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #294 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/294/])
HDFS-8809. HDFS fsck reports under construction blocks as CORRUPT. Contributed 
by Jing Zhao. (jing9: rev c8bca62718203a1dad9b70d164bdf10cc71b40cd)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java


 HDFS fsck reports under construction blocks as CORRUPT
 

 Key: HDFS-8809
 URL: https://issues.apache.org/jira/browse/HDFS-8809
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 2.7.0
 Environment: Hadoop 2.7.1 and HBase 1.1.1, on SUSE11sp3 (other 
 Linuxes not tested, probably not platform-dependent).  This did NOT happen 
 with Hadoop 2.4 and HBase 0.98.
Reporter: Sudhir Prakash
Assignee: Jing Zhao
 Fix For: 2.8.0

 Attachments: HDFS-8809.000.patch


 Whenever HBase is running, "hdfs fsck /" reports four HBase-related files 
 under hbase/data/WALs/ as CORRUPT. Even after letting the cluster sit idle 
 for a couple of hours, it is still in the corrupt state. If HBase is shut 
 down, the problem goes away. If HBase is then restarted, the problem recurs. 
 This was observed with Hadoop 2.7.1 and HBase 1.1.1, and did NOT happen with 
 Hadoop 2.4 and HBase 0.98.
 {code}
 hades1:/var/opt/teradata/packages # su hdfs
 hdfs@hades1:/var/opt/teradata/packages> hdfs fsck /
 Connecting to namenode via 
 http://hades1.labs.teradata.com:50070/fsck?ugi=hdfs&path=%2F
 FSCK started by hdfs (auth:SIMPLE) from /39.0.8.2 for path / at Wed Jun 24 
 20:40:17 GMT 2015
 ...
 /apps/hbase/data/WALs/hades4.labs.teradata.com,16020,1435168292684/hades4.labs.teradata.com%2C16020%2C1435168292684.default.1435175500556:
  MISSING 1 blocks of total size 83 B.
 /apps/hbase/data/WALs/hades5.labs.teradata.com,16020,1435168290466/hades5.labs.teradata.com%2C16020%2C1435168290466..meta.1435175562144.meta:
  MISSING 1 blocks of total size 83 B.
 /apps/hbase/data/WALs/hades5.labs.teradata.com,16020,1435168290466/hades5.labs.teradata.com%2C16020%2C1435168290466.default.1435175498500:
  MISSING 1 blocks of total size 83 B.
 /apps/hbase/data/WALs/hades6.labs.teradata.com,16020,1435168292373/hades6.labs.teradata.com%2C16020%2C1435168292373.default.1435175500301:
  MISSING 1 blocks of total size 83 
 B..
 
 
 Status:
  CORRUPT
  Total size:723977553 B (Total open files size: 332 B)
  Total dirs:79
  Total files:   388
  Total symlinks:0 (Files currently being written: 5)
  Total blocks (validated):  387 (avg. block size 1870743 B) (Total open 
 file blocks (not validated): 4)
   
   UNDER MIN REPL'D BLOCKS:  4 (1.0335917 %)
   dfs.namenode.replication.min: 1
   CORRUPT FILES:4
   MISSING BLOCKS:   4
   MISSING SIZE: 332 B
   
  Minimally replicated blocks:   387 (100.0 %)
  Over-replicated blocks:0 (0.0 %)
  Under-replicated blocks:   0 (0.0 %)
  Mis-replicated blocks: 0 (0.0 %)
  Default replication factor:3
  Average block replication: 3.0
  Corrupt blocks:0
  Missing replicas:  0 (0.0 %)
  Number of data-nodes:  3
  Number of racks:   1
 FSCK ended at Wed Jun 24 20:40:17 GMT 2015 in 7 milliseconds
 The filesystem under path '/' is CORRUPT
 hdfs@hades1:/var/opt/teradata/packages
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8891) HDFS concat should keep srcs order

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707442#comment-14707442
 ] 

Hudson commented on HDFS-8891:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #294 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/294/])
HDFS-8891. HDFS concat should keep srcs order. Contributed by Yong Zhang. 
(cdouglas: rev b0564c9f3c501bf7806f07649929038624dea10f)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 HDFS concat should keep srcs order
 --

 Key: HDFS-8891
 URL: https://issues.apache.org/jira/browse/HDFS-8891
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yong Zhang
Assignee: Yong Zhang
Priority: Blocker
 Fix For: 2.7.2

 Attachments: HDFS-8891.001.patch, HDFS-8891.002.patch


 FSDirConcatOp.verifySrcFiles may change the src files' order, but it should 
 keep the same order as the input.
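 A minimal sketch of the order-preserving idea (illustrative only, not the 
 committed fix): de-duplicate with a structure that retains insertion order.
 {code}
 // Hedged sketch: a LinkedHashSet rejects duplicate srcs during verification
 // while still returning them in the caller's original order.
 import java.util.LinkedHashSet;
 import java.util.Set;

 static String[] dedupPreservingOrder(String[] srcs) {
   Set<String> ordered = new LinkedHashSet<>();
   for (String src : srcs) {
     ordered.add(src); // insertion order is retained
   }
   return ordered.toArray(new String[0]);
 }
 {code}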



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8863) The remaining space check in BlockPlacementPolicyDefault is flawed

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707444#comment-14707444
 ] 

Hudson commented on HDFS-8863:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #294 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/294/])
HDFS-8863. The remaining space check in BlockPlacementPolicyDefault is flawed. 
(Kihwal Lee via yliu) (yliu: rev 5e8fe8943718309b5e39a794360aebccae28b331)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 The remaining space check in BlockPlacementPolicyDefault is flawed
 --

 Key: HDFS-8863
 URL: https://issues.apache.org/jira/browse/HDFS-8863
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Critical
  Labels: 2.6.1-candidate
 Fix For: 2.7.2

 Attachments: HDFS-8863.patch, HDFS-8863.v2.patch, HDFS-8863.v3.patch


 The block placement policy calls 
 {{DatanodeDescriptor#getRemaining(StorageType)}} to check whether the block 
 is going to fit. Since the method adds up the remaining space of all 
 storages, the namenode can allocate a new block on a full node. This causes 
 pipeline construction failure and {{abandonBlock}}. If the cluster is nearly 
 full, the client might hit this multiple times and the write can fail 
 permanently.
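 The direction of a fix, as a hedged sketch (the accessors exist on 
 {{DatanodeStorageInfo}}, but the committed patch may be structured 
 differently): require a single storage of the requested type to fit the 
 block on its own, rather than summing across storages.
 {code}
 // Hedged sketch: a node is a candidate only if at least one storage of the
 // requested type has enough remaining space for the new block.
 import org.apache.hadoop.fs.StorageType;
 import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo;

 static boolean canFitOnSomeStorage(DatanodeStorageInfo[] storages,
     StorageType required, long blockSize) {
   for (DatanodeStorageInfo s : storages) {
     if (s.getStorageType() == required && s.getRemaining() >= blockSize) {
       return true;
     }
   }
   return false; // summing remaining space would wrongly accept a full node
 }
 {code}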



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8884) Fail-fast check in BlockPlacementPolicyDefault#chooseTarget

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707446#comment-14707446
 ] 

Hudson commented on HDFS-8884:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #294 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/294/])
HDFS-8884. Fail-fast check in BlockPlacementPolicyDefault#chooseTarget. (yliu) 
(yliu: rev 80a29906bcd718bbba223fa099e523281d9f3369)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyWithNodeGroup.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDefaultBlockPlacementPolicy.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Fail-fast check in BlockPlacementPolicyDefault#chooseTarget
 ---

 Key: HDFS-8884
 URL: https://issues.apache.org/jira/browse/HDFS-8884
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 2.8.0

 Attachments: HDFS-8884.001.patch, HDFS-8884.002.patch


 In the current BlockPlacementPolicyDefault, when choosing a datanode storage 
 to place a block, we have the following logic:
 {code}
 final DatanodeStorageInfo[] storages = DFSUtil.shuffle(
     chosenNode.getStorageInfos());
 int i = 0;
 boolean search = true;
 for (Iterator<Map.Entry<StorageType, Integer>> iter = storageTypes
     .entrySet().iterator(); search && iter.hasNext(); ) {
   Map.Entry<StorageType, Integer> entry = iter.next();
   for (i = 0; i < storages.length; i++) {
     StorageType type = entry.getKey();
     final int newExcludedNodes = addIfIsGoodTarget(storages[i],
 {code}
 We iterate (across two {{for}} loops, although the counts are usually small) 
 over all storages of the candidate datanode even when the datanode itself is 
 not a good target (e.g. decommissioned, stale, too busy), since currently we 
 do all the checks in {{addIfIsGoodTarget}}.
 We can fail fast: check the datanode-related conditions first; if the 
 datanode is not good, there is no need to shuffle and iterate the storages. 
 That is more efficient.
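 A hedged sketch of the fail-fast restructuring ({{isGoodDatanode}} below 
 stands in for the datanode-only portion of the existing checks):
 {code}
 // Hedged sketch: evaluate node-level conditions once, before shuffling or
 // iterating storages; bail out early if the node itself is unusable.
 if (!isGoodDatanode(chosenNode)) { // decommissioned, stale, too busy, ...
   return -1; // fail fast: storages of a bad node need not be examined
 }
 final DatanodeStorageInfo[] storages = DFSUtil.shuffle(
     chosenNode.getStorageInfos());
 // ... continue with the storage-level checks only
 {code}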



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8934) Move ShortCircuitShm to hdfs-client

2015-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707466#comment-14707466
 ] 

Hadoop QA commented on HDFS-8934:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  21m 39s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   9m 19s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 35s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 26s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   3m  1s | The applied patch generated  
169 new checkstyle issues (total was 0, now 169). |
| {color:red}-1{color} | whitespace |   0m  5s | The patch has 2  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   2m  7s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 43s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   6m 53s | The patch appears to introduce 2 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 38s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 105m 12s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 29s | Tests passed in 
hadoop-hdfs-client. |
| | | 165m 14s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-client |
| Failed unit tests | hadoop.hdfs.TestReplaceDatanodeOnFailure |
|   | hadoop.hdfs.web.TestWebHDFS |
|   | hadoop.hdfs.TestPread |
| Timed out tests | org.apache.hadoop.hdfs.server.namenode.TestFileTruncate |
|   | org.apache.hadoop.hdfs.server.namenode.TestDeleteRace |
|   | 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12751667/HDFS-8934.000.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 22de7c1 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12073/artifact/patchprocess/diffcheckstylehadoop-hdfs-client.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12073/artifact/patchprocess/whitespace.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12073/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-client.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12073/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12073/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12073/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12073/console |


This message was automatically generated.

 Move ShortCircuitShm to hdfs-client
 ---

 Key: HDFS-8934
 URL: https://issues.apache.org/jira/browse/HDFS-8934
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: build
Reporter: Mingliang Liu
Assignee: Mingliang Liu
 Fix For: 2.8.0

 Attachments: HDFS-8934.000.patch


 This jira tracks the effort of moving the {{ShortCircuitShm}} class into the 
 hdfs-client module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8828) Utilize Snapshot diff report to build diff copy list in distcp

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707317#comment-14707317
 ] 

Hudson commented on HDFS-8828:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #291 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/291/])
HDFS-8828. Utilize Snapshot diff report to build diff copy list in distcp. 
(Yufei Gu via Yongjun Zhang) (yzhang: rev 
0bc15cb6e60dc60885234e01dec1c7cb4557a926)
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestOptionsParser.java
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java
* hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DiffInfo.java
* 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpSync.java
* hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyListing.java
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java


 Utilize Snapshot diff report to build diff copy list in distcp
 --

 Key: HDFS-8828
 URL: https://issues.apache.org/jira/browse/HDFS-8828
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: distcp, snapshots
Reporter: Yufei Gu
Assignee: Yufei Gu
 Fix For: 2.8.0

 Attachments: HDFS-8828.001.patch, HDFS-8828.002.patch, 
 HDFS-8828.003.patch, HDFS-8828.004.patch, HDFS-8828.005.patch, 
 HDFS-8828.006.patch, HDFS-8828.007.patch, HDFS-8828.008.patch, 
 HDFS-8828.009.patch, HDFS-8828.010.patch, HDFS-8828.011.patch


 Some users reported a huge time cost to build the file copy list in distcp 
 (30 hours for 1.6M files). We can leverage the snapshot diff report to build 
 a copy list containing only the files/dirs that changed between two 
 snapshots (or a snapshot and a normal dir). This speeds up the process in 
 two ways: 1. less copy-list building time. 2. fewer file copy MR jobs.
 The HDFS snapshot diff report provides information about file/directory 
 creation, deletion, rename and modification between two snapshots, or 
 between a snapshot and a normal directory. HDFS-7535 synchronizes deletion 
 and rename, then falls back to the default distcp, so it still relies on the 
 default distcp to build a complete list of files under the source dir. This 
 patch puts only created and modified files into the copy list based on the 
 snapshot diff report, minimizing the number of files to copy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8922) Link the native_mini_dfs test library with libdl, since IBM Java requires it

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707316#comment-14707316
 ] 

Hudson commented on HDFS-8922:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #291 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/291/])
HDFS-8922. Link the native_mini_dfs test library with libdl, since IBM Java 
requires it (Ayappan via Colin P. McCabe) (cmccabe: rev 
7642f64c24961d2b4772591a0957e2699162a083)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt


 Link the native_mini_dfs test library with libdl, since IBM Java requires it
 

 Key: HDFS-8922
 URL: https://issues.apache.org/jira/browse/HDFS-8922
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Affects Versions: 2.7.1
 Environment: IBM Java RHEL7.1 
Reporter: Ayappan
Assignee: Ayappan
 Fix For: 2.8.0

 Attachments: HDFS-8922.patch


 Building hadoop-hdfs-project with the -Pnative option using IBM Java fails 
 with the following error:
 [exec] Linking C executable test_native_mini_dfs
  [exec] /usr/bin/cmake -E cmake_link_script 
 CMakeFiles/test_native_mini_dfs.dir/link.txt --verbose=1
  [exec] /usr/bin/cc   -g -Wall -O2 -D_REENTRANT -D_GNU_SOURCE 
 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -fvisibility=hidden
 CMakeFiles/test_native_mini_dfs.dir/main/native/libhdfs/test_native_mini_dfs.c.o
   -o test_native_mini_dfs -rdynamic libnative_mini_dfs.a 
 /home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic/libjvm.so -lpthread 
 -Wl,-rpath,/home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic
  [exec] make[2]: Leaving directory 
 `/home/ayappan/hadoop_2.7.1_new/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/native'
  [exec] make[1]: Leaving directory 
 `/home/ayappan/hadoop_2.7.1_new/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/native'
  [exec] 
 /home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic/libjvm.so: 
 undefined reference to `dlopen'
  [exec] 
 /home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic/libjvm.so: 
 undefined reference to `dlclose'
  [exec] 
 /home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic/libjvm.so: 
 undefined reference to `dlerror'
  [exec] 
 /home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic/libjvm.so: 
 undefined reference to `dlsym'
  [exec] 
 /home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic/libjvm.so: 
 undefined reference to `dladdr'
  [exec] collect2: error: ld returned 1 exit status
  [exec] make[2]: *** [test_native_mini_dfs] Error 1
  [exec] make[1]: *** [CMakeFiles/test_native_mini_dfs.dir/all] Error 2
  [exec] make: *** [all] Error 2
 It seems like the IBM jvm requires libdl for linking in native_mini_dfs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8809) HDFS fsck reports under construction blocks as CORRUPT

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707315#comment-14707315
 ] 

Hudson commented on HDFS-8809:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #291 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/291/])
HDFS-8809. HDFS fsck reports under construction blocks as CORRUPT. Contributed 
by Jing Zhao. (jing9: rev c8bca62718203a1dad9b70d164bdf10cc71b40cd)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java


 HDFS fsck reports under construction blocks as CORRUPT
 

 Key: HDFS-8809
 URL: https://issues.apache.org/jira/browse/HDFS-8809
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 2.7.0
 Environment: Hadoop 2.7.1 and HBase 1.1.1, on SUSE11sp3 (other 
 Linuxes not tested, probably not platform-dependent).  This did NOT happen 
 with Hadoop 2.4 and HBase 0.98.
Reporter: Sudhir Prakash
Assignee: Jing Zhao
 Fix For: 2.8.0

 Attachments: HDFS-8809.000.patch


 Whenever HBase is running, "hdfs fsck /" reports four HBase-related files 
 under hbase/data/WALs/ as CORRUPT. Even after letting the cluster sit idle 
 for a couple of hours, it is still in the corrupt state. If HBase is shut 
 down, the problem goes away. If HBase is then restarted, the problem recurs. 
 This was observed with Hadoop 2.7.1 and HBase 1.1.1, and did NOT happen with 
 Hadoop 2.4 and HBase 0.98.
 {code}
 hades1:/var/opt/teradata/packages # su hdfs
 hdfs@hades1:/var/opt/teradata/packages> hdfs fsck /
 Connecting to namenode via 
 http://hades1.labs.teradata.com:50070/fsck?ugi=hdfs&path=%2F
 FSCK started by hdfs (auth:SIMPLE) from /39.0.8.2 for path / at Wed Jun 24 
 20:40:17 GMT 2015
 ...
 /apps/hbase/data/WALs/hades4.labs.teradata.com,16020,1435168292684/hades4.labs.teradata.com%2C16020%2C1435168292684.default.1435175500556:
  MISSING 1 blocks of total size 83 B.
 /apps/hbase/data/WALs/hades5.labs.teradata.com,16020,1435168290466/hades5.labs.teradata.com%2C16020%2C1435168290466..meta.1435175562144.meta:
  MISSING 1 blocks of total size 83 B.
 /apps/hbase/data/WALs/hades5.labs.teradata.com,16020,1435168290466/hades5.labs.teradata.com%2C16020%2C1435168290466.default.1435175498500:
  MISSING 1 blocks of total size 83 B.
 /apps/hbase/data/WALs/hades6.labs.teradata.com,16020,1435168292373/hades6.labs.teradata.com%2C16020%2C1435168292373.default.1435175500301:
  MISSING 1 blocks of total size 83 
 B..
 
 
 Status:
  CORRUPT
  Total size:723977553 B (Total open files size: 332 B)
  Total dirs:79
  Total files:   388
  Total symlinks:0 (Files currently being written: 5)
  Total blocks (validated):  387 (avg. block size 1870743 B) (Total open 
 file blocks (not validated): 4)
   
   UNDER MIN REPL'D BLOCKS:  4 (1.0335917 %)
   dfs.namenode.replication.min: 1
   CORRUPT FILES:4
   MISSING BLOCKS:   4
   MISSING SIZE: 332 B
   
  Minimally replicated blocks:   387 (100.0 %)
  Over-replicated blocks:0 (0.0 %)
  Under-replicated blocks:   0 (0.0 %)
  Mis-replicated blocks: 0 (0.0 %)
  Default replication factor:3
  Average block replication: 3.0
  Corrupt blocks:0
  Missing replicas:  0 (0.0 %)
  Number of data-nodes:  3
  Number of racks:   1
 FSCK ended at Wed Jun 24 20:40:17 GMT 2015 in 7 milliseconds
 The filesystem under path '/' is CORRUPT
 hdfs@hades1:/var/opt/teradata/packages
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8891) HDFS concat should keep srcs order

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707318#comment-14707318
 ] 

Hudson commented on HDFS-8891:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #291 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/291/])
HDFS-8891. HDFS concat should keep srcs order. Contributed by Yong Zhang. 
(cdouglas: rev b0564c9f3c501bf7806f07649929038624dea10f)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 HDFS concat should keep srcs order
 --

 Key: HDFS-8891
 URL: https://issues.apache.org/jira/browse/HDFS-8891
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yong Zhang
Assignee: Yong Zhang
Priority: Blocker
 Fix For: 2.7.2

 Attachments: HDFS-8891.001.patch, HDFS-8891.002.patch


 FSDirConcatOp.verifySrcFiles may change the src files' order, but it should 
 keep the same order as the input.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8863) The remaining space check in BlockPlacementPolicyDefault is flawed

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707320#comment-14707320
 ] 

Hudson commented on HDFS-8863:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #291 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/291/])
HDFS-8863. The remaining space check in BlockPlacementPolicyDefault is flawed. 
(Kihwal Lee via yliu) (yliu: rev 5e8fe8943718309b5e39a794360aebccae28b331)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java


 The remaining space check in BlockPlacementPolicyDefault is flawed
 --

 Key: HDFS-8863
 URL: https://issues.apache.org/jira/browse/HDFS-8863
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Critical
  Labels: 2.6.1-candidate
 Fix For: 2.7.2

 Attachments: HDFS-8863.patch, HDFS-8863.v2.patch, HDFS-8863.v3.patch


 The block placement policy calls 
 {{DatanodeDescriptor#getRemaining(StorageType)}} to check whether the block 
 is going to fit. Since the method adds up the remaining space of all 
 storages, the namenode can allocate a new block on a full node. This causes 
 pipeline construction failure and {{abandonBlock}}. If the cluster is nearly 
 full, the client might hit this multiple times and the write can fail 
 permanently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8809) HDFS fsck reports under construction blocks as CORRUPT

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707345#comment-14707345
 ] 

Hudson commented on HDFS-8809:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1024 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1024/])
HDFS-8809. HDFS fsck reports under construction blocks as CORRUPT. Contributed 
by Jing Zhao. (jing9: rev c8bca62718203a1dad9b70d164bdf10cc71b40cd)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 HDFS fsck reports under construction blocks as CORRUPT
 

 Key: HDFS-8809
 URL: https://issues.apache.org/jira/browse/HDFS-8809
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 2.7.0
 Environment: Hadoop 2.7.1 and HBase 1.1.1, on SUSE11sp3 (other 
 Linuxes not tested, probably not platform-dependent).  This did NOT happen 
 with Hadoop 2.4 and HBase 0.98.
Reporter: Sudhir Prakash
Assignee: Jing Zhao
 Fix For: 2.8.0

 Attachments: HDFS-8809.000.patch


 Whenever HBase is running, "hdfs fsck /" reports four HBase-related files 
 under hbase/data/WALs/ as CORRUPT. Even after letting the cluster sit idle 
 for a couple of hours, it is still in the corrupt state. If HBase is shut 
 down, the problem goes away. If HBase is then restarted, the problem recurs. 
 This was observed with Hadoop 2.7.1 and HBase 1.1.1, and did NOT happen with 
 Hadoop 2.4 and HBase 0.98.
 {code}
 hades1:/var/opt/teradata/packages # su hdfs
 hdfs@hades1:/var/opt/teradata/packages> hdfs fsck /
 Connecting to namenode via 
 http://hades1.labs.teradata.com:50070/fsck?ugi=hdfs&path=%2F
 FSCK started by hdfs (auth:SIMPLE) from /39.0.8.2 for path / at Wed Jun 24 
 20:40:17 GMT 2015
 ...
 /apps/hbase/data/WALs/hades4.labs.teradata.com,16020,1435168292684/hades4.labs.teradata.com%2C16020%2C1435168292684.default.1435175500556:
  MISSING 1 blocks of total size 83 B.
 /apps/hbase/data/WALs/hades5.labs.teradata.com,16020,1435168290466/hades5.labs.teradata.com%2C16020%2C1435168290466..meta.1435175562144.meta:
  MISSING 1 blocks of total size 83 B.
 /apps/hbase/data/WALs/hades5.labs.teradata.com,16020,1435168290466/hades5.labs.teradata.com%2C16020%2C1435168290466.default.1435175498500:
  MISSING 1 blocks of total size 83 B.
 /apps/hbase/data/WALs/hades6.labs.teradata.com,16020,1435168292373/hades6.labs.teradata.com%2C16020%2C1435168292373.default.1435175500301:
  MISSING 1 blocks of total size 83 
 B..
 
 
 Status:
  CORRUPT
  Total size:723977553 B (Total open files size: 332 B)
  Total dirs:79
  Total files:   388
  Total symlinks:0 (Files currently being written: 5)
  Total blocks (validated):  387 (avg. block size 1870743 B) (Total open 
 file blocks (not validated): 4)
   
   UNDER MIN REPL'D BLOCKS:  4 (1.0335917 %)
   dfs.namenode.replication.min: 1
   CORRUPT FILES:4
   MISSING BLOCKS:   4
   MISSING SIZE: 332 B
   
  Minimally replicated blocks:   387 (100.0 %)
  Over-replicated blocks:0 (0.0 %)
  Under-replicated blocks:   0 (0.0 %)
  Mis-replicated blocks: 0 (0.0 %)
  Default replication factor:3
  Average block replication: 3.0
  Corrupt blocks:0
  Missing replicas:  0 (0.0 %)
  Number of data-nodes:  3
  Number of racks:   1
 FSCK ended at Wed Jun 24 20:40:17 GMT 2015 in 7 milliseconds
 The filesystem under path '/' is CORRUPT
 hdfs@hades1:/var/opt/teradata/packages
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8891) HDFS concat should keep srcs order

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707348#comment-14707348
 ] 

Hudson commented on HDFS-8891:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1024 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1024/])
HDFS-8891. HDFS concat should keep srcs order. Contributed by Yong Zhang. 
(cdouglas: rev b0564c9f3c501bf7806f07649929038624dea10f)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 HDFS concat should keep srcs order
 --

 Key: HDFS-8891
 URL: https://issues.apache.org/jira/browse/HDFS-8891
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yong Zhang
Assignee: Yong Zhang
Priority: Blocker
 Fix For: 2.7.2

 Attachments: HDFS-8891.001.patch, HDFS-8891.002.patch


 FSDirConcatOp.verifySrcFiles may change the src files' order, but it should 
 keep the same order as the input.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8922) Link the native_mini_dfs test library with libdl, since IBM Java requires it

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707346#comment-14707346
 ] 

Hudson commented on HDFS-8922:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1024 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1024/])
HDFS-8922. Link the native_mini_dfs test library with libdl, since IBM Java 
requires it (Ayappan via Colin P. McCabe) (cmccabe: rev 
7642f64c24961d2b4772591a0957e2699162a083)
* hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Link the native_mini_dfs test library with libdl, since IBM Java requires it
 

 Key: HDFS-8922
 URL: https://issues.apache.org/jira/browse/HDFS-8922
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Affects Versions: 2.7.1
 Environment: IBM Java RHEL7.1 
Reporter: Ayappan
Assignee: Ayappan
 Fix For: 2.8.0

 Attachments: HDFS-8922.patch


 Building hadoop-hdfs-project with the -Pnative option using IBM Java fails 
 with the following error:
 [exec] Linking C executable test_native_mini_dfs
  [exec] /usr/bin/cmake -E cmake_link_script 
 CMakeFiles/test_native_mini_dfs.dir/link.txt --verbose=1
  [exec] /usr/bin/cc   -g -Wall -O2 -D_REENTRANT -D_GNU_SOURCE 
 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -fvisibility=hidden
 CMakeFiles/test_native_mini_dfs.dir/main/native/libhdfs/test_native_mini_dfs.c.o
   -o test_native_mini_dfs -rdynamic libnative_mini_dfs.a 
 /home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic/libjvm.so -lpthread 
 -Wl,-rpath,/home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic
  [exec] make[2]: Leaving directory 
 `/home/ayappan/hadoop_2.7.1_new/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/native'
  [exec] make[1]: Leaving directory 
 `/home/ayappan/hadoop_2.7.1_new/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/native'
  [exec] 
 /home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic/libjvm.so: 
 undefined reference to `dlopen'
  [exec] 
 /home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic/libjvm.so: 
 undefined reference to `dlclose'
  [exec] 
 /home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic/libjvm.so: 
 undefined reference to `dlerror'
  [exec] 
 /home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic/libjvm.so: 
 undefined reference to `dlsym'
  [exec] 
 /home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic/libjvm.so: 
 undefined reference to `dladdr'
  [exec] collect2: error: ld returned 1 exit status
  [exec] make[2]: *** [test_native_mini_dfs] Error 1
  [exec] make[1]: *** [CMakeFiles/test_native_mini_dfs.dir/all] Error 2
  [exec] make: *** [all] Error 2
 It seems like the IBM jvm requires libdl for linking in native_mini_dfs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8828) Utilize Snapshot diff report to build diff copy list in distcp

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707347#comment-14707347
 ] 

Hudson commented on HDFS-8828:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1024 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1024/])
HDFS-8828. Utilize Snapshot diff report to build diff copy list in distcp. 
(Yufei Gu via Yongjun Zhang) (yzhang: rev 
0bc15cb6e60dc60885234e01dec1c7cb4557a926)
* 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestOptionsParser.java
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java
* 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpSync.java
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
* hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DiffInfo.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyListing.java
* hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java


 Utilize Snapshot diff report to build diff copy list in distcp
 --

 Key: HDFS-8828
 URL: https://issues.apache.org/jira/browse/HDFS-8828
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: distcp, snapshots
Reporter: Yufei Gu
Assignee: Yufei Gu
 Fix For: 2.8.0

 Attachments: HDFS-8828.001.patch, HDFS-8828.002.patch, 
 HDFS-8828.003.patch, HDFS-8828.004.patch, HDFS-8828.005.patch, 
 HDFS-8828.006.patch, HDFS-8828.007.patch, HDFS-8828.008.patch, 
 HDFS-8828.009.patch, HDFS-8828.010.patch, HDFS-8828.011.patch


 Some users reported a huge time cost to build the file copy list in distcp 
 (30 hours for 1.6M files). We can leverage the snapshot diff report to build 
 a copy list containing only the files/dirs that changed between two 
 snapshots (or a snapshot and a normal dir). This speeds up the process in 
 two ways: 1. less copy-list building time. 2. fewer file copy MR jobs.
 The HDFS snapshot diff report provides information about file/directory 
 creation, deletion, rename and modification between two snapshots, or 
 between a snapshot and a normal directory. HDFS-7535 synchronizes deletion 
 and rename, then falls back to the default distcp, so it still relies on the 
 default distcp to build a complete list of files under the source dir. This 
 patch puts only created and modified files into the copy list based on the 
 snapshot diff report, minimizing the number of files to copy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8863) The remaining space check in BlockPlacementPolicyDefault is flawed

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707350#comment-14707350
 ] 

Hudson commented on HDFS-8863:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1024 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1024/])
HDFS-8863. The remaining space check in BlockPlacementPolicyDefault is flawed. 
(Kihwal Lee via yliu) (yliu: rev 5e8fe8943718309b5e39a794360aebccae28b331)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java


 The remaining space check in BlockPlacementPolicyDefault is flawed
 --

 Key: HDFS-8863
 URL: https://issues.apache.org/jira/browse/HDFS-8863
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Critical
  Labels: 2.6.1-candidate
 Fix For: 2.7.2

 Attachments: HDFS-8863.patch, HDFS-8863.v2.patch, HDFS-8863.v3.patch


 The block placement policy calls 
 {{DatanodeDescriptor#getRemaining(StorageType)}} to check whether the block 
 is going to fit. Since the method adds up the remaining space of all 
 storages, the namenode can allocate a new block on a full node. This causes 
 pipeline construction failure and {{abandonBlock}}. If the cluster is nearly 
 full, the client might hit this multiple times and the write can fail 
 permanently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8884) Fail-fast check in BlockPlacementPolicyDefault#chooseTarget

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707352#comment-14707352
 ] 

Hudson commented on HDFS-8884:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1024 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1024/])
HDFS-8884. Fail-fast check in BlockPlacementPolicyDefault#chooseTarget. (yliu) 
(yliu: rev 80a29906bcd718bbba223fa099e523281d9f3369)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDefaultBlockPlacementPolicy.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyWithNodeGroup.java


 Fail-fast check in BlockPlacementPolicyDefault#chooseTarget
 ---

 Key: HDFS-8884
 URL: https://issues.apache.org/jira/browse/HDFS-8884
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 2.8.0

 Attachments: HDFS-8884.001.patch, HDFS-8884.002.patch


 In the current BlockPlacementPolicyDefault, when choosing a datanode storage 
 to place a block, we have the following logic:
 {code}
 final DatanodeStorageInfo[] storages = DFSUtil.shuffle(
     chosenNode.getStorageInfos());
 int i = 0;
 boolean search = true;
 for (Iterator<Map.Entry<StorageType, Integer>> iter = storageTypes
     .entrySet().iterator(); search && iter.hasNext(); ) {
   Map.Entry<StorageType, Integer> entry = iter.next();
   for (i = 0; i < storages.length; i++) {
     StorageType type = entry.getKey();
     final int newExcludedNodes = addIfIsGoodTarget(storages[i],
 {code}
 We iterate (across two {{for}} loops, although the counts are usually small) 
 over all storages of the candidate datanode even when the datanode itself is 
 not a good target (e.g. decommissioned, stale, too busy), since currently we 
 do all the checks in {{addIfIsGoodTarget}}.
 We can fail fast: check the datanode-related conditions first; if the 
 datanode is not good, there is no need to shuffle and iterate the storages. 
 That is more efficient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8900) Compact XAttrs to optimize memory footprint.

2015-08-21 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707418#comment-14707418
 ] 

Andrew Wang commented on HDFS-8900:
---

Latest rev LGTM, +1 pending. I'm okay with this in branch-2 as discussed 
above; we can remove the max size config in a trunk patch.

[~cmccabe] cool with you too?

 Compact XAttrs to optimize memory footprint.
 

 Key: HDFS-8900
 URL: https://issues.apache.org/jira/browse/HDFS-8900
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-8900.001.patch, HDFS-8900.002.patch


 {code}
 private final ImmutableList<XAttr> xAttrs;
 {code}
 Currently we use the above in XAttrFeature. It's not efficient from a memory 
 point of view, since {{ImmutableList}} and {{XAttr}} have per-object memory 
 overhead, and each object has memory alignment. 
 We can use a {{byte[]}} in XAttrFeature and do some compaction in {{XAttr}}.
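 A hedged sketch of the compaction idea (the byte layout below is 
 illustrative, not the committed encoding):
 {code}
 // Hedged sketch: pack all XAttrs into one byte[] laid out as
 // [nsOrdinal][nameLen][name][valueLen][value]..., so the feature holds a
 // single array instead of per-XAttr objects.
 import java.io.ByteArrayOutputStream;
 import java.io.DataOutputStream;
 import java.io.IOException;
 import java.nio.charset.StandardCharsets;
 import java.util.List;
 import org.apache.hadoop.fs.XAttr;

 static byte[] pack(List<XAttr> xAttrs) throws IOException {
   ByteArrayOutputStream bos = new ByteArrayOutputStream();
   DataOutputStream out = new DataOutputStream(bos);
   for (XAttr a : xAttrs) {
     out.writeByte(a.getNameSpace().ordinal());
     byte[] name = a.getName().getBytes(StandardCharsets.UTF_8);
     out.writeShort(name.length);
     out.write(name);
     byte[] value = (a.getValue() == null) ? new byte[0] : a.getValue();
     out.writeShort(value.length);
     out.write(value);
   }
   return bos.toByteArray();
 }
 {code}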



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8486) DN startup may cause severe data loss

2015-08-21 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707163#comment-14707163
 ] 

Chris Nauroth commented on HDFS-8486:
-

+1 for the addendum patch.  Thank you, Arpit.

 DN startup may cause severe data loss
 -

 Key: HDFS-8486
 URL: https://issues.apache.org/jira/browse/HDFS-8486
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 0.23.1, 2.0.0-alpha
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Blocker
  Labels: 2.6.1-candidate
 Fix For: 2.6.1, 2.7.1

 Attachments: HDFS-8486-branch-2.6.02.patch, 
 HDFS-8486-branch-2.6.addendum.patch, HDFS-8486-branch-2.6.patch, 
 HDFS-8486.patch, HDFS-8486.patch


 A race condition between block pool initialization and the directory scanner 
 may cause a mass deletion of blocks in multiple storages.
 If block pool initialization finds a block on disk that is already in the 
 replica map, it deletes one of the two blocks based on size, GS, etc. 
 Unfortunately it _always_ deletes one of the blocks even if the two are 
 identical, so the replica map _must_ be empty when the pool is initialized.
 The directory scanner starts at a random time within its periodic interval 
 (default 6h). If the scanner starts very early, it races to populate the 
 replica map, causing the block pool init to erroneously delete blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8846) Create edit log files with old layout version for upgrade testing

2015-08-21 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707164#comment-14707164
 ] 

Colin Patrick McCabe commented on HDFS-8846:


Looks good overall!  Can we test reading a few different events rather than 
just the CreateEvent?  And perhaps also test reading by transaction ID.  (i.e. 
if we poll a transaction ID that's too high, we get nothing... if we look for 
one that exists in the old edit logs, we start right there).  +1 once that's 
addressed

 Create edit log files with old layout version for upgrade testing
 -

 Key: HDFS-8846
 URL: https://issues.apache.org/jira/browse/HDFS-8846
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.1
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8846.00.patch, HDFS-8846.01.patch, 
 HDFS-8846.02.patch


 Per discussion under HDFS-8480, we should create some edit log files with an 
 old layout version, to test whether they are correctly handled during 
 upgrades.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8865) Improve quota initialization performance

2015-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707281#comment-14707281
 ] 

Hadoop QA commented on HDFS-8865:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m 50s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   9m 22s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 41s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 27s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 42s | The applied patch generated  1 
new checkstyle issues (total was 471, now 469). |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 55s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 46s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 15s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 42s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  80m 10s | Tests failed in hadoop-hdfs. |
| | | 131m 57s | |
\\
\\
|| Reason || Tests ||
| Timed out tests | org.apache.hadoop.hdfs.TestEncryptedTransfer |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12751597/HDFS-8865.v3.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 22de7c1 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12067/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12067/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12067/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf908.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12067/console |


This message was automatically generated.

 Improve quota initialization performance
 

 Key: HDFS-8865
 URL: https://issues.apache.org/jira/browse/HDFS-8865
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: HDFS-8865.patch, HDFS-8865.v2.checkstyle.patch, 
 HDFS-8865.v2.patch, HDFS-8865.v3.patch


 After replaying edits, the whole file system tree is recursively scanned in 
 order to initialize the quota. For a big namespace, this can take a very 
 long time. Since this is done during namenode failover, it also affects 
 failover latency.
 By using the Fork-Join framework, I was able to greatly reduce the 
 initialization time. The following is the test result using the fsimage from 
 one of the big namenodes we have.
 || threads || seconds||
 | 1 (existing) | 55|
 | 1 (fork-join) | 68 |
 | 4 | 16 |
 | 8 | 8 |
 | 12 | 6 |
 | 16 | 5 |
 | 20 | 4 |
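 A hedged sketch of the fork-join traversal pattern described above (over a 
 generic tree rather than the actual INode API):
 {code}
 // Hedged sketch: fork one task per subtree, then join the usage sums.
 import java.util.ArrayList;
 import java.util.List;
 import java.util.concurrent.ForkJoinPool;
 import java.util.concurrent.RecursiveTask;

 class Node {
   long size;                               // space charged to this node
   List<Node> children = new ArrayList<>();
 }

 class UsageTask extends RecursiveTask<Long> {
   private final Node node;
   UsageTask(Node node) { this.node = node; }

   @Override
   protected Long compute() {
     long total = node.size;
     List<UsageTask> forked = new ArrayList<>();
     for (Node child : node.children) {
       UsageTask t = new UsageTask(child);
       t.fork();                            // scan the subtree in parallel
       forked.add(t);
     }
     for (UsageTask t : forked) {
       total += t.join();
     }
     return total;
   }
 }
 // usage: long used = new ForkJoinPool(16).invoke(new UsageTask(root));
 {code}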



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8932) NPE thrown in NameNode when try to get TotalSyncCount metric before editLogStream initialization

2015-08-21 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707208#comment-14707208
 ] 

Surendra Singh Lilhore commented on HDFS-8932:
--

Thanks [~anu] for the review.

 NPE thrown in NameNode when try to get TotalSyncCount metric before 
 editLogStream initialization
 --

 Key: HDFS-8932
 URL: https://issues.apache.org/jira/browse/HDFS-8932
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Surendra Singh Lilhore
Assignee: Surendra Singh Lilhore
 Attachments: HDFS-8932.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8486) DN startup may cause severe data loss

2015-08-21 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707204#comment-14707204
 ] 

Arpit Agarwal commented on HDFS-8486:
-

Thanks Chris, pushed to branch-2.6.

 DN startup may cause severe data loss
 -

 Key: HDFS-8486
 URL: https://issues.apache.org/jira/browse/HDFS-8486
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 0.23.1, 2.0.0-alpha
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Blocker
  Labels: 2.6.1-candidate
 Fix For: 2.6.1, 2.7.1

 Attachments: HDFS-8486-branch-2.6.02.patch, 
 HDFS-8486-branch-2.6.addendum.patch, HDFS-8486-branch-2.6.patch, 
 HDFS-8486.patch, HDFS-8486.patch


 A race condition between block pool initialization and the directory scanner 
 may cause a mass deletion of blocks in multiple storages.
 If block pool initialization finds a block on disk that is already in the 
 replica map, it deletes one of the two blocks based on size, GS, etc. 
 Unfortunately it _always_ deletes one of the blocks even if the two are 
 identical, so the replica map _must_ be empty when the pool is initialized.
 The directory scanner starts at a random time within its periodic interval 
 (default 6h). If the scanner starts very early, it races to populate the 
 replica map, causing the block pool init to erroneously delete blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8344) NameNode doesn't recover lease for files with missing blocks

2015-08-21 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707223#comment-14707223
 ] 

Ravi Prakash commented on HDFS-8344:


Hi Haohui!
Could you please review the patch?



 NameNode doesn't recover lease for files with missing blocks
 

 Key: HDFS-8344
 URL: https://issues.apache.org/jira/browse/HDFS-8344
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.0
Reporter: Ravi Prakash
Assignee: Ravi Prakash
 Fix For: 2.8.0

 Attachments: HDFS-8344.01.patch, HDFS-8344.02.patch, 
 HDFS-8344.03.patch, HDFS-8344.04.patch, HDFS-8344.05.patch, 
 HDFS-8344.06.patch, HDFS-8344.07.patch, HDFS-8344.08.patch, HDFS-8344.09.patch


 I found another\(?) instance in which the lease is not recovered. This is 
 easily reproducible on a pseudo-distributed single-node cluster.
 # Before you start, it helps if you set the following. This is not necessary, 
 but it simply reduces how long you have to wait.
 {code}
   public static final long LEASE_SOFTLIMIT_PERIOD = 30 * 1000;
   public static final long LEASE_HARDLIMIT_PERIOD = 2 * LEASE_SOFTLIMIT_PERIOD;
 {code}
 # Client starts to write a file. (It could be less than 1 block, but it calls 
 hflush() so some of the data has landed on the datanodes.) (A minimal sketch 
 of the client code I am using appears below. I generate a jar and run it 
 using $ hadoop jar TestHadoop.jar.)
 # Client crashes. (I simulate this by kill -9 on the $(hadoop jar 
 TestHadoop.jar) process after it has printed "Wrote to the bufferedWriter".)
 # Shoot the datanode. (Since I ran on a pseudo-distributed cluster, there was 
 only 1.)
 I believe the lease should be recovered and the block should be marked 
 missing. However this is not happening: the lease is never recovered.
 The effect of this bug for us was that nodes could not be decommissioned 
 cleanly. Although we knew that the client had crashed, the Namenode never 
 released the leases (even after restarting the Namenode, even months 
 afterwards). There are actually several other cases too where we don't 
 consider what happens if ALL the datanodes die while the file is being 
 written, but I am going to punt on that for another time.
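
 Since the digest does not include the client code itself, here is a minimal 
 sketch of what such a client looks like (the file path and data are 
 hypothetical): write a little data, hflush() it so it reaches the datanodes, 
 then hang so the process can be killed with kill -9 while the lease is still 
 held.
 {code}
 import java.io.BufferedWriter;
 import java.io.OutputStreamWriter;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;

 public class TestHadoop {
   public static void main(String[] args) throws Exception {
     FileSystem fs = FileSystem.get(new Configuration());
     FSDataOutputStream out = fs.create(new Path("/tmp/lease-test"));
     BufferedWriter writer = new BufferedWriter(new OutputStreamWriter(out));
     writer.write("some data");
     writer.flush();                 // drain the buffered writer into the stream
     out.hflush();                   // push the data to the datanode pipeline
     System.out.println("Wrote to the bufferedWriter");
     Thread.sleep(Long.MAX_VALUE);   // keep the file open until kill -9
   }
 }
 {code}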



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8932) NPE thrown in NameNode when try to get TotalSyncCount metric before editLogStream initialization

2015-08-21 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707040#comment-14707040
 ] 

Anu Engineer commented on HDFS-8932:


+1, thanks for catching this and fixing it quickly.

 NPE thrown in NameNode when try to get TotalSyncCount metric before 
 editLogStream initialization
 --

 Key: HDFS-8932
 URL: https://issues.apache.org/jira/browse/HDFS-8932
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Surendra Singh Lilhore
Assignee: Surendra Singh Lilhore
 Attachments: HDFS-8932.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6697) Make NN lease soft and hard limits configurable

2015-08-21 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707225#comment-14707225
 ] 

Ravi Prakash commented on HDFS-6697:


And just to clarify [~wheat9]'s concern 
https://issues.apache.org/jira/browse/HDFS-8344?focusedCommentId=14634172&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14634172
bq. I'm concerned with the complexity associated with the commit as well as the 
difficulty for the users to choose the right configuration. It's an internal 
implementation detail and it should not be exposed to users whenever it's 
possible. We intentionally keep the soft and hard limit not configurable to 
avoid the users shooting their foot.

That could be said for a lot of existing configuration, so I am not sure I buy 
the argument. But I just thought I'd chime in here to help us all make an 
informed decision.

 Make NN lease soft and hard limits configurable
 ---

 Key: HDFS-6697
 URL: https://issues.apache.org/jira/browse/HDFS-6697
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma
Assignee: J.Andreina
 Attachments: HDFS-6697.1.patch, HDFS-6697.2.patch, HDFS-6697.3.patch


 For testing, NameNodeAdapter allows test code to specify the lease soft and 
 hard limits via setLeasePeriod directly on the LeaseManager (a minimal sketch 
 of that hook follows). But NamenodeProxies.java still uses the default values.
  
 It would be useful to make the NN lease soft and hard limits configurable via 
 Configuration. That would allow NamenodeProxies.java to use the configured 
 values.
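
 For reference, a minimal sketch of the existing test-only hook, assuming the 
 usual MiniDFSCluster setup and the hadoop-hdfs test jars on the classpath; it 
 shortens both limits so a test can observe lease expiry without waiting out 
 the default soft and hard limits.
 {code}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdfs.MiniDFSCluster;
 import org.apache.hadoop.hdfs.server.namenode.NameNodeAdapter;

 public class ShortLeaseExample {
   public static void main(String[] args) throws Exception {
     MiniDFSCluster cluster =
         new MiniDFSCluster.Builder(new Configuration()).build();
     try {
       // soft limit 1s, hard limit 2s, instead of the hard-coded defaults
       NameNodeAdapter.setLeasePeriod(cluster.getNamesystem(), 1000, 2000);
       // ... open a file, crash the writer, wait for lease expiry ...
     } finally {
       cluster.shutdown();
     }
   }
 }
 {code}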



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6939) Support path-based filtering of inotify events

2015-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707259#comment-14707259
 ] 

Hadoop QA commented on HDFS-6939:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  19m 15s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 52s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 54s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 15s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 28s | The applied patch generated  1 
new checkstyle issues (total was 3, now 4). |
| {color:red}-1{color} | whitespace |   0m  5s | The patch has 13  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 37s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 37s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 10s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  39m 59s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 27s | Tests passed in 
hadoop-hdfs-client. |
| | |  90m 17s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.TestSecondaryNameNodeUpgrade |
|   | hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality |
|   | hadoop.hdfs.TestHttpPolicy |
|   | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
|   | hadoop.hdfs.server.datanode.TestStorageReport |
|   | hadoop.hdfs.server.namenode.TestGenericJournalConf |
|   | hadoop.cli.TestCryptoAdminCLI |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles |
|   | hadoop.hdfs.qjournal.client.TestQuorumJournalManager |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.TestSetrepIncreasing |
|   | hadoop.hdfs.server.namenode.TestSecondaryWebUi |
|   | hadoop.hdfs.TestFetchImage |
|   | hadoop.hdfs.server.namenode.ha.TestHAStateTransitions |
|   | hadoop.hdfs.server.namenode.TestGetBlockLocations |
|   | hadoop.hdfs.server.datanode.TestReadOnlySharedStorage |
|   | hadoop.hdfs.server.namenode.snapshot.TestUpdatePipelineWithSnapshots |
|   | hadoop.hdfs.server.namenode.TestMetadataVersionOutput |
|   | 
hadoop.hdfs.server.namenode.snapshot.TestSnapshotNameWithInvalidCharacters |
|   | hadoop.hdfs.server.namenode.TestNameNodeResourceChecker |
|   | hadoop.hdfs.TestMiniDFSCluster |
|   | hadoop.hdfs.TestDFSClientRetries |
|   | hadoop.hdfs.TestEncryptionZonesWithKMS |
|   | hadoop.hdfs.TestReplication |
|   | hadoop.hdfs.server.namenode.TestCheckpoint |
|   | hadoop.hdfs.TestDFSOutputStream |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica |
|   | hadoop.hdfs.TestDFSConfigKeys |
|   | hadoop.hdfs.TestClientBlockVerification |
|   | hadoop.hdfs.TestDFSRollback |
|   | hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages |
|   | hadoop.hdfs.server.namenode.snapshot.TestXAttrWithSnapshot |
|   | hadoop.hdfs.server.namenode.TestCreateEditsLog |
|   | hadoop.tools.TestJMXGet |
|   | hadoop.hdfs.server.namenode.ha.TestGetGroupsWithHA |
|   | hadoop.hdfs.TestParallelRead |
|   | hadoop.hdfs.TestHDFSTrash |
|   | hadoop.hdfs.tools.TestDelegationTokenFetcher |
|   | hadoop.hdfs.server.blockmanagement.TestBlockReportRateLimiting |
|   | hadoop.hdfs.server.namenode.TestQuotaByStorageType |
|   | hadoop.hdfs.server.namenode.TestBlockUnderConstruction |
|   | hadoop.hdfs.server.datanode.TestIncrementalBlockReports |
|   | hadoop.hdfs.TestWriteRead |
|   | hadoop.hdfs.TestFileAppend4 |
|   | hadoop.hdfs.TestSnapshotCommands |
|   | hadoop.hdfs.TestBlocksScheduledCounter |
|   | hadoop.hdfs.TestDFSUtil |
|   | hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.crypto.TestHdfsCryptoStreams |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.TestFSOutputSummer |
|   | hadoop.hdfs.server.namenode.TestDeleteRace |
|   | hadoop.hdfs.server.namenode.TestAddBlockRetry |
|   | hadoop.hdfs.server.namenode.TestCheckPointForSecurityTokens |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotFileLength |
|   | hadoop.hdfs.server.namenode.TestListCorruptFileBlocks |
|   | hadoop.hdfs.server.datanode.TestDnRespectsBlockReportSplitThreshold |
|   | hadoop.hdfs.server.namenode.ha.TestStandbyIsHot |
|   | 

[jira] [Commented] (HDFS-8828) Utilize Snapshot diff report to build diff copy list in distcp

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707587#comment-14707587
 ] 

Hudson commented on HDFS-8828:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #283 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/283/])
HDFS-8828. Utilize Snapshot diff report to build diff copy list in distcp. 
(Yufei Gu via Yongjun Zhang) (yzhang: rev 
0bc15cb6e60dc60885234e01dec1c7cb4557a926)
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java
* 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestOptionsParser.java
* hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DiffInfo.java
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpSync.java
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java
* hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyListing.java


 Utilize Snapshot diff report to build diff copy list in distcp
 --

 Key: HDFS-8828
 URL: https://issues.apache.org/jira/browse/HDFS-8828
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: distcp, snapshots
Reporter: Yufei Gu
Assignee: Yufei Gu
 Fix For: 2.8.0

 Attachments: HDFS-8828.001.patch, HDFS-8828.002.patch, 
 HDFS-8828.003.patch, HDFS-8828.004.patch, HDFS-8828.005.patch, 
 HDFS-8828.006.patch, HDFS-8828.007.patch, HDFS-8828.008.patch, 
 HDFS-8828.009.patch, HDFS-8828.010.patch, HDFS-8828.011.patch


 Some users reported a huge time cost to build the file copy list in distcp 
 (30 hours for 1.6M files). We can leverage the snapshot diff report to build 
 a copy list containing only the files/dirs which changed between two 
 snapshots (or a snapshot and a normal dir). It speeds up the process in two 
 ways: 1. less copy-list building time; 2. fewer file-copy MR jobs.
 The HDFS snapshot diff report provides information about file/directory 
 creation, deletion, rename and modification between two snapshots, or between 
 a snapshot and a normal directory. HDFS-7535 synchronizes deletion and 
 rename, then falls back to the default distcp, so it still relies on the 
 default distcp to build the complete list of files under the source dir. This 
 patch puts only created and modified files into the copy list, based on the 
 snapshot diff report, so we can minimize the number of files to copy.
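
 For context, a minimal sketch (the path and snapshot names are hypothetical, 
 and fs.defaultFS is assumed to point at HDFS) of reading the snapshot diff 
 report that the copy list is derived from:
 {code}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport;

 public class DiffReportExample {
   public static void main(String[] args) throws Exception {
     DistributedFileSystem dfs =
         (DistributedFileSystem) FileSystem.get(new Configuration());
     SnapshotDiffReport report =
         dfs.getSnapshotDiffReport(new Path("/src"), "s1", "s2");
     for (SnapshotDiffReport.DiffReportEntry entry : report.getDiffList()) {
       System.out.println(entry);  // CREATE / MODIFY / RENAME / DELETE entries
     }
   }
 }
 {code}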



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8884) Fail-fast check in BlockPlacementPolicyDefault#chooseTarget

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707592#comment-14707592
 ] 

Hudson commented on HDFS-8884:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #283 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/283/])
HDFS-8884. Fail-fast check in BlockPlacementPolicyDefault#chooseTarget. (yliu) 
(yliu: rev 80a29906bcd718bbba223fa099e523281d9f3369)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyWithNodeGroup.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDefaultBlockPlacementPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java


 Fail-fast check in BlockPlacementPolicyDefault#chooseTarget
 ---

 Key: HDFS-8884
 URL: https://issues.apache.org/jira/browse/HDFS-8884
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 2.8.0

 Attachments: HDFS-8884.001.patch, HDFS-8884.002.patch


 In the current BlockPlacementPolicyDefault, when choosing a datanode storage 
 to place a block, we have the following logic:
 {code}
 final DatanodeStorageInfo[] storages = DFSUtil.shuffle(
     chosenNode.getStorageInfos());
 int i = 0;
 boolean search = true;
 for (Iterator<Map.Entry<StorageType, Integer>> iter = storageTypes
     .entrySet().iterator(); search && iter.hasNext(); ) {
   Map.Entry<StorageType, Integer> entry = iter.next();
   for (i = 0; i < storages.length; i++) {
     StorageType type = entry.getKey();
     final int newExcludedNodes = addIfIsGoodTarget(storages[i],
 {code}
 We will iterate (actually two {{for}} loops, although their iteration counts 
 are usually small) over all storages of the candidate datanode even when the 
 datanode itself is not good (e.g. decommissioned, stale, too busy..), since 
 currently we do all the checks in {{addIfIsGoodTarget}}.
 We can fail fast: check the datanode-related conditions first; if the 
 datanode is not good, there is no need to shuffle and iterate its storages. 
 That is more efficient.
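
 A minimal, self-contained sketch of the fail-fast pattern described above 
 (Node and Storage are hypothetical stand-ins, not the HDFS classes):
 {code}
 import java.util.Arrays;
 import java.util.List;

 public class FailFastChoose {
   static class Storage { boolean good; Storage(boolean g) { good = g; } }
   static class Node {
     boolean decommissioned, stale, busy;
     List<Storage> storages =
         Arrays.asList(new Storage(false), new Storage(true));
   }

   static Storage chooseStorage(Node node) {
     // Fail fast: node-level checks first, so a bad node costs O(1)
     // instead of a shuffle plus a scan of all of its storages.
     if (node.decommissioned || node.stale || node.busy) {
       return null;
     }
     for (Storage s : node.storages) {
       if (s.good) {
         return s;        // storage-level checks run only for good nodes
       }
     }
     return null;
   }

   public static void main(String[] args) {
     System.out.println(chooseStorage(new Node()) != null
         ? "placed" : "rejected");
   }
 }
 {code}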



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8891) HDFS concat should keep srcs order

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707588#comment-14707588
 ] 

Hudson commented on HDFS-8891:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #283 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/283/])
HDFS-8891. HDFS concat should keep srcs order. Contributed by Yong Zhang. 
(cdouglas: rev b0564c9f3c501bf7806f07649929038624dea10f)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 HDFS concat should keep srcs order
 --

 Key: HDFS-8891
 URL: https://issues.apache.org/jira/browse/HDFS-8891
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yong Zhang
Assignee: Yong Zhang
Priority: Blocker
 Fix For: 2.7.2

 Attachments: HDFS-8891.001.patch, HDFS-8891.002.patch


 FSDirConcatOp.verifySrcFiles may change the order of the src files, but it 
 should keep their order as given in the input.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8922) Link the native_mini_dfs test library with libdl, since IBM Java requires it

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707586#comment-14707586
 ] 

Hudson commented on HDFS-8922:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #283 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/283/])
HDFS-8922. Link the native_mini_dfs test library with libdl, since IBM Java 
requires it (Ayappan via Colin P. McCabe) (cmccabe: rev 
7642f64c24961d2b4772591a0957e2699162a083)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt


 Link the native_mini_dfs test library with libdl, since IBM Java requires it
 

 Key: HDFS-8922
 URL: https://issues.apache.org/jira/browse/HDFS-8922
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Affects Versions: 2.7.1
 Environment: IBM Java RHEL7.1 
Reporter: Ayappan
Assignee: Ayappan
 Fix For: 2.8.0

 Attachments: HDFS-8922.patch


 Building hadoop-hdfs-project with the -Pnative option using IBM Java fails 
 with the following error:
 [exec] Linking C executable test_native_mini_dfs
  [exec] /usr/bin/cmake -E cmake_link_script 
 CMakeFiles/test_native_mini_dfs.dir/link.txt --verbose=1
  [exec] /usr/bin/cc   -g -Wall -O2 -D_REENTRANT -D_GNU_SOURCE 
 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -fvisibility=hidden
 CMakeFiles/test_native_mini_dfs.dir/main/native/libhdfs/test_native_mini_dfs.c.o
   -o test_native_mini_dfs -rdynamic libnative_mini_dfs.a 
 /home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic/libjvm.so -lpthread 
 -Wl,-rpath,/home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic
  [exec] make[2]: Leaving directory 
 `/home/ayappan/hadoop_2.7.1_new/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/native'
  [exec] make[1]: Leaving directory 
 `/home/ayappan/hadoop_2.7.1_new/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/native'
  [exec] 
 /home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic/libjvm.so: 
 undefined reference to `dlopen'
  [exec] 
 /home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic/libjvm.so: 
 undefined reference to `dlclose'
  [exec] 
 /home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic/libjvm.so: 
 undefined reference to `dlerror'
  [exec] 
 /home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic/libjvm.so: 
 undefined reference to `dlsym'
  [exec] 
 /home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic/libjvm.so: 
 undefined reference to `dladdr'
  [exec] collect2: error: ld returned 1 exit status
  [exec] make[2]: *** [test_native_mini_dfs] Error 1
  [exec] make[1]: *** [CMakeFiles/test_native_mini_dfs.dir/all] Error 2
  [exec] make: *** [all] Error 2
 It seems the IBM JVM requires libdl when linking native_mini_dfs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8863) The remaining space check in BlockPlacementPolicyDefault is flawed

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707590#comment-14707590
 ] 

Hudson commented on HDFS-8863:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #283 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/283/])
HDFS-8863. The remaining space check in BlockPlacementPolicyDefault is flawed. 
(Kihwal Lee via yliu) (yliu: rev 5e8fe8943718309b5e39a794360aebccae28b331)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java


 The remaining space check in BlockPlacementPolicyDefault is flawed
 --

 Key: HDFS-8863
 URL: https://issues.apache.org/jira/browse/HDFS-8863
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Critical
  Labels: 2.6.1-candidate
 Fix For: 2.7.2

 Attachments: HDFS-8863.patch, HDFS-8863.v2.patch, HDFS-8863.v3.patch


 The block placement policy calls 
 {{DatanodeDescriptor#getRemaining(StorageType)}} to check whether the block 
 is going to fit. Since the method adds up the remaining space across all 
 storages, the namenode can allocate a new block on a full node. This causes 
 pipeline construction failure and {{abandonBlock}}. If the cluster is nearly 
 full, the client might hit this multiple times and the write can fail 
 permanently.
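
 A minimal, self-contained sketch of the flaw (the sizes are hypothetical): 
 summing the remaining space across storages says the block fits, while no 
 single storage can actually hold it; a per-storage check avoids placing the 
 block there.
 {code}
 public class RemainingCheck {
   // three storages with 64MB free each
   static long[] remainingPerStorage = {64L << 20, 64L << 20, 64L << 20};

   static boolean fitsBySum(long blockSize) {
     long sum = 0;
     for (long r : remainingPerStorage) sum += r;
     return sum >= blockSize;            // flawed: 192MB total "fits" 128MB
   }

   static boolean fitsPerStorage(long blockSize) {
     for (long r : remainingPerStorage) {
       if (r >= blockSize) return true;  // one storage must hold the block
     }
     return false;
   }

   public static void main(String[] args) {
     long block = 128L << 20;                      // a 128MB block
     System.out.println(fitsBySum(block));         // true  (misleading)
     System.out.println(fitsPerStorage(block));    // false (correct)
   }
 }
 {code}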



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8809) HDFS fsck reports under construction blocks as CORRUPT

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707585#comment-14707585
 ] 

Hudson commented on HDFS-8809:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #283 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/283/])
HDFS-8809. HDFS fsck reports under construction blocks as CORRUPT. Contributed 
by Jing Zhao. (jing9: rev c8bca62718203a1dad9b70d164bdf10cc71b40cd)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java


 HDFS fsck reports under construction blocks as CORRUPT
 

 Key: HDFS-8809
 URL: https://issues.apache.org/jira/browse/HDFS-8809
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 2.7.0
 Environment: Hadoop 2.7.1 and HBase 1.1.1, on SUSE11sp3 (other 
 Linuxes not tested, probably not platform-dependent).  This did NOT happen 
 with Hadoop 2.4 and HBase 0.98.
Reporter: Sudhir Prakash
Assignee: Jing Zhao
 Fix For: 2.8.0

 Attachments: HDFS-8809.000.patch


 Whenever HBase is running, "hdfs fsck /" reports four HBase-related files in 
 the path hbase/data/WALs/ as CORRUPT. Even after letting the cluster sit idle 
 for a couple of hours, it is still in the corrupt state. If HBase is shut 
 down, the problem goes away; if HBase is then restarted, the problem recurs. 
 This was observed with Hadoop 2.7.1 and HBase 1.1.1, and did NOT happen with 
 Hadoop 2.4 and HBase 0.98.
 {code}
 hades1:/var/opt/teradata/packages # su hdfs
 hdfs@hades1:/var/opt/teradata/packages> hdfs fsck /
 Connecting to namenode via 
 http://hades1.labs.teradata.com:50070/fsck?ugi=hdfs&path=%2F
 FSCK started by hdfs (auth:SIMPLE) from /39.0.8.2 for path / at Wed Jun 24 
 20:40:17 GMT 2015
 ...
 /apps/hbase/data/WALs/hades4.labs.teradata.com,16020,1435168292684/hades4.labs.teradata.com%2C16020%2C1435168292684.default.1435175500556:
  MISSING 1 blocks of total size 83 B.
 /apps/hbase/data/WALs/hades5.labs.teradata.com,16020,1435168290466/hades5.labs.teradata.com%2C16020%2C1435168290466..meta.1435175562144.meta:
  MISSING 1 blocks of total size 83 B.
 /apps/hbase/data/WALs/hades5.labs.teradata.com,16020,1435168290466/hades5.labs.teradata.com%2C16020%2C1435168290466.default.1435175498500:
  MISSING 1 blocks of total size 83 B.
 /apps/hbase/data/WALs/hades6.labs.teradata.com,16020,1435168292373/hades6.labs.teradata.com%2C16020%2C1435168292373.default.1435175500301:
  MISSING 1 blocks of total size 83 
 B..
 
 
 Status:
  CORRUPT
  Total size:723977553 B (Total open files size: 332 B)
  Total dirs:79
  Total files:   388
  Total symlinks:0 (Files currently being written: 5)
  Total blocks (validated):  387 (avg. block size 1870743 B) (Total open 
 file blocks (not validated): 4)
   
   UNDER MIN REPL'D BLOCKS:  4 (1.0335917 %)
   dfs.namenode.replication.min: 1
   CORRUPT FILES:4
   MISSING BLOCKS:   4
   MISSING SIZE: 332 B
   
  Minimally replicated blocks:   387 (100.0 %)
  Over-replicated blocks:0 (0.0 %)
  Under-replicated blocks:   0 (0.0 %)
  Mis-replicated blocks: 0 (0.0 %)
  Default replication factor:3
  Average block replication: 3.0
  Corrupt blocks:0
  Missing replicas:  0 (0.0 %)
  Number of data-nodes:  3
  Number of racks:   1
 FSCK ended at Wed Jun 24 20:40:17 GMT 2015 in 7 milliseconds
 The filesystem under path '/' is CORRUPT
 hdfs@hades1:/var/opt/teradata/packages>
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7116) Add a metric to expose the bandwidth of balancer

2015-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707525#comment-14707525
 ] 

Hadoop QA commented on HDFS-7116:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  23m 35s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 55s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 55s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   2m 59s | Site still builds. |
| {color:red}-1{color} | checkstyle |   2m 29s | The applied patch generated  3 
new checkstyle issues (total was 222, now 224). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 27s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 30s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  22m 41s | Tests passed in 
hadoop-common. |
| {color:green}+1{color} | hdfs tests | 163m  6s | Tests passed in hadoop-hdfs. 
|
| | | 239m 36s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12751690/HDFS-7116-05.patch |
| Optional Tests | site javadoc javac unit findbugs checkstyle |
| git revision | trunk / 22de7c1 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12070/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12070/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12070/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12070/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12070/console |


This message was automatically generated.

 Add a metric to expose the bandwidth of balancer
 

 Key: HDFS-7116
 URL: https://issues.apache.org/jira/browse/HDFS-7116
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: balancer & mover
Reporter: Akira AJISAKA
Assignee: Rakesh R
 Attachments: HDFS-7116-00.patch, HDFS-7116-01.patch, 
 HDFS-7116-02.patch, HDFS-7116-03.patch, HDFS-7116-04.patch, HDFS-7116-05.patch


 Currently, reading the logs is the only way to check how the balancer 
 bandwidth is set. It would be useful for administrators if they could get the 
 value directly. This jira is to discuss & implement a way to access the 
 balancer bandwidth value of the datanode.
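
 For illustration, a minimal sketch of how an administrator could read such a 
 metric over JMX once it is exposed. The bean and attribute names here 
 ("Hadoop:service=DataNode,name=DataNodeInfo" and "BalancerBandwidth") are 
 assumptions for the sketch, not necessarily the names the patch uses, and a 
 real admin tool would attach to the datanode's JMX port or /jmx servlet 
 rather than the local JVM.
 {code}
 import java.lang.management.ManagementFactory;
 import javax.management.MBeanServer;
 import javax.management.ObjectName;

 public class ReadBalancerBandwidth {
   public static void main(String[] args) throws Exception {
     MBeanServer server = ManagementFactory.getPlatformMBeanServer();
     // hypothetical bean/attribute names, see the note above
     ObjectName name =
         new ObjectName("Hadoop:service=DataNode,name=DataNodeInfo");
     Object bandwidth = server.getAttribute(name, "BalancerBandwidth");
     System.out.println("balancer bandwidth = " + bandwidth + " bytes/s");
   }
 }
 {code}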



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8934) Move ShortCircuitShm to hdfs-client

2015-08-21 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-8934:
-
Fix Version/s: (was: 2.8.0)

 Move ShortCircuitShm to hdfs-client
 ---

 Key: HDFS-8934
 URL: https://issues.apache.org/jira/browse/HDFS-8934
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: build
Reporter: Mingliang Liu
Assignee: Mingliang Liu
 Attachments: HDFS-8934.000.patch


 This jira tracks the effort of moving the {{ShortCircuitShm}} class into the 
 hdfs-client module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8934) Move ShortCircuitShm to hdfs-client

2015-08-21 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707549#comment-14707549
 ] 

Haohui Mai commented on HDFS-8934:
--

Thanks for the work, Mingliang! It looks good to me overall.

Some quick comments:

1. You can generate the patch using {{git diff -M}}.
2. The new patch should not introduce new trailing whitespace.

+1 after a clean Jenkins run and once the above comments are addressed.

Since this jira is mostly about moving the relevant classes to the 
{{hdfs-client}} package, it might be better to address the following comments 
in separate jiras:

1. No guards are required when calling {{LOG.debug()}} and {{LOG.trace()}} in 
slf4j.
2. Fixing the checkstyle error.
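
For reference, a minimal sketch of why no guards are needed with slf4j's 
parameterized logging (the class and method are hypothetical):
{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Slf4jGuards {
  private static final Logger LOG = LoggerFactory.getLogger(Slf4jGuards.class);

  void example(Object state) {
    // No isDebugEnabled() guard needed: slf4j substitutes {} lazily,
    // so the message is only formatted when DEBUG is enabled.
    LOG.debug("state = {}", state);

    // A guard is only worth it when computing the *argument* is costly:
    if (LOG.isTraceEnabled()) {
      LOG.trace("dump = {}", buildExpensiveDump());
    }
  }

  private String buildExpensiveDump() { return "..."; }
}
{code}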

 Move ShortCircuitShm to hdfs-client
 ---

 Key: HDFS-8934
 URL: https://issues.apache.org/jira/browse/HDFS-8934
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: build
Reporter: Mingliang Liu
Assignee: Mingliang Liu
 Attachments: HDFS-8934.000.patch


 This jira tracks the effort of moving the {{ShortCircuitShm}} class into the 
 hdfs-client module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8924) Add pluggable interface for reading replicas in DFSClient

2015-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707596#comment-14707596
 ] 

Hadoop QA commented on HDFS-8924:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  19m 21s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 44s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 46s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 33s | The applied patch generated  6 
new checkstyle issues (total was 40, now 46). |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 31s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 29s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m  8s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests | 163m 24s | Tests passed in hadoop-hdfs. 
|
| {color:green}+1{color} | hdfs tests |   0m 29s | Tests passed in 
hadoop-hdfs-client. |
| | | 213m 30s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12751509/HDFS-8924.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 22de7c1 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12075/artifact/patchprocess/diffcheckstylehadoop-hdfs-client.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12075/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12075/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12075/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12075/console |


This message was automatically generated.

 Add pluggable interface for reading replicas in DFSClient
 -

 Key: HDFS-8924
 URL: https://issues.apache.org/jira/browse/HDFS-8924
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 2.8.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-8924.001.patch, HDFS-8924.002.patch


 We should add a pluggable interface for reading replicas in the DFSClient.  
 This could be used to implement short-circuit reads on systems without file 
 descriptors, or for other optimizations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8809) HDFS fsck reports under construction blocks as CORRUPT

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707601#comment-14707601
 ] 

Hudson commented on HDFS-8809:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2221 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2221/])
HDFS-8809. HDFS fsck reports under construction blocks as CORRUPT. Contributed 
by Jing Zhao. (jing9: rev c8bca62718203a1dad9b70d164bdf10cc71b40cd)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 HDFS fsck reports under construction blocks as CORRUPT
 

 Key: HDFS-8809
 URL: https://issues.apache.org/jira/browse/HDFS-8809
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 2.7.0
 Environment: Hadoop 2.7.1 and HBase 1.1.1, on SUSE11sp3 (other 
 Linuxes not tested, probably not platform-dependent).  This did NOT happen 
 with Hadoop 2.4 and HBase 0.98.
Reporter: Sudhir Prakash
Assignee: Jing Zhao
 Fix For: 2.8.0

 Attachments: HDFS-8809.000.patch


 Whenever HBase is running, "hdfs fsck /" reports four HBase-related files in 
 the path hbase/data/WALs/ as CORRUPT. Even after letting the cluster sit idle 
 for a couple of hours, it is still in the corrupt state. If HBase is shut 
 down, the problem goes away; if HBase is then restarted, the problem recurs. 
 This was observed with Hadoop 2.7.1 and HBase 1.1.1, and did NOT happen with 
 Hadoop 2.4 and HBase 0.98.
 {code}
 hades1:/var/opt/teradata/packages # su hdfs
 hdfs@hades1:/var/opt/teradata/packages> hdfs fsck /
 Connecting to namenode via 
 http://hades1.labs.teradata.com:50070/fsck?ugi=hdfs&path=%2F
 FSCK started by hdfs (auth:SIMPLE) from /39.0.8.2 for path / at Wed Jun 24 
 20:40:17 GMT 2015
 ...
 /apps/hbase/data/WALs/hades4.labs.teradata.com,16020,1435168292684/hades4.labs.teradata.com%2C16020%2C1435168292684.default.1435175500556:
  MISSING 1 blocks of total size 83 B.
 /apps/hbase/data/WALs/hades5.labs.teradata.com,16020,1435168290466/hades5.labs.teradata.com%2C16020%2C1435168290466..meta.1435175562144.meta:
  MISSING 1 blocks of total size 83 B.
 /apps/hbase/data/WALs/hades5.labs.teradata.com,16020,1435168290466/hades5.labs.teradata.com%2C16020%2C1435168290466.default.1435175498500:
  MISSING 1 blocks of total size 83 B.
 /apps/hbase/data/WALs/hades6.labs.teradata.com,16020,1435168292373/hades6.labs.teradata.com%2C16020%2C1435168292373.default.1435175500301:
  MISSING 1 blocks of total size 83 
 B..
 
 
 Status:
  CORRUPT
  Total size:723977553 B (Total open files size: 332 B)
  Total dirs:79
  Total files:   388
  Total symlinks:0 (Files currently being written: 5)
  Total blocks (validated):  387 (avg. block size 1870743 B) (Total open 
 file blocks (not validated): 4)
   
   UNDER MIN REPL'D BLOCKS:  4 (1.0335917 %)
   dfs.namenode.replication.min: 1
   CORRUPT FILES:4
   MISSING BLOCKS:   4
   MISSING SIZE: 332 B
   
  Minimally replicated blocks:   387 (100.0 %)
  Over-replicated blocks:0 (0.0 %)
  Under-replicated blocks:   0 (0.0 %)
  Mis-replicated blocks: 0 (0.0 %)
  Default replication factor:3
  Average block replication: 3.0
  Corrupt blocks:0
  Missing replicas:  0 (0.0 %)
  Number of data-nodes:  3
  Number of racks:   1
 FSCK ended at Wed Jun 24 20:40:17 GMT 2015 in 7 milliseconds
 The filesystem under path '/' is CORRUPT
 hdfs@hades1:/var/opt/teradata/packages>
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8884) Fail-fast check in BlockPlacementPolicyDefault#chooseTarget

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707608#comment-14707608
 ] 

Hudson commented on HDFS-8884:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2221 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2221/])
HDFS-8884. Fail-fast check in BlockPlacementPolicyDefault#chooseTarget. (yliu) 
(yliu: rev 80a29906bcd718bbba223fa099e523281d9f3369)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyWithNodeGroup.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDefaultBlockPlacementPolicy.java


 Fail-fast check in BlockPlacementPolicyDefault#chooseTarget
 ---

 Key: HDFS-8884
 URL: https://issues.apache.org/jira/browse/HDFS-8884
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 2.8.0

 Attachments: HDFS-8884.001.patch, HDFS-8884.002.patch


 In the current BlockPlacementPolicyDefault, when choosing a datanode storage 
 to place a block, we have the following logic:
 {code}
 final DatanodeStorageInfo[] storages = DFSUtil.shuffle(
     chosenNode.getStorageInfos());
 int i = 0;
 boolean search = true;
 for (Iterator<Map.Entry<StorageType, Integer>> iter = storageTypes
     .entrySet().iterator(); search && iter.hasNext(); ) {
   Map.Entry<StorageType, Integer> entry = iter.next();
   for (i = 0; i < storages.length; i++) {
     StorageType type = entry.getKey();
     final int newExcludedNodes = addIfIsGoodTarget(storages[i],
 {code}
 We will iterate (actually two {{for}} loops, although their iteration counts 
 are usually small) over all storages of the candidate datanode even when the 
 datanode itself is not good (e.g. decommissioned, stale, too busy..), since 
 currently we do all the checks in {{addIfIsGoodTarget}}.
 We can fail fast: check the datanode-related conditions first; if the 
 datanode is not good, there is no need to shuffle and iterate its storages. 
 That is more efficient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8828) Utilize Snapshot diff report to build diff copy list in distcp

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707603#comment-14707603
 ] 

Hudson commented on HDFS-8828:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2221 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2221/])
HDFS-8828. Utilize Snapshot diff report to build diff copy list in distcp. 
(Yufei Gu via Yongjun Zhang) (yzhang: rev 
0bc15cb6e60dc60885234e01dec1c7cb4557a926)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestOptionsParser.java
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java
* hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DiffInfo.java
* 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpSync.java
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyListing.java
* hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java


 Utilize Snapshot diff report to build diff copy list in distcp
 --

 Key: HDFS-8828
 URL: https://issues.apache.org/jira/browse/HDFS-8828
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: distcp, snapshots
Reporter: Yufei Gu
Assignee: Yufei Gu
 Fix For: 2.8.0

 Attachments: HDFS-8828.001.patch, HDFS-8828.002.patch, 
 HDFS-8828.003.patch, HDFS-8828.004.patch, HDFS-8828.005.patch, 
 HDFS-8828.006.patch, HDFS-8828.007.patch, HDFS-8828.008.patch, 
 HDFS-8828.009.patch, HDFS-8828.010.patch, HDFS-8828.011.patch


 Some users reported a huge time cost to build the file copy list in distcp 
 (30 hours for 1.6M files). We can leverage the snapshot diff report to build 
 a copy list containing only the files/dirs which changed between two 
 snapshots (or a snapshot and a normal dir). It speeds up the process in two 
 ways: 1. less copy-list building time; 2. fewer file-copy MR jobs.
 The HDFS snapshot diff report provides information about file/directory 
 creation, deletion, rename and modification between two snapshots, or between 
 a snapshot and a normal directory. HDFS-7535 synchronizes deletion and 
 rename, then falls back to the default distcp, so it still relies on the 
 default distcp to build the complete list of files under the source dir. This 
 patch puts only created and modified files into the copy list, based on the 
 snapshot diff report, so we can minimize the number of files to copy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8922) Link the native_mini_dfs test library with libdl, since IBM Java requires it

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707602#comment-14707602
 ] 

Hudson commented on HDFS-8922:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2221 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2221/])
HDFS-8922. Link the native_mini_dfs test library with libdl, since IBM Java 
requires it (Ayappan via Colin P. McCabe) (cmccabe: rev 
7642f64c24961d2b4772591a0957e2699162a083)
* hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Link the native_mini_dfs test library with libdl, since IBM Java requires it
 

 Key: HDFS-8922
 URL: https://issues.apache.org/jira/browse/HDFS-8922
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Affects Versions: 2.7.1
 Environment: IBM Java RHEL7.1 
Reporter: Ayappan
Assignee: Ayappan
 Fix For: 2.8.0

 Attachments: HDFS-8922.patch


 Building hadoop-hdfs-project with the -Pnative option using IBM Java fails 
 with the following error:
 [exec] Linking C executable test_native_mini_dfs
  [exec] /usr/bin/cmake -E cmake_link_script 
 CMakeFiles/test_native_mini_dfs.dir/link.txt --verbose=1
  [exec] /usr/bin/cc   -g -Wall -O2 -D_REENTRANT -D_GNU_SOURCE 
 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -fvisibility=hidden
 CMakeFiles/test_native_mini_dfs.dir/main/native/libhdfs/test_native_mini_dfs.c.o
   -o test_native_mini_dfs -rdynamic libnative_mini_dfs.a 
 /home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic/libjvm.so -lpthread 
 -Wl,-rpath,/home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic
  [exec] make[2]: Leaving directory 
 `/home/ayappan/hadoop_2.7.1_new/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/native'
  [exec] make[1]: Leaving directory 
 `/home/ayappan/hadoop_2.7.1_new/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/native'
  [exec] 
 /home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic/libjvm.so: 
 undefined reference to `dlopen'
  [exec] 
 /home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic/libjvm.so: 
 undefined reference to `dlclose'
  [exec] 
 /home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic/libjvm.so: 
 undefined reference to `dlerror'
  [exec] 
 /home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic/libjvm.so: 
 undefined reference to `dlsym'
  [exec] 
 /home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic/libjvm.so: 
 undefined reference to `dladdr'
  [exec] collect2: error: ld returned 1 exit status
  [exec] make[2]: *** [test_native_mini_dfs] Error 1
  [exec] make[1]: *** [CMakeFiles/test_native_mini_dfs.dir/all] Error 2
  [exec] make: *** [all] Error 2
 It seems the IBM JVM requires libdl when linking native_mini_dfs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8891) HDFS concat should keep srcs order

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707604#comment-14707604
 ] 

Hudson commented on HDFS-8891:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2221 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2221/])
HDFS-8891. HDFS concat should keep srcs order. Contributed by Yong Zhang. 
(cdouglas: rev b0564c9f3c501bf7806f07649929038624dea10f)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 HDFS concat should keep srcs order
 --

 Key: HDFS-8891
 URL: https://issues.apache.org/jira/browse/HDFS-8891
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yong Zhang
Assignee: Yong Zhang
Priority: Blocker
 Fix For: 2.7.2

 Attachments: HDFS-8891.001.patch, HDFS-8891.002.patch


 FSDirConcatOp.verifySrcFiles may change the order of the src files, but it 
 should keep their order as given in the input.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8863) The remaining space check in BlockPlacementPolicyDefault is flawed

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707606#comment-14707606
 ] 

Hudson commented on HDFS-8863:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2221 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2221/])
HDFS-8863. The remaining space check in BlockPlacementPolicyDefault is flawed. 
(Kihwal Lee via yliu) (yliu: rev 5e8fe8943718309b5e39a794360aebccae28b331)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 The remaining space check in BlockPlacementPolicyDefault is flawed
 --

 Key: HDFS-8863
 URL: https://issues.apache.org/jira/browse/HDFS-8863
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Critical
  Labels: 2.6.1-candidate
 Fix For: 2.7.2

 Attachments: HDFS-8863.patch, HDFS-8863.v2.patch, HDFS-8863.v3.patch


 The block placement policy calls 
 {{DatanodeDescriptor#getRemaining(StorageType)}} to check whether the block 
 is going to fit. Since the method adds up the remaining space across all 
 storages, the namenode can allocate a new block on a full node. This causes 
 pipeline construction failure and {{abandonBlock}}. If the cluster is nearly 
 full, the client might hit this multiple times and the write can fail 
 permanently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8922) Link the native_mini_dfs test library with libdl, since IBM Java requires it

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707539#comment-14707539
 ] 

Hudson commented on HDFS-8922:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2240 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2240/])
HDFS-8922. Link the native_mini_dfs test library with libdl, since IBM Java 
requires it (Ayappan via Colin P. McCabe) (cmccabe: rev 
7642f64c24961d2b4772591a0957e2699162a083)
* hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Link the native_mini_dfs test library with libdl, since IBM Java requires it
 

 Key: HDFS-8922
 URL: https://issues.apache.org/jira/browse/HDFS-8922
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Affects Versions: 2.7.1
 Environment: IBM Java RHEL7.1 
Reporter: Ayappan
Assignee: Ayappan
 Fix For: 2.8.0

 Attachments: HDFS-8922.patch


 Building hadoop-hdfs-project with the -Pnative option using IBM Java fails 
 with the following error:
 [exec] Linking C executable test_native_mini_dfs
  [exec] /usr/bin/cmake -E cmake_link_script 
 CMakeFiles/test_native_mini_dfs.dir/link.txt --verbose=1
  [exec] /usr/bin/cc   -g -Wall -O2 -D_REENTRANT -D_GNU_SOURCE 
 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -fvisibility=hidden
 CMakeFiles/test_native_mini_dfs.dir/main/native/libhdfs/test_native_mini_dfs.c.o
   -o test_native_mini_dfs -rdynamic libnative_mini_dfs.a 
 /home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic/libjvm.so -lpthread 
 -Wl,-rpath,/home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic
  [exec] make[2]: Leaving directory 
 `/home/ayappan/hadoop_2.7.1_new/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/native'
  [exec] make[1]: Leaving directory 
 `/home/ayappan/hadoop_2.7.1_new/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/native'
  [exec] 
 /home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic/libjvm.so: 
 undefined reference to `dlopen'
  [exec] 
 /home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic/libjvm.so: 
 undefined reference to `dlclose'
  [exec] 
 /home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic/libjvm.so: 
 undefined reference to `dlerror'
  [exec] 
 /home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic/libjvm.so: 
 undefined reference to `dlsym'
  [exec] 
 /home/ayappan/ibm-java-ppc64le-71/jre/lib/ppc64le/classic/libjvm.so: 
 undefined reference to `dladdr'
  [exec] collect2: error: ld returned 1 exit status
  [exec] make[2]: *** [test_native_mini_dfs] Error 1
  [exec] make[1]: *** [CMakeFiles/test_native_mini_dfs.dir/all] Error 2
  [exec] make: *** [all] Error 2
 It seems the IBM JVM requires libdl when linking native_mini_dfs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8891) HDFS concat should keep srcs order

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707541#comment-14707541
 ] 

Hudson commented on HDFS-8891:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2240 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2240/])
HDFS-8891. HDFS concat should keep srcs order. Contributed by Yong Zhang. 
(cdouglas: rev b0564c9f3c501bf7806f07649929038624dea10f)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 HDFS concat should keep srcs order
 --

 Key: HDFS-8891
 URL: https://issues.apache.org/jira/browse/HDFS-8891
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yong Zhang
Assignee: Yong Zhang
Priority: Blocker
 Fix For: 2.7.2

 Attachments: HDFS-8891.001.patch, HDFS-8891.002.patch


 FSDirConcatOp.verifySrcFiles may change the order of the src files, but it 
 should keep their order as given in the input.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8863) The remaining space check in BlockPlacementPolicyDefault is flawed

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707543#comment-14707543
 ] 

Hudson commented on HDFS-8863:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2240 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2240/])
HDFS-8863. The remaining space check in BlockPlacementPolicyDefault is flawed. 
(Kihwal Lee via yliu) (yliu: rev 5e8fe8943718309b5e39a794360aebccae28b331)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 The remaining space check in BlockPlacementPolicyDefault is flawed
 --

 Key: HDFS-8863
 URL: https://issues.apache.org/jira/browse/HDFS-8863
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Critical
  Labels: 2.6.1-candidate
 Fix For: 2.7.2

 Attachments: HDFS-8863.patch, HDFS-8863.v2.patch, HDFS-8863.v3.patch


 The block placement policy calls 
 {{DatanodeDescriptor#getRemaining(StorageType)}} to check whether the block 
 is going to fit. Since the method adds up the remaining space across all 
 storages, the namenode can allocate a new block on a full node. This causes 
 pipeline construction failure and {{abandonBlock}}. If the cluster is nearly 
 full, the client might hit this multiple times and the write can fail 
 permanently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8828) Utilize Snapshot diff report to build diff copy list in distcp

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707540#comment-14707540
 ] 

Hudson commented on HDFS-8828:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2240 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2240/])
HDFS-8828. Utilize Snapshot diff report to build diff copy list in distcp. 
(Yufei Gu via Yongjun Zhang) (yzhang: rev 
0bc15cb6e60dc60885234e01dec1c7cb4557a926)
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyListing.java
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestOptionsParser.java
* hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DiffInfo.java
* 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpSync.java
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java
* hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java


 Utilize Snapshot diff report to build diff copy list in distcp
 --

 Key: HDFS-8828
 URL: https://issues.apache.org/jira/browse/HDFS-8828
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: distcp, snapshots
Reporter: Yufei Gu
Assignee: Yufei Gu
 Fix For: 2.8.0

 Attachments: HDFS-8828.001.patch, HDFS-8828.002.patch, 
 HDFS-8828.003.patch, HDFS-8828.004.patch, HDFS-8828.005.patch, 
 HDFS-8828.006.patch, HDFS-8828.007.patch, HDFS-8828.008.patch, 
 HDFS-8828.009.patch, HDFS-8828.010.patch, HDFS-8828.011.patch


 Some users reported a huge time cost to build the file copy list in distcp 
 (30 hours for 1.6M files). We can leverage the snapshot diff report to build 
 a copy list containing only the files/dirs which changed between two 
 snapshots (or a snapshot and a normal dir). It speeds up the process in two 
 ways: 1. less copy-list building time; 2. fewer file-copy MR jobs.
 The HDFS snapshot diff report provides information about file/directory 
 creation, deletion, rename and modification between two snapshots, or between 
 a snapshot and a normal directory. HDFS-7535 synchronizes deletion and 
 rename, then falls back to the default distcp, so it still relies on the 
 default distcp to build the complete list of files under the source dir. This 
 patch puts only created and modified files into the copy list, based on the 
 snapshot diff report, so we can minimize the number of files to copy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8809) HDFS fsck reports under construction blocks as CORRUPT

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707538#comment-14707538
 ] 

Hudson commented on HDFS-8809:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2240 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2240/])
HDFS-8809. HDFS fsck reports under construction blocks as CORRUPT. Contributed 
by Jing Zhao. (jing9: rev c8bca62718203a1dad9b70d164bdf10cc71b40cd)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java


 HDFS fsck reports under construction blocks as CORRUPT
 

 Key: HDFS-8809
 URL: https://issues.apache.org/jira/browse/HDFS-8809
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 2.7.0
 Environment: Hadoop 2.7.1 and HBase 1.1.1, on SUSE11sp3 (other 
 Linuxes not tested, probably not platform-dependent).  This did NOT happen 
 with Hadoop 2.4 and HBase 0.98.
Reporter: Sudhir Prakash
Assignee: Jing Zhao
 Fix For: 2.8.0

 Attachments: HDFS-8809.000.patch


 Whenever HBase is running, {{hdfs fsck /}} reports four HBase-related 
 files in the path hbase/data/WALs/ as CORRUPT. Even after letting the 
 cluster sit idle for a couple of hours, it is still in the corrupt state. If 
 HBase is shut down, the problem goes away; if HBase is then restarted, the 
 problem recurs. This was observed with Hadoop 2.7.1 and HBase 1.1.1, and did 
 NOT happen with Hadoop 2.4 and HBase 0.98.
 {code}
 hades1:/var/opt/teradata/packages # su hdfs
 hdfs@hades1:/var/opt/teradata/packages hdfs fsck /
 Connecting to namenode via 
 http://hades1.labs.teradata.com:50070/fsck?ugi=hdfspath=%2F
 FSCK started by hdfs (auth:SIMPLE) from /39.0.8.2 for path / at Wed Jun 24 
 20:40:17 GMT 2015
 ...
 /apps/hbase/data/WALs/hades4.labs.teradata.com,16020,1435168292684/hades4.labs.teradata.com%2C16020%2C1435168292684.default.1435175500556:
  MISSING 1 blocks of total size 83 B.
 /apps/hbase/data/WALs/hades5.labs.teradata.com,16020,1435168290466/hades5.labs.teradata.com%2C16020%2C1435168290466..meta.1435175562144.meta:
  MISSING 1 blocks of total size 83 B.
 /apps/hbase/data/WALs/hades5.labs.teradata.com,16020,1435168290466/hades5.labs.teradata.com%2C16020%2C1435168290466.default.1435175498500:
  MISSING 1 blocks of total size 83 B.
 /apps/hbase/data/WALs/hades6.labs.teradata.com,16020,1435168292373/hades6.labs.teradata.com%2C16020%2C1435168292373.default.1435175500301:
  MISSING 1 blocks of total size 83 
 B..
 
 
 Status:
  CORRUPT
  Total size:723977553 B (Total open files size: 332 B)
  Total dirs:79
  Total files:   388
  Total symlinks:0 (Files currently being written: 5)
  Total blocks (validated):  387 (avg. block size 1870743 B) (Total open 
 file blocks (not validated): 4)
   
   UNDER MIN REPL'D BLOCKS:  4 (1.0335917 %)
   dfs.namenode.replication.min: 1
   CORRUPT FILES:4
   MISSING BLOCKS:   4
   MISSING SIZE: 332 B
   
  Minimally replicated blocks:   387 (100.0 %)
  Over-replicated blocks:0 (0.0 %)
  Under-replicated blocks:   0 (0.0 %)
  Mis-replicated blocks: 0 (0.0 %)
  Default replication factor:3
  Average block replication: 3.0
  Corrupt blocks:0
  Missing replicas:  0 (0.0 %)
  Number of data-nodes:  3
  Number of racks:   1
 FSCK ended at Wed Jun 24 20:40:17 GMT 2015 in 7 milliseconds
 The filesystem under path '/' is CORRUPT
 hdfs@hades1:/var/opt/teradata/packages
 {code}
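 A hedged sketch of the fix direction (not the actual {{NamenodeFsck}} change; 
 all names below are illustrative): a finalized block with zero live replicas 
 is missing, but the last block of a file still open for write is expected to 
 lack finalized replicas and should not mark the filesystem CORRUPT.
 {code:java}
 import java.util.Arrays;
 import java.util.List;
 
 public class FsckUcBlockSketch {
   static class BlockInfo {
     final boolean complete;   // finalized, or still under construction?
     final int liveReplicas;
     BlockInfo(boolean complete, int liveReplicas) {
       this.complete = complete;
       this.liveReplicas = liveReplicas;
     }
   }
 
   static int countMissing(List<BlockInfo> blocks) {
     int missing = 0;
     for (BlockInfo b : blocks) {
       // Only finalized, replica-less blocks count as missing.
       if (b.complete && b.liveReplicas == 0) {
         missing++;
       }
     }
     return missing;
   }
 
   public static void main(String[] args) {
     List<BlockInfo> blocks = Arrays.asList(
         new BlockInfo(true, 3),    // healthy
         new BlockInfo(false, 0));  // under construction: not "missing"
     System.out.println("missing=" + countMissing(blocks)); // prints missing=0
   }
 }
 {code}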



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8838) Tolerate datanode failures in DFSStripedOutputStream when the data length is small

2015-08-21 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-8838:

Attachment: HDFS-8838-HDFS-7285-20150821.patch

Upload a rebased patch for Nicholas.

 Tolerate datanode failures in DFSStripedOutputStream when the data length is 
 small
 --

 Key: HDFS-8838
 URL: https://issues.apache.org/jira/browse/HDFS-8838
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Attachments: HDFS-8838-HDFS-7285-000.patch, 
 HDFS-8838-HDFS-7285-20150809-test.patch, HDFS-8838-HDFS-7285-20150809.patch, 
 HDFS-8838-HDFS-7285-20150821.patch, h8838_20150729.patch, 
 h8838_20150731-HDFS-7285.patch, h8838_20150731.log, h8838_20150731.patch, 
 h8838_20150804-HDFS-7285.patch, h8838_20150809.patch


 Currently, DFSStripedOutputStream cannot tolerate datanode failures when the 
 data length is small.  We fix the bugs here and add more tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8388) Time and Date format need to be in sync in Namenode UI page

2015-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707565#comment-14707565
 ] 

Hadoop QA commented on HDFS-8388:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  22m 46s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 56s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 48s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   3m  1s | Site still builds. |
| {color:red}-1{color} | checkstyle |   2m 31s | The applied patch generated  3 
new checkstyle issues (total was 303, now 305). |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 2  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 29s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 40s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 22s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  22m 21s | Tests passed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests | 177m  7s | Tests failed in hadoop-hdfs. |
| | | 252m 29s | |
\\
\\
|| Reason || Tests ||
| Timed out tests | org.apache.hadoop.hdfs.TestPread |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12751718/HDFS-8388-005.patch |
| Optional Tests | site javadoc javac unit findbugs checkstyle |
| git revision | trunk / 22de7c1 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12072/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12072/artifact/patchprocess/whitespace.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12072/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12072/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12072/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf908.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12072/console |


This message was automatically generated.

 Time and Date format need to be in sync in Namenode UI page
 ---

 Key: HDFS-8388
 URL: https://issues.apache.org/jira/browse/HDFS-8388
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Archana T
Assignee: Surendra Singh Lilhore
Priority: Minor
 Attachments: HDFS-8388-002.patch, HDFS-8388-003.patch, 
 HDFS-8388-004.patch, HDFS-8388-005.patch, HDFS-8388.patch, HDFS-8388_1.patch, 
 ScreenShot-InvalidDate.png


 In the NameNode UI page, the date and time formats displayed are currently 
 not in sync:
 Started: Wed May 13 12:28:02 IST 2015
 Compiled: 23 Apr 2015 12:22:59
 Block Deletion Start Time: 13 May 2015 12:28:02
 We can keep a common format in all the above places.
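 For illustration, one candidate unified pattern (hedged: the patch may settle 
 on a different pattern, and the UI itself renders timestamps in JavaScript):
 {code:java}
 import java.text.SimpleDateFormat;
 import java.util.Date;
 
 public class CommonDateFormatSketch {
   public static void main(String[] args) {
     // A single pattern for Started/Compiled/Block Deletion Start Time.
     SimpleDateFormat fmt = new SimpleDateFormat("EEE MMM dd HH:mm:ss zzz yyyy");
     System.out.println(fmt.format(new Date()));
   }
 }
 {code}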



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8829) DataNode sets SO_RCVBUF explicitly is disabling tcp auto-tuning

2015-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707520#comment-14707520
 ] 

Hadoop QA commented on HDFS-8829:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  19m 40s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 48s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 47s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m  7s | The applied patch generated  6 
new checkstyle issues (total was 762, now 766). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 22s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 14s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 163m 11s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 29s | Tests passed in 
hadoop-hdfs-client. |
| | | 213m 59s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestEncryptedTransfer |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12751691/HDFS-8829.0002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 22de7c1 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12074/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12074/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12074/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12074/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12074/console |


This message was automatically generated.

 DataNode sets SO_RCVBUF explicitly is disabling tcp auto-tuning
 ---

 Key: HDFS-8829
 URL: https://issues.apache.org/jira/browse/HDFS-8829
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.3.0, 2.6.0
Reporter: He Tianyi
Assignee: He Tianyi
 Attachments: HDFS-8829.0001.patch, HDFS-8829.0002.patch


 {code:java}
 private void initDataXceiver(Configuration conf) throws IOException {
   // find free port or use privileged port provided
   TcpPeerServer tcpPeerServer;
   if (secureResources != null) {
     tcpPeerServer = new TcpPeerServer(secureResources);
   } else {
     tcpPeerServer = new TcpPeerServer(dnConf.socketWriteTimeout,
         DataNode.getStreamingAddr(conf));
   }
   tcpPeerServer.setReceiveBufferSize(HdfsConstants.DEFAULT_DATA_SOCKET_SIZE);
 {code}
 The last line sets SO_RCVBUF explicitly, thus disabling TCP auto-tuning on 
 some systems.
 Shall we make this behavior configurable?
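 A hedged sketch of the configurable behavior being proposed (the config key 
 name here is an assumption, not necessarily the one the patch introduces): 
 only force SO_RCVBUF when a positive size is configured, so that a 
 non-positive value leaves the kernel's TCP auto-tuning in effect.
 {code:java}
 import java.io.IOException;
 import org.apache.hadoop.conf.Configuration;
 
 public class RecvBufferSketch {
   // Stand-in for the peer server created in initDataXceiver().
   interface TcpPeerServerLike {
     void setReceiveBufferSize(int size) throws IOException;
   }
 
   static void maybeSetReceiveBuffer(Configuration conf,
       TcpPeerServerLike server) throws IOException {
     // Hypothetical key; defaulting to 0 means "do not touch SO_RCVBUF".
     int size =
         conf.getInt("dfs.datanode.transfer.socket.recv.buffer.size", 0);
     if (size > 0) {
       server.setReceiveBufferSize(size); // explicit buffer: auto-tuning off
     }
     // size <= 0: leave the socket buffer alone, keeping auto-tuning enabled
   }
 }
 {code}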



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8930) Block report lease may leak if the 2nd full block report comes when NN is still in safemode

2015-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707519#comment-14707519
 ] 

Hadoop QA commented on HDFS-8930:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  21m 16s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   9m  4s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 41s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 28s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 38s | The applied patch generated  1 
new checkstyle issues (total was 196, now 196). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   2m  0s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 41s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 45s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 12s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  69m  6s | Tests failed in hadoop-hdfs. |
| | | 121m 56s | |
\\
\\
|| Reason || Tests ||
| Timed out tests | org.apache.hadoop.hdfs.TestBlockReaderLocal |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12751611/HDFS-8930.000.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 22de7c1 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12076/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12076/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12076/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12076/console |


This message was automatically generated.

 Block report lease may leak if the 2nd full block report comes when NN is 
 still in safemode
 ---

 Key: HDFS-8930
 URL: https://issues.apache.org/jira/browse/HDFS-8930
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.8.0
Reporter: Jing Zhao
Assignee: Jing Zhao
Priority: Minor
 Attachments: HDFS-8930.000.patch


 This is a rare scenario in practice. If the NameNode is still in startup 
 SafeMode while a DataNode sends it the 2nd FBR, the NameNode assigns the 
 lease but rejects the report. The lease then remains in the NN until it 
 expires.
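 A hedged sketch of the fix idea (names are illustrative, not the actual 
 {{BlockReportLeaseManager}} code): when the report is rejected, the 
 just-assigned lease should be removed immediately instead of lingering until 
 expiry.
 {code:java}
 import java.util.HashMap;
 import java.util.Map;
 
 public class LeaseLeakSketch {
   private final Map<String, Long> leases = new HashMap<String, Long>();
 
   long requestLease(String datanodeId) {
     long leaseId = System.nanoTime();
     leases.put(datanodeId, leaseId);
     return leaseId;
   }
 
   // Returns true iff the full block report was accepted.
   boolean processReport(String datanodeId, boolean rejectInStartupSafeMode) {
     if (rejectInStartupSafeMode) {
       // The fix: release the lease on rejection instead of leaking it.
       leases.remove(datanodeId);
       return false;
     }
     leases.remove(datanodeId); // normal path: lease consumed by the report
     return true;
   }
 }
 {code}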



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8938) Refactor BlockManager in blockmanagement

2015-08-21 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-8938:

Attachment: HDFS-8938.000.patch

The v0 patch:
* Moves the inner classes {{BlockToMarkCorrupt}} and {{ReplicationWork}} to 
separate files in the same package
* Extracts the code sections that schedule and validate replication work in 
{{computeReplicationWorkForBlocks}} into respective helper methods (see the 
sketch after this list)
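
For illustration, the extract-method shape being described (all names are 
illustrative, not {{BlockManager}}'s real logic):
{code:java}
import java.util.ArrayList;
import java.util.List;

public class ExtractMethodSketch {
  static List<String> scheduleReplicationWork(List<String> blocks) {
    List<String> work = new ArrayList<String>();
    for (String block : blocks) {
      work.add("replicate:" + block); // real code picks sources/targets here
    }
    return work;
  }

  static int validateReplicationWork(List<String> work) {
    int scheduled = 0;
    for (String w : work) {
      if (w.startsWith("replicate:")) {
        scheduled++; // real code re-checks that targets are still valid
      }
    }
    return scheduled;
  }

  // The formerly monolithic method now just composes the two helpers.
  static int computeReplicationWorkForBlocks(List<String> blocks) {
    return validateReplicationWork(scheduleReplicationWork(blocks));
  }

  public static void main(String[] args) {
    List<String> blocks = new ArrayList<String>();
    blocks.add("blk_1");
    blocks.add("blk_2");
    System.out.println(computeReplicationWorkForBlocks(blocks)); // prints 2
  }
}
{code}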

 Refactor BlockManager in blockmanagement
 

 Key: HDFS-8938
 URL: https://issues.apache.org/jira/browse/HDFS-8938
 Project: Hadoop HDFS
  Issue Type: Task
  Components: build
Reporter: Mingliang Liu
Assignee: Mingliang Liu
 Attachments: HDFS-8938.000.patch


 This jira tracks the effort of refactoring the {{BlockManager}} in the 
 {{hdfs.server.blockmanagement}} package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8846) Create edit log files with old layout version for upgrade testing

2015-08-21 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8846:

Attachment: HDFS-8846.03.patch

Thanks Colin for the suggestion! Updating the patch with more comprehensive 
checking of returned edits.

 Create edit log files with old layout version for upgrade testing
 -

 Key: HDFS-8846
 URL: https://issues.apache.org/jira/browse/HDFS-8846
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.1
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8846.00.patch, HDFS-8846.01.patch, 
 HDFS-8846.02.patch, HDFS-8846.03.patch


 Per discussion under HDFS-8480, we should create some edit log files with an 
 old layout version, to test whether they can be handled correctly during 
 upgrades.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8939) Test(S)WebHdfsFileContextMainOperations failing on branch-2

2015-08-21 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8939:
--
Status: Patch Available  (was: Open)

 Test(S)WebHdfsFileContextMainOperations failing on branch-2
 ---

 Key: HDFS-8939
 URL: https://issues.apache.org/jira/browse/HDFS-8939
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.8.0
Reporter: Jakob Homan
 Fix For: 2.8.0

 Attachments: HDFS-8939-branch-2.001.patch


 After HDFS-8180, TestWebHdfsFileContextMainOperations and 
 TestSWebHdfsFileContextMainOperations are failing with runtime NPEs while 
 instantiating the wrapped WebHdfsFileSystems, because {{getDefaultPort}} is 
 trying to access a conf that was never provided. In the constructors of both 
 WebHdfs and SWebHdfs, the underlying (S)WebHdfsFileSystems are instantiated 
 and never have a chance to have their {{setConf}} methods called:
 {code}
 SWebHdfs(URI theUri, Configuration conf)
     throws IOException, URISyntaxException {
   super(theUri, new SWebHdfsFileSystem(), conf, SCHEME, false);
 }
 {code}
 The test passes on trunk because HDFS-5321 removed the call to the 
 Configuration instance as part of {{getDefaultPort}}. HDFS-5321 was applied 
 to branch-2 but reverted in HDFS-6632, so there's a bit of a difference in 
 how branch-2 versus trunk handles default values (branch-2 pulls them from 
 configs if specified, trunk just returns the hard-coded value from the 
 constants file).
 I've fixed this to behave like trunk and return just the hard-coded value, 
 which causes the test to pass.
 There is no WebHdfsFileSystem constructor that takes a Configuration, which 
 would be another way to fix this.
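 A hedged sketch of that trunk-style fix, as a method-level fragment of 
 WebHdfsFileSystem (assuming the default-port constant lives in 
 {{HdfsClientConfigKeys}}; the actual constant name may differ):
 {code:java}
 @Override
 public int getDefaultPort() {
   // Hard-coded default, matching trunk; no Configuration lookup, so no NPE
   // when setConf() was never called.
   return HdfsClientConfigKeys.DFS_NAMENODE_HTTP_PORT_DEFAULT;
 }
 {code}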



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8939) Test(S)WebHdfsFileContextMainOperations failing on branch-2

2015-08-21 Thread Jakob Homan (JIRA)
Jakob Homan created HDFS-8939:
-

 Summary: Test(S)WebHdfsFileContextMainOperations failing on 
branch-2
 Key: HDFS-8939
 URL: https://issues.apache.org/jira/browse/HDFS-8939
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.8.0
Reporter: Jakob Homan
 Fix For: 2.8.0


After HDFS-8180, TestWebHdfsFileContextMainOperations and 
TestSWebHdfsFileContextMainOperations are failing with runtime NPEs while 
instantiating the wrapped WebHdfsFileSystems, because {{getDefaultPort}} is 
trying to access a conf that was never provided. In the constructors of both 
WebHdfs and SWebHdfs, the underlying (S)WebHdfsFileSystems are instantiated 
and never have a chance to have their {{setConf}} methods called:
{code}
SWebHdfs(URI theUri, Configuration conf)
    throws IOException, URISyntaxException {
  super(theUri, new SWebHdfsFileSystem(), conf, SCHEME, false);
}
{code}

The test passes on trunk because HDFS-5321 removed the call to the 
Configuration instance as part of {{getDefaultPort}}. HDFS-5321 was applied to 
branch-2 but reverted in HDFS-6632, so there's a bit of a difference in how 
branch-2 versus trunk handles default values (branch-2 pulls them from configs 
if specified, trunk just returns the hard-coded value from the constants file).

I've fixed this to behave like trunk and return just the hard-coded value, 
which causes the test to pass.

There is no WebHdfsFileSystem constructor that takes a Configuration, which 
would be another way to fix this.






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8924) Add pluggable interface for reading replicas in DFSClient

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707765#comment-14707765
 ] 

Hudson commented on HDFS-8924:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #296 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/296/])
HDFS-8924. Add pluggable interface for reading replicas in DFSClient. (Colin 
Patrick McCabe via Lei Xu) (lei: rev 7087e700e032dabc174ecc12b62c12e7d49b995f)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Receiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestExternalBlockReader.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ReplicaAccessorBuilder.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/datatransfer.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/ExternalBlockReader.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ReplicaAccessor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Op.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/DfsClientConf.java


 Add pluggable interface for reading replicas in DFSClient
 -

 Key: HDFS-8924
 URL: https://issues.apache.org/jira/browse/HDFS-8924
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 2.8.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 3.0.0, 2.8.0

 Attachments: HDFS-8924.001.patch, HDFS-8924.002.patch


 We should add a pluggable interface for reading replicas in the DFSClient.  
 This could be used to implement short-circuit reads on systems without file 
 descriptors, or for other optimizations.
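 For illustration, how a client might opt in to such a plugin 
 ({{dfs.client.replica.accessor.builder.classes}} follows the 
 {{HdfsClientConfigKeys}} change this patch appears to add; 
 {{com.example.MyReplicaAccessorBuilder}} is a hypothetical subclass of the 
 new {{ReplicaAccessorBuilder}} abstract class):
 {code:java}
 import java.net.URI;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FSDataInputStream;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 
 public class PluggableReplicaReaderExample {
   public static void main(String[] args) throws Exception {
     Configuration conf = new Configuration();
     // Hypothetical plugin class; key name assumed from this patch's changes.
     conf.set("dfs.client.replica.accessor.builder.classes",
         "com.example.MyReplicaAccessorBuilder");
     FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020/"), conf);
     FSDataInputStream in = fs.open(new Path("/data/file"));
     try {
       byte[] buf = new byte[4096];
       int n = in.read(buf); // may be served by the custom ReplicaAccessor
       System.out.println("read " + n + " bytes");
     } finally {
       in.close();
       fs.close();
     }
   }
 }
 {code}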



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8939) Test(S)WebHdfsFileContextMainOperations failing on branch-2

2015-08-21 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8939:
--
Attachment: HDFS-8939-branch-2.001.patch

Match getDefaultPort behavior on trunk.

 Test(S)WebHdfsFileContextMainOperations failing on branch-2
 ---

 Key: HDFS-8939
 URL: https://issues.apache.org/jira/browse/HDFS-8939
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.8.0
Reporter: Jakob Homan
 Fix For: 2.8.0

 Attachments: HDFS-8939-branch-2.001.patch


 After HDFS-8180, TestWebHdfsFileContextMainOperations and 
 TestSWebHdfsFileContextMainOperations are failing with runtime NPEs while 
 instantiating the wrapped WebHdfsFileSystems, because {{getDefaultPort}} is 
 trying to access a conf that was never provided. In the constructors of both 
 WebHdfs and SWebHdfs, the underlying (S)WebHdfsFileSystems are instantiated 
 and never have a chance to have their {{setConf}} methods called:
 {code}
 SWebHdfs(URI theUri, Configuration conf)
     throws IOException, URISyntaxException {
   super(theUri, new SWebHdfsFileSystem(), conf, SCHEME, false);
 }
 {code}
 The test passes on trunk because HDFS-5321 removed the call to the 
 Configuration instance as part of {{getDefaultPort}}. HDFS-5321 was applied 
 to branch-2 but reverted in HDFS-6632, so there's a bit of a difference in 
 how branch-2 versus trunk handles default values (branch-2 pulls them from 
 configs if specified, trunk just returns the hard-coded value from the 
 constants file).
 I've fixed this to behave like trunk and return just the hard-coded value, 
 which causes the test to pass.
 There is no WebHdfsFileSystem constructor that takes a Configuration, which 
 would be another way to fix this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8924) Add pluggable interface for reading replicas in DFSClient

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707774#comment-14707774
 ] 

Hudson commented on HDFS-8924:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2241 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2241/])
HDFS-8924. Add pluggable interface for reading replicas in DFSClient. (Colin 
Patrick McCabe via Lei Xu) (lei: rev 7087e700e032dabc174ecc12b62c12e7d49b995f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Receiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestExternalBlockReader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/datatransfer.proto
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ReplicaAccessorBuilder.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/ExternalBlockReader.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ReplicaAccessor.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Op.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/DfsClientConf.java


 Add pluggable interface for reading replicas in DFSClient
 -

 Key: HDFS-8924
 URL: https://issues.apache.org/jira/browse/HDFS-8924
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 2.8.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 3.0.0, 2.8.0

 Attachments: HDFS-8924.001.patch, HDFS-8924.002.patch


 We should add a pluggable interface for reading replicas in the DFSClient.  
 This could be used to implement short-circuit reads on systems without file 
 descriptors, or for other optimizations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8924) Add pluggable interface for reading replicas in DFSClient

2015-08-21 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707663#comment-14707663
 ] 

Lei (Eddy) Xu commented on HDFS-8924:
-

+1. Will commit shortly.

 Add pluggable interface for reading replicas in DFSClient
 -

 Key: HDFS-8924
 URL: https://issues.apache.org/jira/browse/HDFS-8924
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 2.8.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-8924.001.patch, HDFS-8924.002.patch


 We should add a pluggable interface for reading replicas in the DFSClient.  
 This could be used to implement short-circuit reads on systems without file 
 descriptors, or for other optimizations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8924) Add pluggable interface for reading replicas in DFSClient

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707690#comment-14707690
 ] 

Hudson commented on HDFS-8924:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8336 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8336/])
HDFS-8924. Add pluggable interface for reading replicas in DFSClient. (Colin 
Patrick McCabe via Lei Xu) (lei: rev 7087e700e032dabc174ecc12b62c12e7d49b995f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/ExternalBlockReader.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ReplicaAccessorBuilder.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestExternalBlockReader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Op.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/DfsClientConf.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ReplicaAccessor.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Receiver.java
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/datatransfer.proto


 Add pluggable interface for reading replicas in DFSClient
 -

 Key: HDFS-8924
 URL: https://issues.apache.org/jira/browse/HDFS-8924
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 2.8.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 3.0.0, 2.8.0

 Attachments: HDFS-8924.001.patch, HDFS-8924.002.patch


 We should add a pluggable interface for reading replicas in the DFSClient.  
 This could be used to implement short-circuit reads on systems without file 
 descriptors, or for other optimizations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8938) Refactor BlockManager in blockmanagement

2015-08-21 Thread Mingliang Liu (JIRA)
Mingliang Liu created HDFS-8938:
---

 Summary: Refactor BlockManager in blockmanagement
 Key: HDFS-8938
 URL: https://issues.apache.org/jira/browse/HDFS-8938
 Project: Hadoop HDFS
  Issue Type: Task
  Components: build
Reporter: Mingliang Liu
Assignee: Mingliang Liu


This jira tracks the effort of refactoring the {{BlockManager}} in the 
{{hdfs.server.blockmanagement}} package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-8939) Test(S)WebHdfsFileContextMainOperations failing on branch-2

2015-08-21 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan reassigned HDFS-8939:
-

Assignee: Jakob Homan

 Test(S)WebHdfsFileContextMainOperations failing on branch-2
 ---

 Key: HDFS-8939
 URL: https://issues.apache.org/jira/browse/HDFS-8939
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.8.0
Reporter: Jakob Homan
Assignee: Jakob Homan
 Fix For: 2.8.0

 Attachments: HDFS-8939-branch-2.001.patch


 After HDFS-8180, TestWebHdfsFileContextMainOperations and 
 TestSWebHdfsFileContextMainOperations are failing with runtime NPEs while 
 instantiating the wrapped WebHdfsFileSystems, because {{getDefaultPort}} is 
 trying to access a conf that was never provided. In the constructors of both 
 WebHdfs and SWebHdfs, the underlying (S)WebHdfsFileSystems are instantiated 
 and never have a chance to have their {{setConf}} methods called:
 {code}
 SWebHdfs(URI theUri, Configuration conf)
     throws IOException, URISyntaxException {
   super(theUri, new SWebHdfsFileSystem(), conf, SCHEME, false);
 }
 {code}
 The test passes on trunk because HDFS-5321 removed the call to the 
 Configuration instance as part of {{getDefaultPort}}. HDFS-5321 was applied 
 to branch-2 but reverted in HDFS-6632, so there's a bit of a difference in 
 how branch-2 versus trunk handles default values (branch-2 pulls them from 
 configs if specified, trunk just returns the hard-coded value from the 
 constants file).
 I've fixed this to behave like trunk and return just the hard-coded value, 
 which causes the test to pass.
 There is no WebHdfsFileSystem constructor that takes a Configuration, which 
 would be another way to fix this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8924) Add pluggable interface for reading replicas in DFSClient

2015-08-21 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-8924:

   Resolution: Fixed
Fix Version/s: 2.8.0
   3.0.0
   Status: Resolved  (was: Patch Available)

Thanks a lot for working on this, [~cmccabe]. 

I committed this patch to {{trunk}} and {{branch-2}}.

 Add pluggable interface for reading replicas in DFSClient
 -

 Key: HDFS-8924
 URL: https://issues.apache.org/jira/browse/HDFS-8924
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 2.8.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 3.0.0, 2.8.0

 Attachments: HDFS-8924.001.patch, HDFS-8924.002.patch


 We should add a pluggable interface for reading replicas in the DFSClient.  
 This could be used to implement short-circuit reads on systems without file 
 descriptors, or for other optimizations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8924) Add pluggable interface for reading replicas in DFSClient

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14707722#comment-14707722
 ] 

Hudson commented on HDFS-8924:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #292 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/292/])
HDFS-8924. Add pluggable interface for reading replicas in DFSClient. (Colin 
Patrick McCabe via Lei Xu) (lei: rev 7087e700e032dabc174ecc12b62c12e7d49b995f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestExternalBlockReader.java
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/datatransfer.proto
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ReplicaAccessor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Receiver.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ReplicaAccessorBuilder.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Op.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/ExternalBlockReader.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/DfsClientConf.java


 Add pluggable interface for reading replicas in DFSClient
 -

 Key: HDFS-8924
 URL: https://issues.apache.org/jira/browse/HDFS-8924
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 2.8.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 3.0.0, 2.8.0

 Attachments: HDFS-8924.001.patch, HDFS-8924.002.patch


 We should add a pluggable interface for reading replicas in the DFSClient.  
 This could be used to implement short-circuit reads on systems without file 
 descriptors, or for other optimizations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

