[jira] [Updated] (HDFS-9768) Reuse objectMapper instance in HDFS to improve the performance

2016-02-12 Thread Lin Yiqun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lin Yiqun updated HDFS-9768:

Attachment: HDFS-9768.003.patch

Thanks [~ajisakaa] for the review. Updated the latest patch.

> Reuse objectMapper instance in HDFS to improve the performance
> --
>
> Key: HDFS-9768
> URL: https://issues.apache.org/jira/browse/HDFS-9768
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: performance
>Affects Versions: 2.7.1
>Reporter: Lin Yiqun
>Assignee: Lin Yiqun
> Attachments: HDFS-9768.001.patch, HDFS-9768.002.patch, 
> HDFS-9768.003.patch
>
>
> HDFS-9724 reused the ObjectMapper instance to improve performance in 
> {{WebHDFS}}, but other places still construct ObjectMapper via 
> {{new ObjectMapper()}}. We probably need a comprehensive review across the 
> whole codebase to find this pattern and fix those call sites. MAPREDUCE-6626 
> and YARN-4668 track the same work for MapReduce and YARN.
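
To illustrate the pattern described above, a minimal sketch assuming Jackson's 
ObjectMapper (Hadoop 2.x uses the org.codehaus.jackson package; the idea is the 
same). The class name here is hypothetical; the point is that a single shared 
mapper, which is thread-safe once configured, replaces per-call construction.

{code}
import com.fasterxml.jackson.databind.ObjectMapper; // org.codehaus.jackson.map.ObjectMapper in Hadoop 2.x
import java.io.IOException;

public class JsonHelper {                  // hypothetical helper, for illustration only
  // Shared, reusable instance: constructing an ObjectMapper is relatively
  // expensive, and the instance is thread-safe after configuration.
  private static final ObjectMapper MAPPER = new ObjectMapper();

  public static String toJson(Object value) throws IOException {
    return MAPPER.writeValueAsString(value);
  }

  // The anti-pattern this issue removes: a fresh ObjectMapper on every call.
  public static String toJsonSlow(Object value) throws IOException {
    return new ObjectMapper().writeValueAsString(value);
  }
}
{code}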



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9794) Streamer threads may leak if failure happens when closing the striped outputstream

2016-02-12 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-9794:
--
Hadoop Flags: Reviewed
 Component/s: hdfs-client

+1 patch looks good.

> Streamer threads may leak if failure happens when closing the striped 
> outputstream
> --
>
> Key: HDFS-9794
> URL: https://issues.apache.org/jira/browse/HDFS-9794
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Namit Maheshwari
>Assignee: Jing Zhao
>Priority: Critical
> Attachments: HDFS-9794.000.patch, HDFS-9794.001.patch
>
>
> When closing the DFSStripedOutputStream, if failures happen while flushing 
> out the data/parity blocks, the streamer threads will not be closed.
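
A rough sketch of the fix idea, not the attached patch; the method and helper 
names below are simplified and partly hypothetical. The shape of the change is 
to shut the streamer threads down in a finally block so they stop even when the 
final flush throws.

{code}
// Sketch only; the real DFSStripedOutputStream close path differs in detail.
@Override
public void close() throws IOException {
  try {
    flushAllDataAndParityBlocks();   // hypothetical name; this step may throw
  } finally {
    // Always stop the streamer threads, even if the flush above failed,
    // otherwise the threads leak.
    for (StripedDataStreamer streamer : streamers) {
      streamer.close();              // hypothetical shutdown call
    }
  }
}
{code}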



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9754) Avoid unnecessary getBlockCollection calls in BlockManager

2016-02-12 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-9754:
--
Hadoop Flags: Reviewed

+1 the new patch looks good!

> Avoid unnecessary getBlockCollection calls in BlockManager
> --
>
> Key: HDFS-9754
> URL: https://issues.apache.org/jira/browse/HDFS-9754
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-9754.000.patch, HDFS-9754.001.patch, 
> HDFS-9754.002.patch
>
>
> Currently BlockManager calls {{Namesystem#getBlockCollection}} in order to:
> 1. check if the block has already been abandoned
> 2. identify the storage policy of the block
> 3. meta save
> For #1 we can use BlockInfo's internal state instead of checking if the 
> corresponding file still exists.
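
A rough sketch of the idea behind #1, with hypothetical accessor names; the 
actual BlockManager/BlockInfo code may differ.

{code}
// Before: look up the owning file just to learn whether the block is abandoned.
//   BlockCollection bc = namesystem.getBlockCollection(block);
//   if (bc == null) { /* block was abandoned */ }

// After (sketch): let BlockInfo answer the question from its own state,
// avoiding the extra namesystem lookup.
if (blockInfo.isDeleted()) {     // hypothetical accessor on BlockInfo
  return;                        // block no longer belongs to any file
}
{code}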



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9787) SNNs stop uploading FSImage to ANN once isPrimaryCheckPointer changed to false.

2016-02-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15144349#comment-15144349
 ] 

Hadoop QA commented on HDFS-9787:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
54s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 48s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: patch generated 0 
new + 26 unchanged - 1 fixed = 26 total (was 27) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 54s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_72. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 50m 12s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 128m 20s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_72 Failed junit tests | hadoop.fs.TestHdfsNativeCodeLoader |
|   | hadoop.hdfs.server.datanode.TestBlockScanner |
| JDK v1.7.0_95 Failed junit tests | hadoop.fs.TestHdfsNativeCodeLoader |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12787633/HDFS-9787-v004.patch |
| JIRA Issue | HDFS-9787 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4ad408d30f95 3.13.0-36-lowlatency #63-Ubuntu SMP

[jira] [Commented] (HDFS-9768) Reuse objectMapper instance in HDFS to improve the performance

2016-02-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15144413#comment-15144413
 ] 

Hadoop QA commented on HDFS-9768:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 11s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 17s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 7s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 2s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 46s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_72. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 51m 31s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_72. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 55s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 49m 8s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {c

[jira] [Commented] (HDFS-9711) Integrate CSRF prevention filter in WebHDFS.

2016-02-12 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15144513#comment-15144513
 ] 

Larry McCay commented on HDFS-9711:
---

I am much more inclined to try and make v004 work than go back to v003.

What do you think about going with option #2 and also pulling 
handleHttpInteraction out into a CsrfUtils class? That would make it less odd 
that it is all encapsulated in the same impl, and a little clearer that the 
handler is used by multiple classes.

Perhaps CsrfUtils.handleRestHttpInteraction(HttpInteraction interaction), with 
the anticipation that a Csrf.handleWebAppHttpInteraction(HttpInteraction 
interaction) could follow later?

The webapp one would have to be able to compare a session value of the header 
to the actual value sent by the client - which would be a new constructor 
argument on ServletFilterHttpInteraction/NettyHttpInteraction.

We could also just overload the method with an additional parameter for the 
value to check against and leave it as handleHttpInteraction(HttpInteraction 
interaction, String nonce).

Anyway, I think that some simple separation with a Utils class would help make 
it more readable as well.
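
A rough sketch of the separation being suggested; class, method, and package 
names here follow the comment above but are assumptions, not the actual patch.

{code}
import java.io.IOException;
import javax.servlet.ServletException;
// HttpInteraction is the abstraction discussed above; package path assumed here.
import org.apache.hadoop.security.http.RestCsrfPreventionFilter.HttpInteraction;

public final class CsrfUtils {             // hypothetical utility class
  private CsrfUtils() {}

  /** REST case: shared handler for both the servlet-filter and Netty paths. */
  public static void handleRestHttpInteraction(HttpInteraction interaction)
      throws IOException, ServletException {
    // sketch: reject state-changing requests that lack the expected CSRF
    // header, otherwise let the request proceed
  }

  /** Web-app case (future): additionally compare against a session nonce. */
  public static void handleHttpInteraction(HttpInteraction interaction,
      String nonce) throws IOException, ServletException {
    // sketch: same checks as above, plus the header value must equal the nonce
  }
}
{code}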

> Integrate CSRF prevention filter in WebHDFS.
> 
>
> Key: HDFS-9711
> URL: https://issues.apache.org/jira/browse/HDFS-9711
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode, webhdfs
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-9711.001.patch, HDFS-9711.002.patch, 
> HDFS-9711.003.patch, HDFS-9711.004.patch
>
>
> HADOOP-12691 introduced a filter in Hadoop Common to help REST APIs guard 
> against cross-site request forgery attacks.  This issue tracks integration of 
> that filter in WebHDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7452) Can we skip getCorruptFiles() call for standby NameNode..?

2016-02-12 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-7452:
---
Status: Patch Available  (was: Open)

> Can we skip getCorruptFiles() call for standby NameNode..?
> --
>
> Key: HDFS-7452
> URL: https://issues.apache.org/jira/browse/HDFS-7452
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Trivial
> Attachments: HDFS-7452.patch
>
>
> Saw the following WARN logs in the standby NameNode logs:
> {noformat}
> 2014-11-27 17:50:32,497 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:50:42,557 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:50:52,617 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:00,058 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:00,117 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:02,678 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:12,738 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:22,798 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:30,058 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:30,119 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> {noformat}
> Do we need to make this call on the standby NameNode? I feel it might not be 
> required. Can we make this check state-aware? Please let me know if I am wrong.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7452) Can we skip getCorruptFiles() call for standby NameNode..?

2016-02-12 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-7452:
---
Attachment: HDFS-7452.patch

> Can we skip getCorruptFiles() call for standby NameNode..?
> --
>
> Key: HDFS-7452
> URL: https://issues.apache.org/jira/browse/HDFS-7452
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Trivial
> Attachments: HDFS-7452.patch
>
>
> Saw the following WARN logs in the standby NameNode logs:
> {noformat}
> 2014-11-27 17:50:32,497 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:50:42,557 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:50:52,617 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:00,058 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:00,117 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:02,678 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:12,738 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:22,798 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:30,058 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:30,119 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> {noformat}
> Do we need to make this call on the standby NameNode? I feel it might not be 
> required. Can we make this check state-aware? Please let me know if I am wrong.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7452) Can we skip getCorruptFiles() call for standby NameNode..?

2016-02-12 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15144728#comment-15144728
 ] 

Brahma Reddy Battula commented on HDFS-7452:


Uploaded the patch to eliminate the log spam on the standby NameNode. Kindly review.
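
A minimal sketch of the kind of change, only to show the idea; the attached 
patch may do this differently, and the variable names are assumptions.

{code}
// Hypothetical sketch around a caller of getCorruptFiles(); only the active
// NameNode supports this READ operation.
if (!namesystem.isInStandbyState()) {
  corruptFiles = namesystem.getCorruptFiles();
} else {
  // Standby NameNode: skip the call instead of logging a WARN on every attempt.
  corruptFiles = Collections.emptyList();
}
{code}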

> Can we skip getCorruptFiles() call for standby NameNode..?
> --
>
> Key: HDFS-7452
> URL: https://issues.apache.org/jira/browse/HDFS-7452
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Trivial
> Attachments: HDFS-7452.patch
>
>
> Saw the following WARN logs in the standby NameNode logs:
> {noformat}
> 2014-11-27 17:50:32,497 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:50:42,557 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:50:52,617 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:00,058 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:00,117 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:02,678 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:12,738 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:22,798 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:30,058 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> 2014-11-27 17:51:30,119 | WARN  | 512264920@qtp-429668078-606 | Get corrupt 
> file blocks returned error: Operation category READ is not supported in state 
> standby | 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getCorruptFiles(FSNamesystem.java:6916)
> {noformat}
> Do we need to make this call on the standby NameNode? I feel it might not be 
> required. Can we make this check state-aware? Please let me know if I am wrong.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9700) DFSClient and DFSOutputStream do not respect TCP_NODELAY config in two spots

2016-02-12 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15144739#comment-15144739
 ] 

Masatake Iwasaki commented on HDFS-9700:


+1. I will commit this to branch-2.8 and above if there is no further comment. 
Thanks for the update, [~ghelmling].

> DFSClient and DFSOutputStream do not respect TCP_NODELAY config in two spots
> 
>
> Key: HDFS-9700
> URL: https://issues.apache.org/jira/browse/HDFS-9700
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.1, 2.6.3
>Reporter: Gary Helmling
>Assignee: Gary Helmling
> Attachments: HDFS-9700-branch-2.7.002.patch, 
> HDFS-9700-branch-2.7.003.patch, HDFS-9700-v1.patch, HDFS-9700-v2.patch, 
> HDFS-9700.002.patch, HDFS-9700.003.patch, HDFS-9700.004.patch, 
> HDFS-9700_branch-2.7-v2.patch, HDFS-9700_branch-2.7.patch
>
>
> In {{DFSClient.connectToDN()}} and 
> {{DFSOutputStream.createSocketForPipeline()}}, we never call 
> {{setTcpNoDelay()}} on the constructed socket before sending.  In both cases, 
> we should respect the value of ipc.client.tcpnodelay in the configuration.
> While this applies whether security is enabled or not, it seems to have a 
> bigger impact on latency when security is enabled.
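
For illustration, a sketch of the kind of change described above. The 
configuration key is the one quoted in the description; the surrounding 
variable names and the default value shown are assumptions.

{code}
// Sketch: right after creating the socket used to talk to the DataNode and
// before any data is sent, honor ipc.client.tcpnodelay.
Socket sock = socketFactory.createSocket();
// Default shown here is only a placeholder; the real default comes from the IPC client.
boolean tcpNoDelay = conf.getBoolean("ipc.client.tcpnodelay", false);
sock.setTcpNoDelay(tcpNoDelay);   // disables Nagle's algorithm when true
{code}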



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9700) DFSClient and DFSOutputStream do not respect TCP_NODELAY config in two spots

2016-02-12 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15144743#comment-15144743
 ] 

Masatake Iwasaki commented on HDFS-9700:


I would like to update the title and type of this issue, because it is an 
improvement to DataTransferProtocol rather than a bug fix in IPC.

> DFSClient and DFSOutputStream do not respect TCP_NODELAY config in two spots
> 
>
> Key: HDFS-9700
> URL: https://issues.apache.org/jira/browse/HDFS-9700
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.1, 2.6.3
>Reporter: Gary Helmling
>Assignee: Gary Helmling
> Attachments: HDFS-9700-branch-2.7.002.patch, 
> HDFS-9700-branch-2.7.003.patch, HDFS-9700-v1.patch, HDFS-9700-v2.patch, 
> HDFS-9700.002.patch, HDFS-9700.003.patch, HDFS-9700.004.patch, 
> HDFS-9700_branch-2.7-v2.patch, HDFS-9700_branch-2.7.patch
>
>
> In {{DFSClient.connectToDN()}} and 
> {{DFSOutputStream.createSocketForPipeline()}}, we never call 
> {{setTcpNoDelay()}} on the constructed socket before sending.  In both cases, 
> we should respect the value of ipc.client.tcpnodelay in the configuration.
> While this applies whether security is enabled or not, it seems to have a 
> bigger impact on latency when security is enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9700) DFSClient and DFSOutputStream should set TCP_NODELAY on sockets for DataTransferProtocol

2016-02-12 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HDFS-9700:
---
Summary: DFSClient and DFSOutputStream should set TCP_NODELAY on sockets 
for DataTransferProtocol  (was: DFSClient and DFSOutputStream do not respect 
TCP_NODELAY config in two spots)

> DFSClient and DFSOutputStream should set TCP_NODELAY on sockets for 
> DataTransferProtocol
> 
>
> Key: HDFS-9700
> URL: https://issues.apache.org/jira/browse/HDFS-9700
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.1, 2.6.3
>Reporter: Gary Helmling
>Assignee: Gary Helmling
> Attachments: HDFS-9700-branch-2.7.002.patch, 
> HDFS-9700-branch-2.7.003.patch, HDFS-9700-v1.patch, HDFS-9700-v2.patch, 
> HDFS-9700.002.patch, HDFS-9700.003.patch, HDFS-9700.004.patch, 
> HDFS-9700_branch-2.7-v2.patch, HDFS-9700_branch-2.7.patch
>
>
> In {{DFSClient.connectToDN()}} and 
> {{DFSOutputStream.createSocketForPipeline()}}, we never call 
> {{setTcpNoDelay()}} on the constructed socket before sending.  In both cases, 
> we should respect the value of ipc.client.tcpnodelay in the configuration.
> While this applies whether security is enabled or not, it seems to have a 
> bigger impact on latency when security is enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9700) DFSClient and DFSOutputStream should set TCP_NODELAY on sockets for DataTransferProtocol

2016-02-12 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HDFS-9700:
---
Issue Type: Improvement  (was: Bug)

> DFSClient and DFSOutputStream should set TCP_NODELAY on sockets for 
> DataTransferProtocol
> 
>
> Key: HDFS-9700
> URL: https://issues.apache.org/jira/browse/HDFS-9700
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.7.1, 2.6.3
>Reporter: Gary Helmling
>Assignee: Gary Helmling
> Attachments: HDFS-9700-branch-2.7.002.patch, 
> HDFS-9700-branch-2.7.003.patch, HDFS-9700-v1.patch, HDFS-9700-v2.patch, 
> HDFS-9700.002.patch, HDFS-9700.003.patch, HDFS-9700.004.patch, 
> HDFS-9700_branch-2.7-v2.patch, HDFS-9700_branch-2.7.patch
>
>
> In {{DFSClient.connectToDN()}} and 
> {{DFSOutputStream.createSocketForPipeline()}}, we never call 
> {{setTcpNoDelay()}} on the constructed socket before sending.  In both cases, 
> we should respect the value of ipc.client.tcpnodelay in the configuration.
> While this applies whether security is enabled or not, it seems to have a 
> bigger impact on latency when security is enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9425) Expose number of blocks per volume as a metric

2016-02-12 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-9425:
---
Attachment: HDFS-9425-004.patch

> Expose number of blocks per volume as a metric
> --
>
> Key: HDFS-9425
> URL: https://issues.apache.org/jira/browse/HDFS-9425
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9425-002.patch, HDFS-9425-003.patch, 
> HDFS-9425-004.patch, HDFS-9425.patch
>
>
> It would be helpful for users to know the usage in terms of the number of blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9425) Expose number of blocks per volume as a metric

2016-02-12 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15144763#comment-15144763
 ] 

Brahma Reddy Battula commented on HDFS-9425:


[~vinayrpet] thanks for reviewing. Uploaded the patch to address the above 
comments. Kindly review.

> Expose number of blocks per volume as a metric
> --
>
> Key: HDFS-9425
> URL: https://issues.apache.org/jira/browse/HDFS-9425
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9425-002.patch, HDFS-9425-003.patch, 
> HDFS-9425-004.patch, HDFS-9425.patch
>
>
> It would be helpful for users to know the usage in terms of the number of blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9798) TestHdfsNativeCodeLoader fails

2016-02-12 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HDFS-9798:
---

 Summary: TestHdfsNativeCodeLoader fails
 Key: HDFS-9798
 URL: https://issues.apache.org/jira/browse/HDFS-9798
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Akira AJISAKA


TestHdfsNativeCodeLoader fails intermittently in Jenkins.
* 
https://builds.apache.org/job/PreCommit-HDFS-Build/14473/testReport/org.apache.hadoop.fs/TestHdfsNativeCodeLoader/testNativeCodeLoaded/
* 
https://builds.apache.org/job/PreCommit-HDFS-Build/14475/testReport/org.apache.hadoop.fs/TestHdfsNativeCodeLoader/testNativeCodeLoaded/


Error message
{noformat}
TestNativeCodeLoader: libhadoop.so testing was required, but libhadoop.so was 
not loaded.  LD_LIBRARY_PATH = 
${env.LD_LIBRARY_PATH}:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/native/target/usr/local/lib:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/../../hadoop-common-project/hadoop-common/target/native/target/usr/local/lib
{noformat}

Stacktrace
{noformat}
java.lang.AssertionError: TestNativeCodeLoader: libhadoop.so testing was 
required, but libhadoop.so was not loaded.  LD_LIBRARY_PATH = 
${env.LD_LIBRARY_PATH}:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/native/target/usr/local/lib:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/../../hadoop-common-project/hadoop-common/target/native/target/usr/local/lib
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.hadoop.fs.TestHdfsNativeCodeLoader.testNativeCodeLoaded(TestHdfsNativeCodeLoader.java:46)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9773) Remove dead code related to SimulatedFSDataset in tests

2016-02-12 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-9773:
---
Attachment: HDFS-9773-003.patch

> Remove dead code related to SimulatedFSDataset in tests
> ---
>
> Key: HDFS-9773
> URL: https://issues.apache.org/jira/browse/HDFS-9773
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-9773-002.patch, HDFS-9773-003.patch, HDFS-9773.patch
>
>
> There is some dead code
> {code}
>   final boolean simulatedStorage = false;
>   if (simulatedStorage) {
> SimulatedFSDataset.setFactory(conf);
>   }
> {code}
> in TestShortCircuitLocalRead, TestFileAppend, TestFileAppend2, 
> TestFileAppend4, and TestLargeBlock.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9773) Remove dead code related to SimulatedFSDataset in tests

2016-02-12 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15144798#comment-15144798
 ] 

Brahma Reddy Battula commented on HDFS-9773:


[~ajisakaa] thanks for the review. Uploaded the patch. Kindly review.

> Remove dead code related to SimulatedFSDataset in tests
> ---
>
> Key: HDFS-9773
> URL: https://issues.apache.org/jira/browse/HDFS-9773
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-9773-002.patch, HDFS-9773-003.patch, HDFS-9773.patch
>
>
> There is some dead code
> {code}
>   final boolean simulatedStorage = false;
>   if (simulatedStorage) {
> SimulatedFSDataset.setFactory(conf);
>   }
> {code}
> in TestShortCircuitLocalRead, TestFileAppend, TestFileAppend2, 
> TestFileAppend4, and TestLargeBlock.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9768) Reuse objectMapper instance in HDFS to improve the performance

2016-02-12 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15144805#comment-15144805
 ] 

Akira AJISAKA commented on HDFS-9768:
-

+1, the test failure looks unrelated to the patch.

> Reuse objectMapper instance in HDFS to improve the performance
> --
>
> Key: HDFS-9768
> URL: https://issues.apache.org/jira/browse/HDFS-9768
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: performance
>Affects Versions: 2.7.1
>Reporter: Lin Yiqun
>Assignee: Lin Yiqun
> Attachments: HDFS-9768.001.patch, HDFS-9768.002.patch, 
> HDFS-9768.003.patch
>
>
> HDFS-9724 reused the ObjectMapper instance to improve performance in 
> {{WebHDFS}}, but other places still construct ObjectMapper via 
> {{new ObjectMapper()}}. We probably need a comprehensive review across the 
> whole codebase to find this pattern and fix those call sites. MAPREDUCE-6626 
> and YARN-4668 track the same work for MapReduce and YARN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-9798) TestHdfsNativeCodeLoader fails

2016-02-12 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HDFS-9798.
-
Resolution: Duplicate

Hi [~ajisakaa].  A recent Yetus change is preventing pre-commit from building 
libhadoop.so before running the HDFS tests.  We're tracking the fix in 
YETUS-281, and there is a patch in progress.

> TestHdfsNativeCodeLoader fails
> --
>
> Key: HDFS-9798
> URL: https://issues.apache.org/jira/browse/HDFS-9798
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Akira AJISAKA
>
> TestHdfsNativeCodeLoader fails intermittently in Jenkins.
> * 
> https://builds.apache.org/job/PreCommit-HDFS-Build/14473/testReport/org.apache.hadoop.fs/TestHdfsNativeCodeLoader/testNativeCodeLoaded/
> * 
> https://builds.apache.org/job/PreCommit-HDFS-Build/14475/testReport/org.apache.hadoop.fs/TestHdfsNativeCodeLoader/testNativeCodeLoaded/
> Error message
> {noformat}
> TestNativeCodeLoader: libhadoop.so testing was required, but libhadoop.so was 
> not loaded.  LD_LIBRARY_PATH = 
> ${env.LD_LIBRARY_PATH}:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/native/target/usr/local/lib:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/../../hadoop-common-project/hadoop-common/target/native/target/usr/local/lib
> {noformat}
> Stacktrace
> {noformat}
> java.lang.AssertionError: TestNativeCodeLoader: libhadoop.so testing was 
> required, but libhadoop.so was not loaded.  LD_LIBRARY_PATH = 
> ${env.LD_LIBRARY_PATH}:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/native/target/usr/local/lib:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/../../hadoop-common-project/hadoop-common/target/native/target/usr/local/lib
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.fs.TestHdfsNativeCodeLoader.testNativeCodeLoaded(TestHdfsNativeCodeLoader.java:46)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9798) TestHdfsNativeCodeLoader fails

2016-02-12 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15144826#comment-15144826
 ] 

Akira AJISAKA commented on HDFS-9798:
-

Thanks [~cnauroth]!

> TestHdfsNativeCodeLoader fails
> --
>
> Key: HDFS-9798
> URL: https://issues.apache.org/jira/browse/HDFS-9798
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Akira AJISAKA
>
> TestHdfsNativeCodeLoader fails intermittently in Jenkins.
> * 
> https://builds.apache.org/job/PreCommit-HDFS-Build/14473/testReport/org.apache.hadoop.fs/TestHdfsNativeCodeLoader/testNativeCodeLoaded/
> * 
> https://builds.apache.org/job/PreCommit-HDFS-Build/14475/testReport/org.apache.hadoop.fs/TestHdfsNativeCodeLoader/testNativeCodeLoaded/
> Error message
> {noformat}
> TestNativeCodeLoader: libhadoop.so testing was required, but libhadoop.so was 
> not loaded.  LD_LIBRARY_PATH = 
> ${env.LD_LIBRARY_PATH}:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/native/target/usr/local/lib:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/../../hadoop-common-project/hadoop-common/target/native/target/usr/local/lib
> {noformat}
> Stacktrace
> {noformat}
> java.lang.AssertionError: TestNativeCodeLoader: libhadoop.so testing was 
> required, but libhadoop.so was not loaded.  LD_LIBRARY_PATH = 
> ${env.LD_LIBRARY_PATH}:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/native/target/usr/local/lib:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/../../hadoop-common-project/hadoop-common/target/native/target/usr/local/lib
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.fs.TestHdfsNativeCodeLoader.testNativeCodeLoaded(TestHdfsNativeCodeLoader.java:46)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9533) seen_txid in the shared edits directory is modified during bootstrapping

2016-02-12 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15144837#comment-15144837
 ] 

Kihwal Lee commented on HDFS-9533:
--

I haven't seen it actually cause problems, other than our internal monitoring 
complaining about it. Since it is rare to run bootstrapStandby against existing 
HA clusters, we don't have many data points.

> seen_txid in the shared edits directory is modified during bootstrapping
> 
>
> Key: HDFS-9533
> URL: https://issues.apache.org/jira/browse/HDFS-9533
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, namenode
>Affects Versions: 2.6.0
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Fix For: 3.0.0, 2.7.3
>
> Attachments: HDFS-9533.patch
>
>
> The last known transaction id is stored in the seen_txid file of all known 
> directories of a NNStorage when starting a new edit segment. However, we have 
> seen a case where it contains an id that falls in the middle of an edit 
> segment. This was the seen_txid file in the shared edits directory.  The 
> active namenode's local storage contained a valid-looking seen_txid.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9793) upgrade or remove guava dependency

2016-02-12 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15144843#comment-15144843
 ] 

Colin Patrick McCabe commented on HDFS-9793:


We should upgrade to Guava 15 in branch-3 (trunk).

> upgrade or remove guava dependency
> --
>
> Key: HDFS-9793
> URL: https://issues.apache.org/jira/browse/HDFS-9793
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: build
>Reporter: PJ Fanning
>
> http://mvnrepository.com/artifact/org.apache.hadoop/hadoop-hdfs/2.7.1 
> indicates a dependency on guava 11.0.2.
> The StopWatch API changed in recent guava versions.
> Could we remove the dependency, or upgrade to guava 15, which still has the old 
> deprecated StopWatch constructor? Or alternatively, upgrade to the latest guava 
> jar and modify any code that is affected.
> http://docs.guava-libraries.googlecode.com/git-history/v16.0/javadoc/index.html
> This would be a development-line-only change.
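
For context, a small sketch of the Stopwatch API difference referred to above: 
Guava 15 deprecated the public Stopwatch constructors in favor of static 
factory methods (the wrapper class below is illustrative only).

{code}
import java.util.concurrent.TimeUnit;
import com.google.common.base.Stopwatch;

class StopwatchExample {                       // illustrative only
  long timeSomething(Runnable work) {
    // Guava 11.x style, deprecated in 15.0 and removed in later releases:
    //   Stopwatch sw = new Stopwatch().start();

    // Guava 15+ style:
    Stopwatch sw = Stopwatch.createStarted();
    work.run();
    return sw.elapsed(TimeUnit.MILLISECONDS);
  }
}
{code}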



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9768) Reuse objectMapper instance in HDFS to improve the performance

2016-02-12 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-9768:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-2, and branch-2.8. Thanks [~linyiqun] for the 
contribution!

> Reuse objectMapper instance in HDFS to improve the performance
> --
>
> Key: HDFS-9768
> URL: https://issues.apache.org/jira/browse/HDFS-9768
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: performance
>Affects Versions: 2.7.1
>Reporter: Lin Yiqun
>Assignee: Lin Yiqun
> Fix For: 2.8.0
>
> Attachments: HDFS-9768.001.patch, HDFS-9768.002.patch, 
> HDFS-9768.003.patch
>
>
> HDFS-9724 reused the ObjectMapper instance to improve performance in 
> {{WebHDFS}}, but other places still construct ObjectMapper via 
> {{new ObjectMapper()}}. We probably need a comprehensive review across the 
> whole codebase to find this pattern and fix those call sites. MAPREDUCE-6626 
> and YARN-4668 track the same work for MapReduce and YARN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (HDFS-6953) HDFS file append failing in single node configuration

2016-02-12 Thread Matt Foley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley closed HDFS-6953.


> HDFS file append failing in single node configuration
> -
>
> Key: HDFS-6953
> URL: https://issues.apache.org/jira/browse/HDFS-6953
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: Ubuntu 12.01, Apache Hadoop 2.5.0 single node 
> configuration
>Reporter: Vladislav Falfushinsky
> Attachments: Main.java, core-site.xml, hdfs-site.xml, test_hdfs.c
>
>
> The following issue happens in both the fully distributed and the single-node 
> setup. I have looked at the thread (https://issues.apache.org/jira/browse/HDFS-4600) 
> about a similar issue in a multi-node cluster and made some changes to my 
> configuration; however, it did not change anything. The configuration files 
> and application sources are attached.
> Steps to reproduce:
> $ ./test_hdfs
> 2014-08-27 14:23:08,472 WARN  [Thread-5] hdfs.DFSClient 
> (DFSOutputStream.java:run(628)) - DataStreamer Exception
> java.io.IOException: Failed to replace a bad datanode on the existing 
> pipeline due to no more good datanodes being available to try. (Nodes: 
> current=[127.0.0.1:50010], original=[127.0.0.1:50010]). The current failed 
> datanode replacement policy is DEFAULT, and a client may configure this via 
> 'dfs.client.block.write.replace-datanode-on-failure.policy' in its 
> configuration.
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:969)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1035)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1184)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:532)
> FSDataOutputStream#close error:
> java.io.IOException: Failed to replace a bad datanode on the existing 
> pipeline due to no more good datanodes being available to try. (Nodes: 
> current=[127.0.0.1:50010], original=[127.0.0.1:50010]). The current failed 
> datanode replacement policy is DEFAULT, and a client may configure this via 
> 'dfs.client.block.write.replace-datanode-on-failure.policy' in its 
> configuration.
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:969)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1035)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1184)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:532)
> I have tried to run a simple example in Java that uses the append function. It 
> failed too.
> I have tried to read the Hadoop configuration settings from the Java 
> application. It showed the default values, not the settings mentioned in the 
> core-site.xml and hdfs-site.xml files.
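
On a single-node setup, the property named in the exception is usually the knob 
to adjust. A hedged sketch of the client-side configuration follows; the class 
name is illustrative and the values shown are only one possible choice.

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class AppendOnSingleNode {              // illustrative only
  public static void main(String[] args) throws IOException {
    // On a one-DataNode cluster there is no spare node to swap into the write
    // pipeline, so the DEFAULT replacement policy cannot succeed.  Relaxing it
    // on the client avoids the "Failed to replace a bad datanode" error.
    Configuration conf = new Configuration();
    conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
    // Alternatively, disable the feature entirely:
    // conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", false);
    FileSystem fs = FileSystem.get(conf);
    // ... then fs.append(...) as in the attached test_hdfs.c / Main.java
  }
}
{code}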



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-6953) HDFS file append failing in single node configuration

2016-02-12 Thread Matt Foley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley resolved HDFS-6953.
--
Resolution: Invalid

Corrected resolution field to 'invalid' since there was no fix.

> HDFS file append failing in single node configuration
> -
>
> Key: HDFS-6953
> URL: https://issues.apache.org/jira/browse/HDFS-6953
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: Ubuntu 12.01, Apache Hadoop 2.5.0 single node 
> configuration
>Reporter: Vladislav Falfushinsky
> Attachments: Main.java, core-site.xml, hdfs-site.xml, test_hdfs.c
>
>
> The following issue happens in both the fully distributed and the single-node 
> setup. I have looked at the thread (https://issues.apache.org/jira/browse/HDFS-4600) 
> about a similar issue in a multi-node cluster and made some changes to my 
> configuration; however, it did not change anything. The configuration files 
> and application sources are attached.
> Steps to reproduce:
> $ ./test_hdfs
> 2014-08-27 14:23:08,472 WARN  [Thread-5] hdfs.DFSClient 
> (DFSOutputStream.java:run(628)) - DataStreamer Exception
> java.io.IOException: Failed to replace a bad datanode on the existing 
> pipeline due to no more good datanodes being available to try. (Nodes: 
> current=[127.0.0.1:50010], original=[127.0.0.1:50010]). The current failed 
> datanode replacement policy is DEFAULT, and a client may configure this via 
> 'dfs.client.block.write.replace-datanode-on-failure.policy' in its 
> configuration.
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:969)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1035)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1184)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:532)
> FSDataOutputStream#close error:
> java.io.IOException: Failed to replace a bad datanode on the existing 
> pipeline due to no more good datanodes being available to try. (Nodes: 
> current=[127.0.0.1:50010], original=[127.0.0.1:50010]). The current failed 
> datanode replacement policy is DEFAULT, and a client may configure this via 
> 'dfs.client.block.write.replace-datanode-on-failure.policy' in its 
> configuration.
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:969)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1035)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1184)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:532)
> I have tried to run a simple example in Java that uses the append function. It 
> failed too.
> I have tried to read the Hadoop configuration settings from the Java 
> application. It showed the default values, not the settings mentioned in the 
> core-site.xml and hdfs-site.xml files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HDFS-6953) HDFS file append failing in single node configuration

2016-02-12 Thread Matt Foley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley reopened HDFS-6953:
--

> HDFS file append failing in single node configuration
> -
>
> Key: HDFS-6953
> URL: https://issues.apache.org/jira/browse/HDFS-6953
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: Ubuntu 12.01, Apache Hadoop 2.5.0 single node 
> configuration
>Reporter: Vladislav Falfushinsky
> Attachments: Main.java, core-site.xml, hdfs-site.xml, test_hdfs.c
>
>
> The following issue happens in both the fully distributed and the single-node 
> setup. I have looked at the thread (https://issues.apache.org/jira/browse/HDFS-4600) 
> about a similar issue in a multi-node cluster and made some changes to my 
> configuration; however, it did not change anything. The configuration files 
> and application sources are attached.
> Steps to reproduce:
> $ ./test_hdfs
> 2014-08-27 14:23:08,472 WARN  [Thread-5] hdfs.DFSClient 
> (DFSOutputStream.java:run(628)) - DataStreamer Exception
> java.io.IOException: Failed to replace a bad datanode on the existing 
> pipeline due to no more good datanodes being available to try. (Nodes: 
> current=[127.0.0.1:50010], original=[127.0.0.1:50010]). The current failed 
> datanode replacement policy is DEFAULT, and a client may configure this via 
> 'dfs.client.block.write.replace-datanode-on-failure.policy' in its 
> configuration.
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:969)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1035)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1184)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:532)
> FSDataOutputStream#close error:
> java.io.IOException: Failed to replace a bad datanode on the existing 
> pipeline due to no more good datanodes being available to try. (Nodes: 
> current=[127.0.0.1:50010], original=[127.0.0.1:50010]). The current failed 
> datanode replacement policy is DEFAULT, and a client may configure this via 
> 'dfs.client.block.write.replace-datanode-on-failure.policy' in its 
> configuration.
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:969)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1035)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1184)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:532)
> I have tried to run a simple example in Java that uses the append function. It 
> failed too.
> I have tried to read the Hadoop configuration settings from the Java 
> application. It showed the default values, not the settings mentioned in the 
> core-site.xml and hdfs-site.xml files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9768) Reuse objectMapper instance in HDFS to improve the performance

2016-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15144878#comment-15144878
 ] 

Hudson commented on HDFS-9768:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9293 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9293/])
HDFS-9768. Reuse ObjectMapper instance in HDFS to improve the (aajisaka: rev 
e6a7044b8530afded8f8e86ff309dd0e4d39238a)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/oauth2/ConfRefreshTokenBasedAccessTokenProvider.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/oauth2/CredentialBasedAccessTokenProvider.java


> Reuse objectMapper instance in HDFS to improve the performance
> --
>
> Key: HDFS-9768
> URL: https://issues.apache.org/jira/browse/HDFS-9768
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: performance
>Affects Versions: 2.7.1
>Reporter: Lin Yiqun
>Assignee: Lin Yiqun
> Fix For: 2.8.0
>
> Attachments: HDFS-9768.001.patch, HDFS-9768.002.patch, 
> HDFS-9768.003.patch
>
>
> HDFS-9724 reused the ObjectMapper instance to improve the 
> performance of {{WebHDFS}}. But other places still construct ObjectMapper 
> via {{new ObjectMapper()}}. We probably need a comprehensive review across the 
> whole codebase to look for this pattern and fix those call sites. MAPREDUCE-6626 
> and YARN-4668 track the same work in other projects.
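
For illustration, the fix pattern is to share one pre-configured mapper instead of 
constructing a new one per call; a minimal sketch using the Jackson 2 API (not code 
from the patch):

{code:title=JsonHelper.java (sketch)}
import java.io.IOException;
import java.util.Map;

import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonHelper {
  // One shared, pre-configured instance. ObjectMapper is thread-safe once
  // configured; creating a new one per call is the costly pattern being removed.
  private static final ObjectMapper MAPPER = new ObjectMapper();

  public static String toJson(Object value) throws IOException {
    return MAPPER.writeValueAsString(value);
  }

  public static Map<?, ?> parse(String json) throws IOException {
    return MAPPER.readValue(json, Map.class);
  }
}
{code}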



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9797) RequestHedgingProxyProvider is too verbose with Standby exceptions

2016-02-12 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HDFS-9797:
--
Status: Patch Available  (was: In Progress)

> RequestHedgingProxyProvider is too verbose with Standby exceptions
> --
>
> Key: HDFS-9797
> URL: https://issues.apache.org/jira/browse/HDFS-9797
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
>Priority: Minor
> Attachments: HDFS-9797-v000.patch, HDFS-9797-v001.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> {{RequestHedgingProxyProvider}} tries to connect to all the Namenodes and 
> reports the standby exceptions thrown by all the other 
> namenodes. There is no point in reporting a standby exception when it is 
> expected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8831) Trash Support for deletion in HDFS encryption zone

2016-02-12 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15144891#comment-15144891
 ] 

Zhe Zhang commented on HDFS-8831:
-

[~xyao] [~arpitagarwal] [~leftnoteasy] [~andrew.wang] :

Since the new {{getCurrentTrashDir}} method now throws an IOException, it 
could break other applications using it. Should we mark this change as 
incompatible? Alternatively, we should look at how to safely handle the 
exception within HDFS.

> Trash Support for deletion in HDFS encryption zone
> --
>
> Key: HDFS-8831
> URL: https://issues.apache.org/jira/browse/HDFS-8831
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 2.8.0
>
> Attachments: HDFS-8831-10152015.pdf, HDFS-8831.00.patch, 
> HDFS-8831.01.patch, HDFS-8831.02.patch, HDFS-8831.03.patch, 
> HDFS-8831.04.patch, HDFS-8831.05.patch
>
>
> Currently, "Soft Delete" is only supported if the whole encryption zone is 
> deleted. If you delete files within the zone with the trash feature enabled, you 
> will get an error similar to the following: 
> {code}
> rm: Failed to move to trash: hdfs://HW11217.local:9000/z1_1/startnn.sh: 
> /z1_1/startnn.sh can't be moved from an encryption zone.
> {code}
> This JIRA proposes to support trash for the deletion of files within an 
> encryption zone. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9533) seen_txid in the shared edits directory is modified during bootstrapping

2016-02-12 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15144910#comment-15144910
 ] 

Zhe Zhang commented on HDFS-9533:
-

Thanks for clarifying this Kihwal.

> seen_txid in the shared edits directory is modified during bootstrapping
> 
>
> Key: HDFS-9533
> URL: https://issues.apache.org/jira/browse/HDFS-9533
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, namenode
>Affects Versions: 2.6.0
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Fix For: 3.0.0, 2.7.3
>
> Attachments: HDFS-9533.patch
>
>
> The last known transaction id is stored in the seen_txid file of all known 
> directories of a NNStorage when starting a new edit segment. However, we have 
> seen a case where it contains an id that falls in the middle of an edit 
> segment. This was the seen_txid file in the shared edits directory. The 
> active namenode's local storage contained a valid-looking seen_txid.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7452) Can we skip getCorruptFiles() call for standby NameNode..?

2016-02-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15144956#comment-15144956
 ] 

Hadoop QA commented on HDFS-7452:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
51s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 20s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: patch generated 1 new + 
186 unchanged - 0 fixed = 187 total (was 186) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 9s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_72. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 50m 24s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 127m 2s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_72 Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestComputeInvalidateWork |
|   | hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.fs.TestHdfsNativeCodeLoader |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.hdfs.qjournal.client.TestQuorumJournalManager |
|   | hadoop.fs.TestHdfsNativeCodeLoader |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/ji

[jira] [Commented] (HDFS-9711) Integrate CSRF prevention filter in WebHDFS.

2016-02-12 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15144977#comment-15144977
 ] 

Chris Nauroth commented on HDFS-9711:
-

[~lmccay], thanks again.  I think we're basically on the same page.

bq. What do you think about going with option #2 and also pulling the 
handleHttpInteraction out into a CsrfUtils class.

The implementation of {{handleHttpInteraction}} needs access to the filter 
initialization parameters like the header and the methods to ignore.  I think 
we'd have to refactor all of the data members of the filter into the proposed 
{{CsrfUtils}} class.  If the class is actually holding state like that, then 
something like {{CsrfContext}} would likely be a better name.  
{{RestCsrfPreventionFilter}} would then be a very thin shim calling into 
{{CsrfContext}}.  The DataNode wouldn't use the filter class at all.  Instead, 
it would work with {{CsrfContext}}.  As a side benefit, the DataNode would no 
longer need to stub an implementation of {{FilterConfig}}.
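
To make the idea concrete, here is a rough sketch of the shape I have in mind; the 
class name, signatures, and error handling are illustrative only, not the final patch:

{code:title=CsrfContext.java (sketch)}
import java.util.HashSet;
import java.util.Locale;
import java.util.Set;

/** Holds the CSRF enforcement state so non-servlet callers (e.g. the DataNode's
 *  Netty handler) can reuse the same logic without stubbing a FilterConfig. */
public class CsrfContext {

  /** Minimal abstraction over an HTTP request/response exchange. */
  public interface HttpInteraction {
    String getMethod();
    String getHeader(String name);
    void proceed();
    void sendError(int code, String message);
  }

  private final String headerName;
  private final Set<String> methodsToIgnore;

  public CsrfContext(String headerName, Set<String> methodsToIgnore) {
    this.headerName = headerName;
    this.methodsToIgnore = new HashSet<>(methodsToIgnore);
  }

  /** Applies the CSRF rule: ignored methods and requests carrying the custom
   *  header pass through; everything else is rejected. */
  public void handleHttpInteraction(HttpInteraction interaction) {
    String method = interaction.getMethod().toUpperCase(Locale.ROOT);
    if (methodsToIgnore.contains(method)
        || interaction.getHeader(headerName) != null) {
      interaction.proceed();
    } else {
      interaction.sendError(400, "Missing required header: " + headerName);
    }
  }
}
{code}

{{RestCsrfPreventionFilter}} would then only translate servlet requests into this 
interface, and the DataNode would construct the context directly.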

Do you think it makes sense to go this far with the refactoring?  If so, I can 
put together a new patch revision.

> Integrate CSRF prevention filter in WebHDFS.
> 
>
> Key: HDFS-9711
> URL: https://issues.apache.org/jira/browse/HDFS-9711
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode, webhdfs
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-9711.001.patch, HDFS-9711.002.patch, 
> HDFS-9711.003.patch, HDFS-9711.004.patch
>
>
> HADOOP-12691 introduced a filter in Hadoop Common to help REST APIs guard 
> against cross-site request forgery attacks.  This issue tracks integration of 
> that filter in WebHDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9799) Reimplement getCurrentTrashDir to remove incompatibility

2016-02-12 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-9799:

Status: Patch Available  (was: Open)

> Reimplement getCurrentTrashDir to remove incompatibility
> 
>
> Key: HDFS-9799
> URL: https://issues.apache.org/jira/browse/HDFS-9799
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>Priority: Blocker
>
> HDFS-8831 changed the signature of {{TrashPolicy#getCurrentTrashDir}} by 
> adding an IOException. This breaks other applications using this public API. 
> This JIRA aims to reimplement the logic to safely handle the IOException 
> within HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9799) Reimplement getCurrentTrashDir to remove incompatibility

2016-02-12 Thread Zhe Zhang (JIRA)
Zhe Zhang created HDFS-9799:
---

 Summary: Reimplement getCurrentTrashDir to remove incompatibility
 Key: HDFS-9799
 URL: https://issues.apache.org/jira/browse/HDFS-9799
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.8.0
Reporter: Zhe Zhang
Assignee: Zhe Zhang
Priority: Blocker


HDFS-8831 changed the signature of {{TrashPolicy#getCurrentTrashDir}} by adding 
an IOException. This breaks other applications using this public API. This JIRA 
aims to reimplement the logic to safely handle the IOException within HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9799) Reimplement getCurrentTrashDir to remove incompatibility

2016-02-12 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-9799:

Attachment: HDFS-9799.00.patch

The source of the {{IOException}} is {{getEZForPath}}. So when 
{{getEZForPath}} throws an exception -- meaning the EZ of the given path 
cannot be determined at the time of the call -- we should just return the trash 
dir under the user's home directory. Even if the path does belong to an EZ, this 
only means the {{rm}} will fail later. With the added WARN message, the calling 
application should still be able to obtain the return code of the {{rm}} failure.
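
Roughly, the fallback logic looks like the following; this is an illustrative sketch 
(class and method names follow the HDFS client API as I understand it), not the 
attached patch itself:

{code:title=TrashDirFallback.java (sketch)}
import java.io.IOException;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.EncryptionZone;

public class TrashDirFallback {
  private static final Log LOG = LogFactory.getLog(TrashDirFallback.class);
  private static final String TRASH = ".Trash";

  static Path currentTrashDir(DistributedFileSystem dfs, Path path, String user) {
    try {
      EncryptionZone ez = dfs.getEZForPath(path);
      if (ez != null) {
        // Path is inside an encryption zone: use the per-zone trash dir.
        return new Path(ez.getPath(), TRASH + "/" + user);
      }
    } catch (IOException e) {
      // Cannot determine the zone right now; fall back to the home trash dir.
      // If the path really is in a zone, the later rename into trash fails and
      // the caller sees that failure instead.
      LOG.warn("Could not determine encryption zone for " + path
          + "; using the default trash directory", e);
    }
    return new Path(dfs.getHomeDirectory(), TRASH);
  }
}
{code}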

[~andrew.wang] [~atm] Could you take a look? Thanks.

> Reimplement getCurrentTrashDir to remove incompatibility
> 
>
> Key: HDFS-9799
> URL: https://issues.apache.org/jira/browse/HDFS-9799
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>Priority: Blocker
> Attachments: HDFS-9799.00.patch
>
>
> HDFS-8831 changed the signature of {{TrashPolicy#getCurrentTrashDir}} by 
> adding an IOException. This breaks other applications using this public API. 
> This JIRA aims to reimplement the logic to safely handle the IOException 
> within HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9711) Integrate CSRF prevention filter in WebHDFS.

2016-02-12 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145008#comment-15145008
 ] 

Larry McCay commented on HDFS-9711:
---

No, I don't think it is necessary to go that far.

+1 for removing the anonymous inner classes.




> Integrate CSRF prevention filter in WebHDFS.
> 
>
> Key: HDFS-9711
> URL: https://issues.apache.org/jira/browse/HDFS-9711
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode, webhdfs
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-9711.001.patch, HDFS-9711.002.patch, 
> HDFS-9711.003.patch, HDFS-9711.004.patch
>
>
> HADOOP-12691 introduced a filter in Hadoop Common to help REST APIs guard 
> against cross-site request forgery attacks.  This issue tracks integration of 
> that filter in WebHDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9797) RequestHedgingProxyProvider is too verbose with Standby exceptions

2016-02-12 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145031#comment-15145031
 ] 

Arun Suresh commented on HDFS-9797:
---

It looks like the test case failures are unrelated, and since this is a logging 
improvement, it is not easily testable via unit tests.
[~elgoiri], can you confirm that it works as expected during manual tests?
I will commit after that.



> RequestHedgingProxyProvider is too verbose with Standby exceptions
> --
>
> Key: HDFS-9797
> URL: https://issues.apache.org/jira/browse/HDFS-9797
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
>Priority: Minor
> Attachments: HDFS-9797-v000.patch, HDFS-9797-v001.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> {{RequestHedgingProxyProvider}} tries to connect to all the Namenodes and 
> reports the standby exceptions thrown by all the other 
> namenodes. There is no point in reporting a standby exception when it is 
> expected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9644) Update encryption documentation to reflect nested EZs

2016-02-12 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-9644:

Issue Type: Improvement  (was: New Feature)

> Update encryption documentation to reflect nested EZs
> -
>
> Key: HDFS-9644
> URL: https://issues.apache.org/jira/browse/HDFS-9644
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, encryption
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-9644.00.patch, HDFS-9644.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9797) RequestHedgingProxyProvider is too verbose with Standby exceptions

2016-02-12 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145038#comment-15145038
 ] 

Inigo Goiri commented on HDFS-9797:
---

Yes, I tested with a few thousand containers; they no longer show the standby 
exceptions, only the regular exceptions.

> RequestHedgingProxyProvider is too verbose with Standby exceptions
> --
>
> Key: HDFS-9797
> URL: https://issues.apache.org/jira/browse/HDFS-9797
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
>Priority: Minor
> Attachments: HDFS-9797-v000.patch, HDFS-9797-v001.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> {{RequestHedgingProxyProvider}} tries to connect to all the Namenodes and 
> reports the standby exceptions thrown by all the other 
> namenodes. There is no point in reporting a standby exception when it is 
> expected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-9124) NullPointerException when underreplicated blocks are there

2016-02-12 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu resolved HDFS-9124.
-
   Resolution: Duplicate
Fix Version/s: 2.7.4

It was fixed in HDFS-9574.

> NullPointerException when underreplicated blocks are there
> --
>
> Key: HDFS-9124
> URL: https://issues.apache.org/jira/browse/HDFS-9124
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Syed Akram
>Assignee: Syed Akram
> Fix For: 2.7.4
>
>
> 2015-09-22 09:48:47,830 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: dn1:50010:DataXceiver error 
> processing WRITE_BLOCK operation  src: /dn1:42973 dst: /dn2:50010
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:186)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:677)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
> at java.lang.Thread.run(Thread.java:744)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9700) DFSClient and DFSOutputStream should set TCP_NODELAY on sockets for DataTransferProtocol

2016-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145043#comment-15145043
 ] 

Hudson commented on HDFS-9700:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9294 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9294/])
HDFS-9700. DFSClient and DFSOutputStream should set TCP_NODELAY on (iwasakims: 
rev 372d1302c63c6f49f99be5766c5da9647ebd9ca6)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/DfsClientConf.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java


> DFSClient and DFSOutputStream should set TCP_NODELAY on sockets for 
> DataTransferProtocol
> 
>
> Key: HDFS-9700
> URL: https://issues.apache.org/jira/browse/HDFS-9700
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.7.1, 2.6.3
>Reporter: Gary Helmling
>Assignee: Gary Helmling
> Attachments: HDFS-9700-branch-2.7.002.patch, 
> HDFS-9700-branch-2.7.003.patch, HDFS-9700-v1.patch, HDFS-9700-v2.patch, 
> HDFS-9700.002.patch, HDFS-9700.003.patch, HDFS-9700.004.patch, 
> HDFS-9700_branch-2.7-v2.patch, HDFS-9700_branch-2.7.patch
>
>
> In {{DFSClient.connectToDN()}} and 
> {{DFSOutputStream.createSocketForPipeline()}}, we never call 
> {{setTcpNoDelay()}} on the constructed socket before sending.  In both cases, 
> we should respect the value of ipc.client.tcpnodelay in the configuration.
> While this applies whether security is enabled or not, it seems to have a 
> bigger impact on latency when security is enabled.
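
For illustration, the gist of the change is to honor the client's no-delay setting 
when these sockets are created; a minimal sketch (configuration key taken from the 
description above, default value illustrative, not the committed code):

{code:title=TcpNoDelaySketch.java (sketch)}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

import org.apache.hadoop.conf.Configuration;

public class TcpNoDelaySketch {
  /** Creates a DataTransferProtocol-style socket, honoring ipc.client.tcpnodelay. */
  static Socket createDataSocket(Configuration conf, InetSocketAddress dnAddr,
      int connectTimeoutMs) throws IOException {
    Socket sock = new Socket();
    // Disable Nagle's algorithm when the client is configured for low latency.
    boolean noDelay = conf.getBoolean("ipc.client.tcpnodelay", true);
    sock.setTcpNoDelay(noDelay);
    sock.connect(dnAddr, connectTimeoutMs);
    return sock;
  }
}
{code}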



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9644) Update encryption documentation to reflect nested EZs

2016-02-12 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-9644:

   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

> Update encryption documentation to reflect nested EZs
> -
>
> Key: HDFS-9644
> URL: https://issues.apache.org/jira/browse/HDFS-9644
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, encryption
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Fix For: 2.8.0
>
> Attachments: HDFS-9644.00.patch, HDFS-9644.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9644) Update encryption documentation to reflect nested EZs

2016-02-12 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-9644:

Hadoop Flags: Reviewed

Thanks Andrew! Committed to trunk, branch-2, and branch-2.8.

> Update encryption documentation to reflect nested EZs
> -
>
> Key: HDFS-9644
> URL: https://issues.apache.org/jira/browse/HDFS-9644
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, encryption
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Fix For: 2.8.0
>
> Attachments: HDFS-9644.00.patch, HDFS-9644.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9799) Reimplement getCurrentTrashDir to remove incompatibility

2016-02-12 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145055#comment-15145055
 ] 

Andrew Wang commented on HDFS-9799:
---

Throwing a new checked exception is definitely incompatible, good find.

How about the {{getTrashRoots}} method that was added, which also throws 
IOException? It seems like we should make a similar change for parity.

> Reimplement getCurrentTrashDir to remove incompatibility
> 
>
> Key: HDFS-9799
> URL: https://issues.apache.org/jira/browse/HDFS-9799
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>Priority: Blocker
> Attachments: HDFS-9799.00.patch
>
>
> HDFS-8831 changed the signature of {{TrashPolicy#getCurrentTrashDir}} by 
> adding an IOException. This breaks other applications using this public API. 
> This JIRA aims to reimplement the logic to safely handle the IOException 
> within HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9644) Update encryption documentation to reflect nested EZs

2016-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145071#comment-15145071
 ] 

Hudson commented on HDFS-9644:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9295 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9295/])
HDFS-9644. Update encryption documentation to reflect nested EZs. (zhz) (zhz: 
rev b21bbe9ed1baae1a3b8b8dcb984f1d08930109a0)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/TransparentEncryption.md
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Update encryption documentation to reflect nested EZs
> -
>
> Key: HDFS-9644
> URL: https://issues.apache.org/jira/browse/HDFS-9644
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, encryption
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Fix For: 2.8.0
>
> Attachments: HDFS-9644.00.patch, HDFS-9644.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9794) Streamer threads may leak if failure happens when closing the striped outputstream

2016-02-12 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-9794:

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

I've committed this to trunk. Thanks for the review, Nicholas and Mingliang!

> Streamer threads may leak if failure happens when closing the striped 
> outputstream
> --
>
> Key: HDFS-9794
> URL: https://issues.apache.org/jira/browse/HDFS-9794
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Namit Maheshwari
>Assignee: Jing Zhao
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HDFS-9794.000.patch, HDFS-9794.001.patch
>
>
> When closing the DFSStripedOutputStream, if failures happen while flushing 
> out the data/parity blocks, the streamer threads will not be closed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9700) DFSClient and DFSOutputStream should set TCP_NODELAY on sockets for DataTransferProtocol

2016-02-12 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HDFS-9700:
---
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed to branch-2.8 and above. Thanks, [~ghelmling], [~liuml07] and 
[~cmccabe].

> DFSClient and DFSOutputStream should set TCP_NODELAY on sockets for 
> DataTransferProtocol
> 
>
> Key: HDFS-9700
> URL: https://issues.apache.org/jira/browse/HDFS-9700
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.7.1, 2.6.3
>Reporter: Gary Helmling
>Assignee: Gary Helmling
> Fix For: 2.8.0
>
> Attachments: HDFS-9700-branch-2.7.002.patch, 
> HDFS-9700-branch-2.7.003.patch, HDFS-9700-v1.patch, HDFS-9700-v2.patch, 
> HDFS-9700.002.patch, HDFS-9700.003.patch, HDFS-9700.004.patch, 
> HDFS-9700_branch-2.7-v2.patch, HDFS-9700_branch-2.7.patch
>
>
> In {{DFSClient.connectToDN()}} and 
> {{DFSOutputStream.createSocketForPipeline()}}, we never call 
> {{setTcpNoDelay()}} on the constructed socket before sending.  In both cases, 
> we should respect the value of ipc.client.tcpnodelay in the configuration.
> While this applies whether security is enabled or not, it seems to have a 
> bigger impact on latency when security is enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9794) Streamer threads may leak if failure happens when closing the striped outputstream

2016-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145106#comment-15145106
 ] 

Hudson commented on HDFS-9794:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9296 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9296/])
HDFS-9794. Streamer threads may leak if failure happens when closing the 
(jing9: rev f3c91a41a5bf16542eca7f09787eb1727fd18e08)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Streamer threads may leak if failure happens when closing the striped 
> outputstream
> --
>
> Key: HDFS-9794
> URL: https://issues.apache.org/jira/browse/HDFS-9794
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Namit Maheshwari
>Assignee: Jing Zhao
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HDFS-9794.000.patch, HDFS-9794.001.patch
>
>
> When closing the DFSStripedOutputStream, if failures happen while flushing 
> out the data/parity blocks, the streamer threads will not be closed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9754) Avoid unnecessary getBlockCollection calls in BlockManager

2016-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145105#comment-15145105
 ] 

Hudson commented on HDFS-9754:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9296 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9296/])
HDFS-9754. Avoid unnecessary getBlockCollection calls in BlockManager. (jing9: 
rev 972782d9568e0849484c027f27c1638ba50ec56e)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeMetadataConsistency.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/Namesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlocksMap.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirTruncateOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java


> Avoid unnecessary getBlockCollection calls in BlockManager
> --
>
> Key: HDFS-9754
> URL: https://issues.apache.org/jira/browse/HDFS-9754
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-9754.000.patch, HDFS-9754.001.patch, 
> HDFS-9754.002.patch
>
>
> Currently BlockManager calls {{Namesystem#getBlockCollection}} in order to:
> 1. check if the block has already been abandoned
> 2. identify the storage policy of the block
> 3. meta save
> For #1 we can use BlockInfo's internal state instead of checking if the 
> corresponding file still exists.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9797) Log Standby exceptions thrown by RequestHedgingProxyProvider as DEBUG

2016-02-12 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HDFS-9797:
--
Summary: Log Standby exceptions thrown by RequestHedgingProxyProvider as 
DEBUG  (was: RequestHedgingProxyProvider is too verbose with Standby exceptions)

> Log Standby exceptions thrown by RequestHedgingProxyProvider as DEBUG
> -
>
> Key: HDFS-9797
> URL: https://issues.apache.org/jira/browse/HDFS-9797
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
>Priority: Minor
> Attachments: HDFS-9797-v000.patch, HDFS-9797-v001.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> {{RequestHedgingProxyProvider}} tries to connect to all the Namenodes and 
> reports the standby exceptions thrown by all the other 
> namenodes. There is no point in reporting a standby exception when it is 
> expected.
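
For context, the fix amounts to demoting these expected standby exceptions to DEBUG 
while keeping unexpected failures at WARN; a rough illustration only (exception 
unwrapping simplified, not the committed code):

{code:title=HedgedLogging.java (sketch)}
import java.lang.reflect.InvocationTargetException;

import org.apache.hadoop.ipc.StandbyException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class HedgedLogging {
  private static final Logger LOG = LoggerFactory.getLogger(HedgedLogging.class);

  /** Logs a per-namenode failure from a hedged invocation at the right level. */
  static void logInvocationFailure(String namenodeId, Exception e) {
    Throwable cause = (e instanceof InvocationTargetException) ? e.getCause() : e;
    if (cause instanceof StandbyException) {
      // Expected whenever requests are raced against active and standby NNs.
      LOG.debug("Invocation returned standby exception on [{}]", namenodeId, cause);
    } else {
      LOG.warn("Invocation returned exception on [{}]", namenodeId, cause);
    }
  }
}
{code}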



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9797) Log Standby exceptions thrown by RequestHedgingProxyProvider at DEBUG Level

2016-02-12 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HDFS-9797:
--
Summary: Log Standby exceptions thrown by RequestHedgingProxyProvider at 
DEBUG Level  (was: Log Standby exceptions thrown by RequestHedgingProxyProvider 
as DEBUG)

> Log Standby exceptions thrown by RequestHedgingProxyProvider at DEBUG Level
> ---
>
> Key: HDFS-9797
> URL: https://issues.apache.org/jira/browse/HDFS-9797
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
>Priority: Minor
> Attachments: HDFS-9797-v000.patch, HDFS-9797-v001.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> {{RequestHedgingProxyProvider}} tries to connect to all the Namenodes and 
> reports the standby exceptions thrown by all the other 
> namenodes. There is no point in reporting a standby exception when it is 
> expected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9799) Reimplement getCurrentTrashDir to remove incompatibility

2016-02-12 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-9799:

Attachment: HDFS-9799.01.patch

Thanks Andrew for the review.

The IOException logic in {{getTrashRoots}} is a little more complicated. Even 
the unencrypted case can throw an exception, caused by {{exists}} etc. If we 
swallow those exceptions, we need to think about a meaningful return value 
(e.g., an empty collection?). Returning an empty collection also means we won't 
have a detailed exception telling us which path caused the problem.

> Reimplement getCurrentTrashDir to remove incompatibility
> 
>
> Key: HDFS-9799
> URL: https://issues.apache.org/jira/browse/HDFS-9799
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>Priority: Blocker
> Attachments: HDFS-9799.00.patch, HDFS-9799.01.patch
>
>
> HDFS-8831 changed the signature of {{TrashPolicy#getCurrentTrashDir}} by 
> adding an IOException. This breaks other applications using this public API. 
> This JIRA aims to reimplement the logic to safely handle the IOException 
> within HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9796) Add throttler for datanode bandwidth

2016-02-12 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145117#comment-15145117
 ] 

Arpit Agarwal commented on HDFS-9796:
-

Hi [~Guocui], can you add some more details to the description? What is the 
problem this is intended to solve? Thanks.

> Add throttler for datanode bandwidth
> 
>
> Key: HDFS-9796
> URL: https://issues.apache.org/jira/browse/HDFS-9796
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode
>Affects Versions: 2.7.0
>Reporter: Guocui Mi
>Priority: Minor
>
> Add throttler for datanode bandwidth



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9799) Reimplement getCurrentTrashDir to remove incompatibility

2016-02-12 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145121#comment-15145121
 ] 

Zhe Zhang commented on HDFS-9799:
-

The v01 patch added some formatting and Javadoc changes.

> Reimplement getCurrentTrashDir to remove incompatibility
> 
>
> Key: HDFS-9799
> URL: https://issues.apache.org/jira/browse/HDFS-9799
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>Priority: Blocker
> Attachments: HDFS-9799.00.patch, HDFS-9799.01.patch
>
>
> HDFS-8831 changed the signature of {{TrashPolicy#getCurrentTrashDir}} by 
> adding an IOException. This breaks other applications using this public API. 
> This JIRA aims to reimplement the logic to safely handle the IOException 
> within HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9797) Log Standby exceptions thrown by RequestHedgingProxyProvider at DEBUG Level

2016-02-12 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HDFS-9797:
--
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-2 and branch-2.8

> Log Standby exceptions thrown by RequestHedgingProxyProvider at DEBUG Level
> ---
>
> Key: HDFS-9797
> URL: https://issues.apache.org/jira/browse/HDFS-9797
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9797-v000.patch, HDFS-9797-v001.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> {{RequestHedgingProxyProvider}} tries to connect to all the Namenodes and 
> reports the standby exceptions thrown by all the other 
> namenodes. There is no point in reporting a standby exception when it is 
> expected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9425) Expose number of blocks per volume as a metric

2016-02-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145130#comment-15145130
 ] 

Hadoop QA commented on HDFS-9425:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 22s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 15s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 46s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 36s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 22s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 24s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: patch generated 1 new + 
150 unchanged - 0 fixed = 151 total (was 150) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 40s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 24s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 86m 39s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_72. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 34s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
25s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 197m 0s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_72 Failed junit tests | hadoop.fs.TestHdfsNativeCodeLoader |
|   | hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade |
|   | hadoop.hdfs.qjournal.client.TestQuorumJournalManager |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.TestFileAppend |
|   | hadoop.hdfs.TestDFSUpgradeFromImage |
|   | hadoop.hdfs.server.datanode.TestBlockScanner |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.fs.TestHdfsNativeCodeLoader |
|   | hadoo

[jira] [Commented] (HDFS-9797) Log Standby exceptions thrown by RequestHedgingProxyProvider at DEBUG Level

2016-02-12 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145132#comment-15145132
 ] 

Inigo Goiri commented on HDFS-9797:
---

Thank you [~asuresh] very much for the review and the commit!

> Log Standby exceptions thrown by RequestHedgingProxyProvider at DEBUG Level
> ---
>
> Key: HDFS-9797
> URL: https://issues.apache.org/jira/browse/HDFS-9797
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9797-v000.patch, HDFS-9797-v001.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> {{RequestHedgingProxyProvider}} tries to connect to all the Namenodes and 
> reports the standby exceptions thrown by all the other 
> namenodes. There is no point in reporting a standby exception when it is 
> expected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9688) Test the effect of nested encryption zones in HDFS downgrade

2016-02-12 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145147#comment-15145147
 ] 

Zhe Zhang commented on HDFS-9688:
-

So you mean complete deletion of nested EZs (instead of deleting into Trash)?

> Test the effect of nested encryption zones in HDFS downgrade
> 
>
> Key: HDFS-9688
> URL: https://issues.apache.org/jira/browse/HDFS-9688
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: encryption, test
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-9688-branch-2.6.00.patch, 
> HDFS-9688-branch-2.6.01.patch, HDFS-9688-branch-2.7.03.patch, 
> HDFS-9688-branch-2.8.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9688) Test the effect of nested encryption zones in HDFS downgrade

2016-02-12 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145155#comment-15145155
 ] 

Zhe Zhang commented on HDFS-9688:
-

Thanks Andrew for the suggestion re: pre-commit. I think it's worthwhile to copy 
the Docker image and enable 2.6/2.7 patches to be tested. I'll work on that.

> Test the effect of nested encryption zones in HDFS downgrade
> 
>
> Key: HDFS-9688
> URL: https://issues.apache.org/jira/browse/HDFS-9688
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: encryption, test
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-9688-branch-2.6.00.patch, 
> HDFS-9688-branch-2.6.01.patch, HDFS-9688-branch-2.7.03.patch, 
> HDFS-9688-branch-2.8.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9800) Add option to send BlockStateChange logs to alternate async appender

2016-02-12 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-9800:
---

 Summary: Add option to send BlockStateChange logs to alternate 
async appender
 Key: HDFS-9800
 URL: https://issues.apache.org/jira/browse/HDFS-9800
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.8.0
Reporter: Arpit Agarwal


BlockStateChange logs are usually suppressed in production to avoid the 
performance impact. It is often useful to enable them for troubleshooting, but 
the verbosity of the logs slows down the NameNode.

It would be good to have an option to send BlockStateChange logs to an 
alternate async appender to limit the performance impact. There would be no 
change to the default behavior.
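
To illustrate, the {{BlockStateChange}} logger could be routed to its own 
AsyncAppender so NameNode handler threads only pay for an in-memory enqueue. The 
sketch below uses the log4j 1.2 API programmatically purely for illustration; in 
practice this would be wired up through Hadoop's log4j configuration, and the 
buffer size and file name are placeholders:

{code:title=AsyncBlockStateChangeLogging.java (sketch)}
import java.io.IOException;

import org.apache.log4j.AsyncAppender;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;
import org.apache.log4j.RollingFileAppender;

public class AsyncBlockStateChangeLogging {
  public static void configure() throws IOException {
    // Dedicated file appender for block state change messages.
    RollingFileAppender file = new RollingFileAppender(
        new PatternLayout("%d{ISO8601} %p %c: %m%n"), "blockstatechange.log");

    // Wrap it in an AsyncAppender so logging calls return after an enqueue
    // instead of blocking on disk I/O.
    AsyncAppender async = new AsyncAppender();
    async.setBufferSize(1024);
    async.setBlocking(false);    // drop messages rather than stall the NameNode
    async.addAppender(file);

    Logger blockStateChange = Logger.getLogger("BlockStateChange");
    blockStateChange.setLevel(Level.DEBUG);
    blockStateChange.setAdditivity(false);  // keep it out of the main NN log
    blockStateChange.addAppender(async);
  }
}
{code}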



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9797) Log Standby exceptions thrown by RequestHedgingProxyProvider at DEBUG Level

2016-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145158#comment-15145158
 ] 

Hudson commented on HDFS-9797:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #9297 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9297/])
HDFS-9797. Log Standby exceptions thrown by RequestHedgingProxyProvider (arun 
suresh: rev 9fdfb546fb67526ba261da5cbd005f33e0f1d9e1)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RequestHedgingProxyProvider.java


> Log Standby exceptions thrown by RequestHedgingProxyProvider at DEBUG Level
> ---
>
> Key: HDFS-9797
> URL: https://issues.apache.org/jira/browse/HDFS-9797
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9797-v000.patch, HDFS-9797-v001.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> {{RequestHedgingProxyProvider}} tries to connect to all the Namenodes and 
> reports the standby exceptions thrown by all the other 
> namenodes. There is no point in reporting a standby exception when it is 
> expected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9711) Integrate CSRF prevention filter in WebHDFS.

2016-02-12 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-9711:

Attachment: HDFS-9711.005.patch

Here is patch v005.  The only changes since v004 are in 
{{RestCsrfPreventionFilter}} and {{RestCsrfPreventionFilterHandler}}.  I 
refactored the anonymous inner classes into named classes, and I also tried to 
clarify the phrasing of the comment on the {{HttpInteraction}} interface.

> Integrate CSRF prevention filter in WebHDFS.
> 
>
> Key: HDFS-9711
> URL: https://issues.apache.org/jira/browse/HDFS-9711
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode, webhdfs
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-9711.001.patch, HDFS-9711.002.patch, 
> HDFS-9711.003.patch, HDFS-9711.004.patch, HDFS-9711.005.patch
>
>
> HADOOP-12691 introduced a filter in Hadoop Common to help REST APIs guard 
> against cross-site request forgery attacks.  This issue tracks integration of 
> that filter in WebHDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9754) Avoid unnecessary getBlockCollection calls in BlockManager

2016-02-12 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-9754:

   Resolution: Fixed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

I've committed this to trunk and branch-2. Thanks for the review, [~szetszwo] 
and [~vinayrpet]!

> Avoid unnecessary getBlockCollection calls in BlockManager
> --
>
> Key: HDFS-9754
> URL: https://issues.apache.org/jira/browse/HDFS-9754
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: 2.9.0
>
> Attachments: HDFS-9754.000.patch, HDFS-9754.001.patch, 
> HDFS-9754.002.patch
>
>
> Currently BlockManager calls {{Namesystem#getBlockCollection}} in order to:
> 1. check if the block has already been abandoned
> 2. identify the storage policy of the block
> 3. meta save
> For #1 we can use BlockInfo's internal state instead of checking if the 
> corresponding file still exists.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9671) DiskBalancer : SubmitPlan implementation

2016-02-12 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145180#comment-15145180
 ] 

Lei (Eddy) Xu commented on HDFS-9671:
-

Hi, [~anu]

Thanks a lot for the updates. It LGTM overall. I will +1 after you address one 
small nit:

{code:title=DiskBalancer.java}
public void shutdown() {
try {
   lock.lock();
{code}

Could you move {{lock.lock()}} out of {{try..finally}}?
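
A minimal sketch of the suggested pattern (method body elided, not the actual patch):

{code:title=DiskBalancer.java (suggested pattern, sketch)}
public void shutdown() {
  lock.lock();   // acquire outside try: unlock() only runs if lock() succeeded
  try {
    // ... existing shutdown logic ...
  } finally {
    lock.unlock();
  }
}
{code}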

Thanks!


> DiskBalancer : SubmitPlan implementation 
> -
>
> Key: HDFS-9671
> URL: https://issues.apache.org/jira/browse/HDFS-9671
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: HDFS-9671-HDFS-1312.001.patch, 
> HDFS-9671-HDFS-1312.002.patch, HDFS-9671-HDFS-1312.003.patch, 
> HDFS-9671-HDFS-1312.004.patch
>
>
> Datanode side code for submit plan for diskbalancer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9773) Remove dead code related to SimulatedFSDataset in tests

2016-02-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145184#comment-15145184
 ] 

Hadoop QA commented on HDFS-9773:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 57s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 15s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 50s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 90m 14s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_72. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 86m 3s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
32s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 203m 18s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_72 Failed junit tests | 
hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.TestFileAppend |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
|   | hadoop.fs.TestHdfsNativeCodeLoader |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
| JDK v1.7.0_95 Failed junit tests | hadoop.hdfs.TestDFSUpgradeFromImage |
|   | hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | had

[jira] [Commented] (HDFS-9797) Log Standby exceptions thrown by RequestHedgingProxyProvider at DEBUG Level

2016-02-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145193#comment-15145193
 ] 

Hadoop QA commented on HDFS-9797:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 55s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 51s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 13s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 57m 48s 
{color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 148m 8s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | 
hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.hdfs.TestFileAppend |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12787628/HDFS-9797-v001.patch |
| JIRA Issue | HDFS-9797 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f37cc8d928f9 

[jira] [Commented] (HDFS-9799) Reimplement getCurrentTrashDir to remove incompatibility

2016-02-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145215#comment-15145215
 ] 

Hadoop QA commented on HDFS-9799:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 15s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
56s {color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 43s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 15s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 55s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 0s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
20s {color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 51s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 51s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 27s 
{color} | {color:red} root: patch generated 1 new + 191 unchanged - 0 fixed = 
192 total (was 191) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 2s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 3s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 51s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_72. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 7s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_72. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 0s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 25s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
28s {color} | {color:green} Patch does not generate ASF License warn

[jira] [Moved] (HDFS-9801) ReconfigurableBase should update the cached configuration

2016-02-12 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal moved HADOOP-12746 to HDFS-9801:
--

Affects Version/s: (was: 2.8.0)
   2.8.0
  Key: HDFS-9801  (was: HADOOP-12746)
  Project: Hadoop HDFS  (was: Hadoop Common)

> ReconfigurableBase should update the cached configuration
> -
>
> Key: HDFS-9801
> URL: https://issues.apache.org/jira/browse/HDFS-9801
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HADOOP-12476.02.patch, HADOOP-12746.01.patch, 
> HADOOP-12746.03.patch, HADOOP-12746.04.patch
>
>
> {{ReconfigurableBase#startReconfigurationTask}} does not update its cached 
> configuration after a property is reconfigured. This means that configuration 
> values queried via {{getConf().get(...)}} can be outdated. One way to fix it 
> is to have {{ReconfigurableBase#reconfigurePropertyImpl}} return the new 
> effective value of the config setting, so that the caller (i.e. 
> ReconfigurableBase) can use it to update the configuration.
> See discussion on HDFS-7035 for more background.
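
A minimal sketch of the approach described above (the helper name {{applyReconfiguration}} and the exact control flow are illustrative, not the actual patch):

{code}
// reconfigurePropertyImpl returns the value it actually applied; the base
// class then writes that value back into its cached Configuration so that
// getConf().get(property) stays in sync with the running state.
protected abstract String reconfigurePropertyImpl(String property, String newVal)
    throws ReconfigurationException;

private void applyReconfiguration(String property, String newVal)
    throws ReconfigurationException {
  String effective = reconfigurePropertyImpl(property, newVal);
  if (effective == null) {
    getConf().unset(property);           // reverted to the default value
  } else {
    getConf().set(property, effective);  // cache the new effective value
  }
}
{code}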



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7766) Add a flag to WebHDFS op=CREATE to not respond with a 307 redirect

2016-02-12 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HDFS-7766:
---
Attachment: HDFS-7766.05.patch

Here's a rebased patch 

> Add a flag to WebHDFS op=CREATE to not respond with a 307 redirect
> --
>
> Key: HDFS-7766
> URL: https://issues.apache.org/jira/browse/HDFS-7766
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Attachments: HDFS-7766.01.patch, HDFS-7766.02.patch, 
> HDFS-7766.03.patch, HDFS-7766.04.patch, HDFS-7766.04.patch, HDFS-7766.05.patch
>
>
> Please see 
> https://issues.apache.org/jira/browse/HDFS-7588?focusedCommentId=14276192&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14276192
> A backwards compatible manner we can fix this is to add a flag on the request 
> which would disable the redirect, i.e.
> {noformat}
> curl -i -X PUT 
> "http://:/webhdfs/v1/?op=CREATE[&noredirect=]
> {noformat}
> returns 200 with the DN location in the response.
> This would allow the Browser clients to get the redirect URL to put the file 
> to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9801) ReconfigurableBase should update the cached configuration

2016-02-12 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-9801:

Component/s: datanode

> ReconfigurableBase should update the cached configuration
> -
>
> Key: HDFS-9801
> URL: https://issues.apache.org/jira/browse/HDFS-9801
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HADOOP-12476.02.patch, HADOOP-12746.01.patch, 
> HADOOP-12746.03.patch, HADOOP-12746.04.patch
>
>
> {{ReconfigurableBase#startReconfigurationTask}} does not update its cached 
> configuration after a property is reconfigured. This means that configuration 
> values queried via {{getConf().get(...)}} can be outdated. One way to fix it 
> is to have {{ReconfigurableBase#reconfigurePropertyImpl}} return the new 
> effective value of the config setting, so that the caller (i.e. 
> ReconfigurableBase) can use it to update the configuration.
> See discussion on HDFS-7035 for more background.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9801) ReconfigurableBase should update the cached configuration

2016-02-12 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145259#comment-15145259
 ] 

Arpit Agarwal commented on HDFS-9801:
-

Thanks for the code review [~jingzhao]. I moved this to HDFS.

The checkstyle issues are bogus. I will rerun all the failed tests locally to 
make sure they are unrelated and commit this shortly.

> ReconfigurableBase should update the cached configuration
> -
>
> Key: HDFS-9801
> URL: https://issues.apache.org/jira/browse/HDFS-9801
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HADOOP-12476.02.patch, HADOOP-12746.01.patch, 
> HADOOP-12746.03.patch, HADOOP-12746.04.patch
>
>
> {{ReconfigurableBase#startReconfigurationTask}} does not update its cached 
> configuration after a property is reconfigured. This means that configuration 
> values queried via {{getConf().get(...)}} can be outdated. One way to fix it 
> is to have {{ReconfigurableBase#reconfigurePropertyImpl}} return the new 
> effective value of the config setting, so that the caller (i.e. 
> ReconfigurableBase) can use it to update the configuration.
> See discussion on HDFS-7035 for more background.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9801) ReconfigurableBase should update the cached configuration

2016-02-12 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-9801:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed for 2.8.0. Thank you for the code reviews Jing and Xiaobing.

> ReconfigurableBase should update the cached configuration
> -
>
> Key: HDFS-9801
> URL: https://issues.apache.org/jira/browse/HDFS-9801
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.8.0
>
> Attachments: HADOOP-12476.02.patch, HADOOP-12746.01.patch, 
> HADOOP-12746.03.patch, HADOOP-12746.04.patch
>
>
> {{ReconfigurableBase#startReconfigurationTask}} does not update its cached 
> configuration after a property is reconfigured. This means that configuration 
> values queried via {{getConf().get(...)}} can be outdated. One way to fix it 
> is to have {{ReconfigurableBase#reconfigurePropertyImpl}} return the new 
> effective value of the config setting, so that the caller (i.e. 
> ReconfigurableBase) can use it to update the configuration.
> See discussion on HDFS-7035 for more background.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9801) ReconfigurableBase should update the cached configuration

2016-02-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145269#comment-15145269
 ] 

ASF GitHub Bot commented on HDFS-9801:
--

Github user arp7 closed the pull request at:

https://github.com/apache/hadoop/pull/73


> ReconfigurableBase should update the cached configuration
> -
>
> Key: HDFS-9801
> URL: https://issues.apache.org/jira/browse/HDFS-9801
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.8.0
>
> Attachments: HADOOP-12476.02.patch, HADOOP-12746.01.patch, 
> HADOOP-12746.03.patch, HADOOP-12746.04.patch
>
>
> {{ReconfigurableBase#startReconfigurationTask}} does not update its cached 
> configuration after a property is reconfigured. This means that configuration 
> values queried via {{getConf().get(...)}} can be outdated. One way to fix it 
> is to have {{ReconfigurableBase#reconfigurePropertyImpl}} return the new 
> effective value of the config setting, so that the caller (i.e. 
> ReconfigurableBase) can use it to update the configuration.
> See discussion on HDFS-7035 for more background.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9802) Selectively save blocks to trash dir during rolling upgrades

2016-02-12 Thread Kihwal Lee (JIRA)
Kihwal Lee created HDFS-9802:


 Summary: Selectively save blocks to trash dir during rolling 
upgrades
 Key: HDFS-9802
 URL: https://issues.apache.org/jira/browse/HDFS-9802
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee


Currently datanodes save any invalidated blocks to the trash directory during a 
rolling upgrade. Compared to the "previous" directory for a full upgrade, the 
trash can grow quickly. This is especially true when blocks are created and 
then quickly deleted.

Since trash is mainly meant to be a defense against a faulty new namenode and 
used for rolling back, saving new blocks in trash does not add much value. If 
anything, datanodes run out of space more quickly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9802) Selectively save blocks to trash dir during rolling upgrades

2016-02-12 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145279#comment-15145279
 ] 

Kihwal Lee commented on HDFS-9802:
--

It should be possible to selectively save blocks to trash based on the file's 
time stamp. Although {{RollingUpgradeStatus}} does not provide the start time, 
we can either add a field or make the datanode record the time when it gets a 
relevant heartbeat response.
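
A rough illustration of that idea (all names here, e.g. {{rollingUpgradeStartTime}}, are hypothetical):

{code}
// A block created after the rolling upgrade started is not needed for
// rollback, so it can be deleted outright instead of being moved to trash.
private boolean shouldSaveToTrash(File blockFile, long rollingUpgradeStartTime) {
  return blockFile.lastModified() < rollingUpgradeStartTime;
}
{code}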

> Selectively save blocks to trash dir during rolling upgrades
> 
>
> Key: HDFS-9802
> URL: https://issues.apache.org/jira/browse/HDFS-9802
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>
> Currently datanodes save any invalidated blocks to the trash directory during 
> a rolling upgrade. Compared to the "previous" directory for a full upgrade, the 
> trash can grow quickly. This is especially true when blocks are created and 
> then quickly deleted.
> Since trash is mainly meant to be a defense against a faulty new namenode and 
> used for rolling back, saving new blocks in trash does not add much value. If 
> anything, datanodes run out of space more quickly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9801) ReconfigurableBase should update the cached configuration

2016-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145284#comment-15145284
 ] 

Hudson commented on HDFS-9801:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9298 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9298/])
HDFS-9801. ReconfigurableBase should update the cached configuration. (arp: rev 
1de1641f17f890059e85e57304ce33c7070a08de)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeHotSwapVolumes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Reconfigurable.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestReconfiguration.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ReconfigurableBase.java


> ReconfigurableBase should update the cached configuration
> -
>
> Key: HDFS-9801
> URL: https://issues.apache.org/jira/browse/HDFS-9801
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.8.0
>
> Attachments: HADOOP-12476.02.patch, HADOOP-12746.01.patch, 
> HADOOP-12746.03.patch, HADOOP-12746.04.patch
>
>
> {{ReconfigurableBase#startReconfigurationTask}} does not update its cached 
> configuration after a property is reconfigured. This means that configuration 
> values queried via {{getConf().get(...)}} can be outdated. One way to fix it 
> is to have {{ReconfigurableBase#reconfigurePropertyImpl}} return the new 
> effective value of the config setting, so that the caller (i.e. 
> ReconfigurableBase) can use it to update the configuration.
> See discussion on HDFS-7035 for more background.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8831) Trash Support for deletion in HDFS encryption zone

2016-02-12 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8831:

Hadoop Flags: Incompatible change,Reviewed  (was: Reviewed)

> Trash Support for deletion in HDFS encryption zone
> --
>
> Key: HDFS-8831
> URL: https://issues.apache.org/jira/browse/HDFS-8831
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 2.8.0
>
> Attachments: HDFS-8831-10152015.pdf, HDFS-8831.00.patch, 
> HDFS-8831.01.patch, HDFS-8831.02.patch, HDFS-8831.03.patch, 
> HDFS-8831.04.patch, HDFS-8831.05.patch
>
>
> Currently, "Soft Delete" is only supported if the whole encryption zone is 
> deleted. If you delete files whinin the zone with trash feature enabled, you 
> will get error similar to the following 
> {code}
> rm: Failed to move to trash: hdfs://HW11217.local:9000/z1_1/startnn.sh: 
> /z1_1/startnn.sh can't be moved from an encryption zone.
> {code}
> This JIRA is proposed to support trash for deletion of files within 
> encryption zone. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8831) Trash Support for deletion in HDFS encryption zone

2016-02-12 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145304#comment-15145304
 ] 

Arpit Agarwal commented on HDFS-8831:
-

Hi [~zhz], we should definitely fix it since we don't want an incompatible 
change in 2.8.0. It looks like {{TrashPolicyDefault#getCurrentTrashDir}} never 
throws IOException so we can simply remove the {{throws IOException}} 
annotation. Thanks for reporting this.

> Trash Support for deletion in HDFS encryption zone
> --
>
> Key: HDFS-8831
> URL: https://issues.apache.org/jira/browse/HDFS-8831
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 2.8.0
>
> Attachments: HDFS-8831-10152015.pdf, HDFS-8831.00.patch, 
> HDFS-8831.01.patch, HDFS-8831.02.patch, HDFS-8831.03.patch, 
> HDFS-8831.04.patch, HDFS-8831.05.patch
>
>
> Currently, "Soft Delete" is only supported if the whole encryption zone is 
> deleted. If you delete files whinin the zone with trash feature enabled, you 
> will get error similar to the following 
> {code}
> rm: Failed to move to trash: hdfs://HW11217.local:9000/z1_1/startnn.sh: 
> /z1_1/startnn.sh can't be moved from an encryption zone.
> {code}
> This JIRA is proposed to support trash for deletion of files within 
> encryption zone. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-8831) Trash Support for deletion in HDFS encryption zone

2016-02-12 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145304#comment-15145304
 ] 

Arpit Agarwal edited comment on HDFS-8831 at 2/12/16 9:12 PM:
--

Hi [~zhz], we should definitely fix it since we don't want an incompatible 
change in 2.8.0. {{DistributedFileSystem#getTrashRoot}} does throw so we should 
just fix it. We can remove the IncompatibleChange label after the fix. I will 
review your fix patch. Thanks for reporting this.


was (Author: arpitagarwal):
Hi [~zhz], we should definitely fix it since we don't want an incompatible 
change in 2.8.0. It looks like {{TrashPolicyDefault#getCurrentTrashDir}} never 
throws IOException so we can simply remove the {{throws IOException}} 
annotation. Thanks for reporting this.

> Trash Support for deletion in HDFS encryption zone
> --
>
> Key: HDFS-8831
> URL: https://issues.apache.org/jira/browse/HDFS-8831
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 2.8.0
>
> Attachments: HDFS-8831-10152015.pdf, HDFS-8831.00.patch, 
> HDFS-8831.01.patch, HDFS-8831.02.patch, HDFS-8831.03.patch, 
> HDFS-8831.04.patch, HDFS-8831.05.patch
>
>
> Currently, "Soft Delete" is only supported if the whole encryption zone is 
> deleted. If you delete files whinin the zone with trash feature enabled, you 
> will get error similar to the following 
> {code}
> rm: Failed to move to trash: hdfs://HW11217.local:9000/z1_1/startnn.sh: 
> /z1_1/startnn.sh can't be moved from an encryption zone.
> {code}
> This JIRA is proposed to support trash for deletion of files within 
> encryption zone. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8831) Trash Support for deletion in HDFS encryption zone

2016-02-12 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145310#comment-15145310
 ] 

Zhe Zhang commented on HDFS-8831:
-

Thanks Arpit. I think we should also decide whether to revert this change and 
redo it (combined with HDFS-9799 change), or keep them as 2 separate commits. 
If keeping them separate, technically we still need to mark this as 
incompatible right?

> Trash Support for deletion in HDFS encryption zone
> --
>
> Key: HDFS-8831
> URL: https://issues.apache.org/jira/browse/HDFS-8831
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 2.8.0
>
> Attachments: HDFS-8831-10152015.pdf, HDFS-8831.00.patch, 
> HDFS-8831.01.patch, HDFS-8831.02.patch, HDFS-8831.03.patch, 
> HDFS-8831.04.patch, HDFS-8831.05.patch
>
>
> Currently, "Soft Delete" is only supported if the whole encryption zone is 
> deleted. If you delete files whinin the zone with trash feature enabled, you 
> will get error similar to the following 
> {code}
> rm: Failed to move to trash: hdfs://HW11217.local:9000/z1_1/startnn.sh: 
> /z1_1/startnn.sh can't be moved from an encryption zone.
> {code}
> This JIRA is proposed to support trash for deletion of files within 
> encryption zone. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9799) Reimplement getCurrentTrashDir to remove incompatibility

2016-02-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145311#comment-15145311
 ] 

Hadoop QA commented on HDFS-9799:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
7s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 57s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 26s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
9s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 43s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 34s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 47s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 22s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 22s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 11s 
{color} | {color:red} root: patch generated 1 new + 190 unchanged - 1 fixed = 
191 total (was 191) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 37s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 1s {color} | 
{color:red} hadoop-common in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 54s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 19s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 1s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF Licen

[jira] [Updated] (HDFS-9799) Reimplement getCurrentTrashDir to remove incompatibility

2016-02-12 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-9799:

Attachment: HDFS-9799.02.patch

Updating the patch to fix the checkstyle issue. The reported test failure is 
unrelated; the test passes locally.

> Reimplement getCurrentTrashDir to remove incompatibility
> 
>
> Key: HDFS-9799
> URL: https://issues.apache.org/jira/browse/HDFS-9799
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>Priority: Blocker
> Attachments: HDFS-9799.00.patch, HDFS-9799.01.patch, 
> HDFS-9799.02.patch
>
>
> HDFS-8831 changed the signature of {{TrashPolicy#getCurrentTrashDir}} by 
> adding an IOException. This breaks other applications using this public API. 
> This JIRA aims to reimplement the logic to safely handle the IOException 
> within HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9734) Refactoring of checksum failure report related codes

2016-02-12 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145325#comment-15145325
 ] 

Zhe Zhang commented on HDFS-9734:
-

Thanks [~drankye] for the refactoring work, and sorry for chiming in late. 
Could you rebase the patch? Also I think we should avoid wildcard import 
({{import java.util.*}}).

> Refactoring of checksum failure report related codes
> 
>
> Key: HDFS-9734
> URL: https://issues.apache.org/jira/browse/HDFS-9734
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12744-v1.patch, HADOOP-12744-v2.patch, 
> HDFS-9734-v3.patch, HDFS-9734-v4.patch
>
>
> This was from discussion with [~jingzhao] in HDFS-9646. There is some 
> duplicate code between the client and datanode sides:
> {code}
> private void addCorruptedBlock(ExtendedBlock blk, DatanodeInfo node,
>     Map<ExtendedBlock, Set<DatanodeInfo>> corruptionMap) {
>   Set<DatanodeInfo> dnSet = corruptionMap.get(blk);
>   if (dnSet == null) {
>     dnSet = new HashSet<>();
>     corruptionMap.put(blk, dnSet);
>   }
>   if (!dnSet.contains(node)) {
>     dnSet.add(node);
>   }
> }
> {code}
> This would resolve the duplication and also simplify the code a bit.
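
One possible shape of a shared helper (illustrative only, not necessarily how the patch factors it); both the client- and datanode-side callers could then delegate to this single method:

{code}
public static void addCorruptedBlock(ExtendedBlock blk, DatanodeInfo node,
    Map<ExtendedBlock, Set<DatanodeInfo>> corruptionMap) {
  Set<DatanodeInfo> dnSet = corruptionMap.get(blk);
  if (dnSet == null) {
    dnSet = new HashSet<>();
    corruptionMap.put(blk, dnSet);
  }
  dnSet.add(node);  // Set.add is already a no-op for duplicates
}
{code}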



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8831) Trash Support for deletion in HDFS encryption zone

2016-02-12 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145326#comment-15145326
 ] 

Arpit Agarwal commented on HDFS-8831:
-

bq. If keeping them separate, technically we still need to mark this as 
incompatible right?
Marking it as incompatible will show up in release notes which will be wrong if 
we can fix it in time for the release. I added a "breaks" link instead.

bq. I think we should also decide whether to revert this change and redo it 
(combined with HDFS-9799 change)
No need to revert if we can quickly fix HDFS-9799. If a fix looks impossible 
for 2.8.0 we can revert it. For now HDFS-9799 is rightly tagged as a blocker so 
there is no risk of missing it. Agreed?

> Trash Support for deletion in HDFS encryption zone
> --
>
> Key: HDFS-8831
> URL: https://issues.apache.org/jira/browse/HDFS-8831
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 2.8.0
>
> Attachments: HDFS-8831-10152015.pdf, HDFS-8831.00.patch, 
> HDFS-8831.01.patch, HDFS-8831.02.patch, HDFS-8831.03.patch, 
> HDFS-8831.04.patch, HDFS-8831.05.patch
>
>
> Currently, "Soft Delete" is only supported if the whole encryption zone is 
> deleted. If you delete files whinin the zone with trash feature enabled, you 
> will get error similar to the following 
> {code}
> rm: Failed to move to trash: hdfs://HW11217.local:9000/z1_1/startnn.sh: 
> /z1_1/startnn.sh can't be moved from an encryption zone.
> {code}
> This JIRA is proposed to support trash for deletion of files within 
> encryption zone. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9657) Schedule EC tasks at proper time to reduce the impact of recovery traffic

2016-02-12 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145342#comment-15145342
 ] 

Zhe Zhang commented on HDFS-9657:
-

This is an interesting idea. However, when some internal blocks of a group are 
missing, the entire block group is *at risk*. Should we really wait until a 
certain wall clock time to start reconstruction? Looks like throttling is a 
better idea and I believe we are already throttling the amount of DN work in a 
given time window.

> Schedule EC tasks at proper time to reduce the impact of recovery traffic
> -
>
> Key: HDFS-9657
> URL: https://issues.apache.org/jira/browse/HDFS-9657
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-9657-001.patch, HDFS-9657-002.patch
>
>
> The EC recovery tasks consume a lot of network bandwidth and disk I/O. 
> Recovering a corrupt block requires transferring 6 blocks, hence creating a 
> 6X overhead in network bandwidth and disk I/O. When a datanode fails, the 
> recovery of all the blocks on that datanode may use up the network 
> bandwidth. We need to start recovery tasks at a proper time in order to have 
> less impact on the system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9799) Reimplement getCurrentTrashDir to remove incompatibility

2016-02-12 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145345#comment-15145345
 ] 

Andrew Wang commented on HDFS-9799:
---

From an API point of view, these two methods should be similar. Another flaw 
with getTrashRoots is that an exception for any single trash root fails the 
entire call. There's no ability to do a partial return, which IMO would be 
better since it gives the caller flexibility.

Regarding error handling, pre HDFS-8831 it looks like TrashPolicyDefault would 
just catch and log exceptions. So the exception itself isn't being used for 
much.
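
A minimal sketch of that catch-and-log shape, assuming hypothetical helpers ({{resolveCurrentTrashDir}}, {{defaultTrashDir}}) rather than the real internals:

{code}
@Override
public Path getCurrentTrashDir() {  // pre-HDFS-8831 signature, no checked exception
  try {
    return resolveCurrentTrashDir();  // may contact the NameNode and throw
  } catch (IOException e) {
    LOG.warn("Could not determine the trash root, falling back to the default", e);
    return defaultTrashDir();         // purely local computation
  }
}
{code}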

> Reimplement getCurrentTrashDir to remove incompatibility
> 
>
> Key: HDFS-9799
> URL: https://issues.apache.org/jira/browse/HDFS-9799
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>Priority: Blocker
> Attachments: HDFS-9799.00.patch, HDFS-9799.01.patch, 
> HDFS-9799.02.patch
>
>
> HDFS-8831 changed the signature of {{TrashPolicy#getCurrentTrashDir}} by 
> adding an IOException. This breaks other applications using this public API. 
> This JIRA aims to reimplement the logic to safely handle the IOException 
> within HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8831) Trash Support for deletion in HDFS encryption zone

2016-02-12 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145346#comment-15145346
 ] 

Zhe Zhang commented on HDFS-8831:
-

Sounds good to me. I don't expect HDFS-9799 to take too long.

> Trash Support for deletion in HDFS encryption zone
> --
>
> Key: HDFS-8831
> URL: https://issues.apache.org/jira/browse/HDFS-8831
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 2.8.0
>
> Attachments: HDFS-8831-10152015.pdf, HDFS-8831.00.patch, 
> HDFS-8831.01.patch, HDFS-8831.02.patch, HDFS-8831.03.patch, 
> HDFS-8831.04.patch, HDFS-8831.05.patch
>
>
> Currently, "Soft Delete" is only supported if the whole encryption zone is 
> deleted. If you delete files whinin the zone with trash feature enabled, you 
> will get error similar to the following 
> {code}
> rm: Failed to move to trash: hdfs://HW11217.local:9000/z1_1/startnn.sh: 
> /z1_1/startnn.sh can't be moved from an encryption zone.
> {code}
> This JIRA is proposed to support trash for deletion of files within 
> encryption zone. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9803) Proactively refresh ShortCircuitCache entries to avoid latency spikes

2016-02-12 Thread Nick Dimiduk (JIRA)
Nick Dimiduk created HDFS-9803:
--

 Summary: Proactively refresh ShortCircuitCache entries to avoid 
latency spikes
 Key: HDFS-9803
 URL: https://issues.apache.org/jira/browse/HDFS-9803
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Nick Dimiduk


My region server logs are flooded with messages like 
"SecretManager$InvalidToken: access control error while attempting to set up 
short-circuit access to  ... is expired". These logs correspond 
with responseTooSlow WARNings from the region server.

{noformat}
2016-01-19 22:10:14,432 INFO  [B.defaultRpcServer.handler=4,queue=1,port=16020] 
shortcircuit.ShortCircuitCache: ShortCircuitCache(0x71bdc547): could not load 
1074037633_BP-1145309065-XXX-1448053136416 due to InvalidToken exception.
org.apache.hadoop.security.token.SecretManager$InvalidToken: access control 
error while attempting to set up short-circuit access to  token 
with block_token_identifier (expiryDate=1453194430724, keyId=1508822027, 
userId=hbase, blockPoolId=BP-1145309065-XXX-1448053136416, blockId=1074037633, 
access modes=[READ]) is expired.
at 
org.apache.hadoop.hdfs.BlockReaderFactory.requestFileDescriptors(BlockReaderFactory.java:591)
at 
org.apache.hadoop.hdfs.BlockReaderFactory.createShortCircuitReplicaInfo(BlockReaderFactory.java:490)
at 
org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.create(ShortCircuitCache.java:782)
at 
org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.fetchOrCreate(ShortCircuitCache.java:716)
at 
org.apache.hadoop.hdfs.BlockReaderFactory.getBlockReaderLocal(BlockReaderFactory.java:422)
at 
org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:333)
at 
org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:618)
at 
org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:844)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:896)
at java.io.DataInputStream.read(DataInputStream.java:149)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:678)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1372)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1591)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1470)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:437)
...
{noformat}

A potential solution could be to have a background thread that makes a best 
effort to proactively refresh tokens in the cache before they expire, so as 
to minimize latency impact on the critical path.
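
A rough sketch of such a refresher (every name below, e.g. {{CachedReplica}}, {{cache.snapshot()}}, {{cache.refresh()}}, {{REFRESH_THRESHOLD_MS}}, is a hypothetical stand-in, not the real ShortCircuitCache API):

{code}
ScheduledExecutorService refresher =
    Executors.newSingleThreadScheduledExecutor();
refresher.scheduleWithFixedDelay(new Runnable() {
  @Override
  public void run() {
    for (CachedReplica replica : cache.snapshot()) {
      long remainingMs = replica.getTokenExpiryMs() - System.currentTimeMillis();
      if (remainingMs < REFRESH_THRESHOLD_MS) {
        cache.refresh(replica);  // best effort; a failure just keeps the old entry
      }
    }
  }
}, 1, 1, TimeUnit.MINUTES);
{code}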

Thanks to [~cnauroth] for providing an explanation and suggesting a solution 
over on the [user 
list|http://mail-archives.apache.org/mod_mbox/hadoop-user/201601.mbox/%3CCANZa%3DGt%3Dhvuf3fyOJqf-jdpBPL_xDknKBcp7LmaC-YUm0jDUVg%40mail.gmail.com%3E].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9781) FsDatasetImpl#getBlockReports can occasionally throw NullPointerException

2016-02-12 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9781:

Attachment: HDFS-9781.01.patch

Thanks for creating this [~jojochuang].

The 'A failed test' link is no longer valid. :( But I've managed to reproduce 
this at about a 1% frequency. I think there're 2 problems here:
# Test timeout without obvious information (so that people have to read the code 
to know why it timed out).
# NPE

Patch 1 is attached to address them:
# Adds more information to the test, and also waits for the BR to be received 
before releasing the reference. BTW, the test doesn't care whether an exception 
is thrown, so IMO no {{assertExceptionContains}} is needed.
# This is from the changes in HDFS-9701: during {{wait}}, the thread is put on 
hold and other threads may proceed (getBlockReport, in this case). Since the 
{{volumeMap}} is not yet cleared, it's possible for the BR thread to get a null 
volume. Eddy and I discussed this in HDFS-9701, but at that time I was using 
{{Thread.sleep}}, which holds the lock (see the sketch below). A later findbugs 
warning made me switch to {{wait}} (which IMO is the right thing to do), but 
then the sequence should've been modified to make sure internal state such as 
{{volumeMap}} is safe.

[~eddyxu], could you please review? Thanks.
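
To make the {{wait}} vs {{Thread.sleep}} point concrete ({{dataset}} is just whatever object's monitor is held; illustrative only):

{code}
synchronized (dataset) {
  // Thread.sleep(1000) here would keep holding the monitor for the whole
  // second, so getBlockReports() could not run until the sleep returned.
  dataset.wait(1000);  // releases the monitor: getBlockReports() may now run
                       // and observe volumeMap before it has been cleared.
}
{code}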

> FsDatasetImpl#getBlockReports can occasionally throw NullPointerException
> -
>
> Key: HDFS-9781
> URL: https://issues.apache.org/jira/browse/HDFS-9781
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-9781.01.patch
>
>
> FsDatasetImpl#getBlockReports occasionally throws NPE. The NPE caused 
> TestFsDatasetImpl#testRemoveVolumeBeingWritten to time out, because the test 
> waits for the call to FsDatasetImpl#getBlockReports to complete without 
> exceptions.
> Additionally, the test should be updated to identify an expected exception, 
> using {{GenericTestUtils.assertExceptionContains()}}
> {noformat}
> Exception in thread "Thread-20" java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockReports(FsDatasetImpl.java:1709)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1BlockReportThread.run(TestFsDatasetImpl.java:587)
> 2016-02-08 15:47:30,379 [Thread-21] WARN  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:run(606)) - Exception caught. This should not affect 
> the test
> java.io.IOException: Failed to move meta file for ReplicaBeingWritten, 
> blk_0_0, RBW
>   getNumBytes() = 0
>   getBytesOnDisk()  = 0
>   getVisibleLength()= 0
>   getVolume()   = 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current
>   getBlockFile()= 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0
>   bytesAcked=0
>   bytesOnDisk=0 from 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta
>  to 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:857)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addFinalizedBlock(BlockPoolSlice.java:295)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addFinalizedBlock(FsVolumeImpl.java:819)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeReplica(FsDatasetImpl.java:1620)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeBlock(FsDatasetImpl.java:1601)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1ResponderThread.run(TestFsDatasetImpl.java:603)
> Caused by: java.io.IOException: 
> renameTo(src=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta,
>  
> dst=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta)
>  failed.
> at org.apache.hadoop.io.nativeio.NativeIO.renameTo(NativeIO.java:873)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:855)
> ... 5 more
> 2016-02-08 15:47:34,381 [Thread-19] INFO  impl.FsDatasetImpl 
> (FsVolumeList.java:waitVolumeRemoved(287)) - Volume r

[jira] [Commented] (HDFS-9781) FsDatasetImpl#getBlockReports can occasionally throw NullPointerException

2016-02-12 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145381#comment-15145381
 ] 

Wei-Chiu Chuang commented on HDFS-9781:
---

[~xiaochen] Thanks for taking the time to reproduce the issue.
This issue is pretty frequently reproducible on my machine. The timeout is 
directly due to the NPE: the main thread waits for {{BlockReportThread}} to 
count down {{brReceivedLatch}}, but due to the NPE, control jumps out of the 
block and never counts down the latch.

In {{FsDatasetImpl.getBlockReports()}}, b.getVolume().getStorageID() returns a 
storage id, but for some reason builders (which is a {{Map}}) can't find the 
key, and therefore an NPE is thrown.
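
The failing pattern is roughly of this shape (hypothetical, not the exact FsDatasetImpl code):

{code}
// If the storage id is no longer a key in the map (e.g. the volume was just
// removed), get() returns null and the chained call throws the NPE.
builders.get(b.getVolume().getStorageID()).add(b);
{code}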

> FsDatasetImpl#getBlockReports can occasionally throw NullPointerException
> -
>
> Key: HDFS-9781
> URL: https://issues.apache.org/jira/browse/HDFS-9781
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-9781.01.patch
>
>
> FsDatasetImpl#getBlockReports occasionally throws NPE. The NPE caused 
> TestFsDatasetImpl#testRemoveVolumeBeingWritten to time out, because the test 
> waits for the call to FsDatasetImpl#getBlockReports to complete without 
> exceptions.
> Additionally, the test should be updated to identify an expected exception, 
> using {{GenericTestUtils.assertExceptionContains()}}
> {noformat}
> Exception in thread "Thread-20" java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockReports(FsDatasetImpl.java:1709)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1BlockReportThread.run(TestFsDatasetImpl.java:587)
> 2016-02-08 15:47:30,379 [Thread-21] WARN  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:run(606)) - Exception caught. This should not affect 
> the test
> java.io.IOException: Failed to move meta file for ReplicaBeingWritten, 
> blk_0_0, RBW
>   getNumBytes() = 0
>   getBytesOnDisk()  = 0
>   getVisibleLength()= 0
>   getVolume()   = 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current
>   getBlockFile()= 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0
>   bytesAcked=0
>   bytesOnDisk=0 from 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta
>  to 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:857)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addFinalizedBlock(BlockPoolSlice.java:295)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addFinalizedBlock(FsVolumeImpl.java:819)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeReplica(FsDatasetImpl.java:1620)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeBlock(FsDatasetImpl.java:1601)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1ResponderThread.run(TestFsDatasetImpl.java:603)
> Caused by: java.io.IOException: 
> renameTo(src=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta,
>  
> dst=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta)
>  failed.
> at org.apache.hadoop.io.nativeio.NativeIO.renameTo(NativeIO.java:873)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:855)
> ... 5 more
> 2016-02-08 15:47:34,381 [Thread-19] INFO  impl.FsDatasetImpl 
> (FsVolumeList.java:waitVolumeRemoved(287)) - Volume reference is released.
> 2016-02-08 15:47:34,384 [Thread-19] INFO  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:testRemoveVolumeBeingWritten(622)) - Volumes removed
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9698) Long running Balancer should renew TGT

2016-02-12 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145403#comment-15145403
 ] 

Xiao Chen commented on HDFS-9698:
-

FWIW, I'd like to briefly update here for future reference. Thanks all for the 
helpful comments above!

I've seen similar problems, where the Balancer fails with {{Failed to find any 
Kerberos tgt}} after several hours. IMHO the problem turns out to be a 
Kerberos usage issue, not a bug in hadoop.

According to the [Kerberos 
docs|http://web.mit.edu/kerberos/krb5-1.13/doc/admin/conf_files/krb5_conf.html],
 there are two relevant settings: {{ticket_lifetime}} and {{renew_lifetime}}. 
The former is the lifetime of the TGT, which can be renewed repeatedly, but 
only up to the maximum total duration given by the latter.
In the failure scenario, a TGT is generated by the user and provided to the 
balancer (which means that in the balancer context, 
{{UserGroupInformation.isLoginTicketBased() == true}}). 
{{client#handleSaslConnectionFailure}} behaves correctly in extending the 
ticket within {{ticket_lifetime}}. But there is no way to extend beyond 
{{renew_lifetime}}; at that point a new TGT has to be generated, which I think 
should not be hadoop's responsibility in this case.
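
For reference, both settings live in the {{[libdefaults]}} section of 
krb5.conf. A purely illustrative excerpt (the values are examples, not 
recommendations):

{noformat}
[libdefaults]
    # How long a single TGT is valid before it must be renewed.
    ticket_lifetime = 24h
    # Upper bound on how far renewals can extend the original TGT.
    renew_lifetime = 7d
{noformat}

Within {{renew_lifetime}}, a {{kinit -R}} (or the automatic renewal hadoop 
already does) keeps the same TGT alive; once {{renew_lifetime}} is exhausted, 
only a fresh {{kinit}}, i.e. a brand-new TGT, will help.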

> Long running Balancer should renew TGT
> --
>
> Key: HDFS-9698
> URL: https://issues.apache.org/jira/browse/HDFS-9698
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover, security
>Affects Versions: 2.6.3
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-9698.00.patch
>
>
> When the {{Balancer}} runs beyond the configured TGT lifetime, the current 
> logic won't renew TGT.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9698) Long running Balancer should renew TGT

2016-02-12 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145418#comment-15145418
 ] 

Chris Nauroth commented on HDFS-9698:
-

When the balancer runs as an interactive process, the assumption is that kinit 
has already run and there is a ticket sitting in the cache.  After the 
renew_lifetime, there isn't anything else the balancer process can do to help.  
It can't automatically kinit again or otherwise prompt the user to log in 
again.

Maybe it would be nice to give the balancer the ability to log in from a 
keytab?  That way, the RPC client would re-login from the keytab after 
expiration, which means the process could remain authenticated indefinitely.  
With some people wanting to run the balancer non-stop in "daemon mode", that 
might be a reasonable feature to add.
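
A rough sketch of what a keytab-based login could look like in the balancer's 
startup path. This is only an assumption of how it might be wired up; the 
config key names below are hypothetical placeholders, not existing keys:

{noformat}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.SecurityUtil;
import org.apache.hadoop.security.UserGroupInformation;

// Hedged sketch only; the config key names are hypothetical placeholders.
public class BalancerKeytabLoginSketch {
  static void loginIfConfigured(Configuration conf) throws IOException {
    if (!UserGroupInformation.isSecurityEnabled()) {
      return;
    }
    // SecurityUtil.login resolves the keytab and principal from conf and
    // performs a keytab-based UGI login.
    SecurityUtil.login(conf, "dfs.balancer.keytab.file",
        "dfs.balancer.kerberos.principal");
    // With a keytab-based login, the RPC client can re-login automatically
    // (checkTGTAndReloginFromKeytab) when the ticket expires, so the process
    // can stay authenticated indefinitely.
  }
}
{noformat}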

> Long running Balancer should renew TGT
> --
>
> Key: HDFS-9698
> URL: https://issues.apache.org/jira/browse/HDFS-9698
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover, security
>Affects Versions: 2.6.3
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-9698.00.patch
>
>
> When the {{Balancer}} runs beyond the configured TGT lifetime, the current 
> logic won't renew TGT.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9698) Long running Balancer should renew TGT

2016-02-12 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145432#comment-15145432
 ] 

Jing Zhao commented on HDFS-9698:
-

Agree. In the end we may want to run balancer as a daemon service like 
SecondaryNN.

> Long running Balancer should renew TGT
> --
>
> Key: HDFS-9698
> URL: https://issues.apache.org/jira/browse/HDFS-9698
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover, security
>Affects Versions: 2.6.3
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-9698.00.patch
>
>
> When the {{Balancer}} runs beyond the configured TGT lifetime, the current 
> logic won't renew TGT.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9781) FsDatasetImpl#getBlockReports can occasionally throw NullPointerException

2016-02-12 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145439#comment-15145439
 ] 

Xiao Chen commented on HDFS-9781:
-

Chatted with [~jojochuang] offline: the test times out because of the NPE, and 
the new patch should fix the NPE. It may be more understandable to add a 
try-catch in the {{BlockReportThread}} in the test, so that it can fail with a 
more explicit message instead of timing out (rough sketch below).
I'll wait for Eddy's comments before the next rev. :)
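
A minimal sketch of the try-catch idea (names are illustrative, not the exact 
TestFsDatasetImpl code): capture the throwable in the reporting thread, count 
the latch down in a finally block, and let the main thread fail with the 
captured cause instead of timing out.

{noformat}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicReference;

// Illustrative only; the real change would live in TestFsDatasetImpl.
class BlockReportThreadSketch extends Thread {
  private final CountDownLatch brReceivedLatch;
  private final AtomicReference<Throwable> failure;

  BlockReportThreadSketch(CountDownLatch latch,
      AtomicReference<Throwable> failure) {
    this.brReceivedLatch = latch;
    this.failure = failure;
  }

  @Override
  public void run() {
    try {
      // dataset.getBlockReports(bpid) would be called here in the real test.
    } catch (Throwable t) {
      failure.set(t);              // remember the real cause
    } finally {
      brReceivedLatch.countDown(); // always release the waiting test thread
    }
  }
}
{noformat}

The main thread can then check {{failure.get()}} after awaiting the latch and 
fail with an explicit message (e.g. via 
{{GenericTestUtils.assertExceptionContains()}}) rather than a bare timeout.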

> FsDatasetImpl#getBlockReports can occasionally throw NullPointerException
> -
>
> Key: HDFS-9781
> URL: https://issues.apache.org/jira/browse/HDFS-9781
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-9781.01.patch
>
>
> FsDatasetImpl#getBlockReports occasionally throws NPE. The NPE caused 
> TestFsDatasetImpl#testRemoveVolumeBeingWritten to time out, because the test 
> waits for the call to FsDatasetImpl#getBlockReports to complete without 
> exceptions.
> Additionally, the test should be updated to identify an expected exception, 
> using {{GenericTestUtils.assertExceptionContains()}}
> {noformat}
> Exception in thread "Thread-20" java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockReports(FsDatasetImpl.java:1709)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1BlockReportThread.run(TestFsDatasetImpl.java:587)
> 2016-02-08 15:47:30,379 [Thread-21] WARN  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:run(606)) - Exception caught. This should not affect 
> the test
> java.io.IOException: Failed to move meta file for ReplicaBeingWritten, 
> blk_0_0, RBW
>   getNumBytes() = 0
>   getBytesOnDisk()  = 0
>   getVisibleLength()= 0
>   getVolume()   = 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current
>   getBlockFile()= 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0
>   bytesAcked=0
>   bytesOnDisk=0 from 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta
>  to 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:857)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addFinalizedBlock(BlockPoolSlice.java:295)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addFinalizedBlock(FsVolumeImpl.java:819)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeReplica(FsDatasetImpl.java:1620)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeBlock(FsDatasetImpl.java:1601)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1ResponderThread.run(TestFsDatasetImpl.java:603)
> Caused by: java.io.IOException: 
> renameTo(src=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta,
>  
> dst=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta)
>  failed.
> at org.apache.hadoop.io.nativeio.NativeIO.renameTo(NativeIO.java:873)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:855)
> ... 5 more
> 2016-02-08 15:47:34,381 [Thread-19] INFO  impl.FsDatasetImpl 
> (FsVolumeList.java:waitVolumeRemoved(287)) - Volume reference is released.
> 2016-02-08 15:47:34,384 [Thread-19] INFO  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:testRemoveVolumeBeingWritten(622)) - Volumes removed
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9804) Allow long-running Balancer in Kerberized Environments

2016-02-12 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-9804:
---

 Summary: Allow long-running Balancer in Kerberized Environments
 Key: HDFS-9804
 URL: https://issues.apache.org/jira/browse/HDFS-9804
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Xiao Chen


From the discussion of HDFS-9698, it might be nice to allow the balancer to 
run as a daemon and log in from a keytab.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9799) Reimplement getCurrentTrashDir to remove incompatibility

2016-02-12 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-9799:

Attachment: HDFS-9799.03.patch

Thanks Andrew. I took another look at {{getTrashRoots}}. Agreed that returning 
the collection of trash root dirs gathered before the exception is safe and 
more flexible. E.g. the emptier can still process those returned dirs. 
Attaching v03 patch to address this.
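
A hedged sketch of the pattern (method and variable names are illustrative, 
not the actual HDFS-9799 code): collect trash roots per home dir, log and skip 
the ones that fail, and return whatever was gathered instead of propagating 
the IOException.

{noformat}
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collection;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative only; not the actual getTrashRoots implementation.
class TrashRootsSketch {
  static Collection<FileStatus> getTrashRoots(FileSystem fs, Path[] homeDirs) {
    Collection<FileStatus> roots = new ArrayList<>();
    for (Path home : homeDirs) {
      try {
        roots.add(fs.getFileStatus(new Path(home, ".Trash")));
      } catch (IOException e) {
        // Log and keep going: callers such as the emptier can still process
        // the roots collected so far.
        System.err.println("Skipping trash root under " + home + ": " + e);
      }
    }
    return roots;
  }
}
{noformat}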

> Reimplement getCurrentTrashDir to remove incompatibility
> 
>
> Key: HDFS-9799
> URL: https://issues.apache.org/jira/browse/HDFS-9799
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>Priority: Blocker
> Attachments: HDFS-9799.00.patch, HDFS-9799.01.patch, 
> HDFS-9799.02.patch, HDFS-9799.03.patch
>
>
> HDFS-8831 changed the signature of {{TrashPolicy#getCurrentTrashDir}} by 
> adding an IOException. This breaks other applications using this public API. 
> This JIRA aims to reimplement the logic to safely handle the IOException 
> within HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9698) Long running Balancer should renew TGT

2016-02-12 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145451#comment-15145451
 ] 

Xiao Chen commented on HDFS-9698:
-

Thanks Chris and Jing for the quick response! FYI - I created HDFS-9804.

> Long running Balancer should renew TGT
> --
>
> Key: HDFS-9698
> URL: https://issues.apache.org/jira/browse/HDFS-9698
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover, security
>Affects Versions: 2.6.3
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-9698.00.patch
>
>
> When the {{Balancer}} runs beyond the configured TGT lifetime, the current 
> logic won't renew TGT.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-9781) FsDatasetImpl#getBlockReports can occasionally throw NullPointerException

2016-02-12 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen reassigned HDFS-9781:
---

Assignee: Xiao Chen  (was: Wei-Chiu Chuang)

> FsDatasetImpl#getBlockReports can occasionally throw NullPointerException
> -
>
> Key: HDFS-9781
> URL: https://issues.apache.org/jira/browse/HDFS-9781
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
> Attachments: HDFS-9781.01.patch
>
>
> FsDatasetImpl#getBlockReports occasionally throws NPE. The NPE caused 
> TestFsDatasetImpl#testRemoveVolumeBeingWritten to time out, because the test 
> waits for the call to FsDatasetImpl#getBlockReports to complete without 
> exceptions.
> Additionally, the test should be updated to identify an expected exception, 
> using {{GenericTestUtils.assertExceptionContains()}}
> {noformat}
> Exception in thread "Thread-20" java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockReports(FsDatasetImpl.java:1709)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1BlockReportThread.run(TestFsDatasetImpl.java:587)
> 2016-02-08 15:47:30,379 [Thread-21] WARN  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:run(606)) - Exception caught. This should not affect 
> the test
> java.io.IOException: Failed to move meta file for ReplicaBeingWritten, 
> blk_0_0, RBW
>   getNumBytes() = 0
>   getBytesOnDisk()  = 0
>   getVisibleLength()= 0
>   getVolume()   = 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current
>   getBlockFile()= 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0
>   bytesAcked=0
>   bytesOnDisk=0 from 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta
>  to 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:857)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addFinalizedBlock(BlockPoolSlice.java:295)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addFinalizedBlock(FsVolumeImpl.java:819)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeReplica(FsDatasetImpl.java:1620)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeBlock(FsDatasetImpl.java:1601)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1ResponderThread.run(TestFsDatasetImpl.java:603)
> Caused by: java.io.IOException: 
> renameTo(src=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta,
>  
> dst=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta)
>  failed.
> at org.apache.hadoop.io.nativeio.NativeIO.renameTo(NativeIO.java:873)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:855)
> ... 5 more
> 2016-02-08 15:47:34,381 [Thread-19] INFO  impl.FsDatasetImpl 
> (FsVolumeList.java:waitVolumeRemoved(287)) - Volume reference is released.
> 2016-02-08 15:47:34,384 [Thread-19] INFO  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:testRemoveVolumeBeingWritten(622)) - Volumes removed
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7964) Add support for async edit logging

2016-02-12 Thread Dave Latham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145469#comment-15145469
 ] 

Dave Latham commented on HDFS-7964:
---

Would love to see this get committed.

> Add support for async edit logging
> --
>
> Key: HDFS-7964
> URL: https://issues.apache.org/jira/browse/HDFS-7964
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.0.2-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-7964.patch, HDFS-7964.patch, HDFS-7964.patch
>
>
> Edit logging is a major source of contention within the NN.  LogEdit is 
> called within the namespace write log, while logSync is called outside of the 
> lock to allow greater concurrency.  The handler thread remains busy until 
> logSync returns to provide the client with a durability guarantee for the 
> response.
> Write heavy RPC load and/or slow IO causes handlers to stall in logSync.  
> Although the write lock is not held, readers are limited/starved and the call 
> queue fills.  Combining an edit log thread with postponed RPC responses from 
> HADOOP-10300 will provide the same durability guarantee but immediately free 
> up the handlers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

