[jira] [Updated] (HDFS-9196) TestWebHdfsContentLength fails on trunk

2015-10-06 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-9196:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

The patch looks good to me. +1. I've committed the fix to trunk and 2.8. Thanks 
for the fix, [~iwasakims]! Thanks for reporting the issue, [~ozawa]! And thanks 
for the review, [~liuml07]!

> TestWebHdfsContentLength fails on trunk
> ---
>
> Key: HDFS-9196
> URL: https://issues.apache.org/jira/browse/HDFS-9196
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0
>
> Attachments: HDFS-9196.001.patch
>
>
> {quote}
> Tests run: 7, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 181.278 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.web.TestWebHdfsContentLength
> testPutOp(org.apache.hadoop.hdfs.web.TestWebHdfsContentLength)  Time elapsed: 
> 60.05 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<0> but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.hdfs.web.TestWebHdfsContentLength.testPutOp(TestWebHdfsContentLength.java:116)
> testPutOpWithRedirect(org.apache.hadoop.hdfs.web.TestWebHdfsContentLength)  
> Time elapsed: 0.01 sec  <<< FAILURE!
> org.junit.ComparisonFailure: expected:<[chunked]> but was:<[0]>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.hdfs.web.TestWebHdfsContentLength.testPutOpWithRedirect(TestWebHdfsContentLength.java:130)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9196) Fix TestWebHdfsContentLength

2015-10-06 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-9196:

Summary: Fix TestWebHdfsContentLength  (was: TestWebHdfsContentLength fails 
on trunk)

> Fix TestWebHdfsContentLength
> 
>
> Key: HDFS-9196
> URL: https://issues.apache.org/jira/browse/HDFS-9196
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0
>
> Attachments: HDFS-9196.001.patch
>
>
> {quote}
> Tests run: 7, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 181.278 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.web.TestWebHdfsContentLength
> testPutOp(org.apache.hadoop.hdfs.web.TestWebHdfsContentLength)  Time elapsed: 
> 60.05 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<0> but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.hdfs.web.TestWebHdfsContentLength.testPutOp(TestWebHdfsContentLength.java:116)
> testPutOpWithRedirect(org.apache.hadoop.hdfs.web.TestWebHdfsContentLength)  
> Time elapsed: 0.01 sec  <<< FAILURE!
> org.junit.ComparisonFailure: expected:<[chunked]> but was:<[0]>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.hdfs.web.TestWebHdfsContentLength.testPutOpWithRedirect(TestWebHdfsContentLength.java:130)
> {quote}





[jira] [Commented] (HDFS-9182) Cleanup the findbugs and other issues after HDFS EC merged to trunk.

2015-10-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14946371#comment-14946371
 ] 

Hudson commented on HDFS-9182:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #491 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/491/])
HDFS-9182. Cleanup the findbugs and other issues after HDFS EC merged to 
trunk. (umamahesh: rev 8b7339312cb06b7e021f8f9ea6e3a20ebf009af3)
* hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
* hadoop-hdfs-project/hadoop-hdfs-client/dev-support/findbugsExcludeFile.xml
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java


> Cleanup the findbugs and other issues after HDFS EC merged to trunk.
> 
>
> Key: HDFS-9182
> URL: https://issues.apache.org/jira/browse/HDFS-9182
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Yi Liu
>Assignee: Uma Maheswara Rao G
>Priority: Critical
> Attachments: HDFSS-9182.00.patch, HDFSS-9182.01.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/12754/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs-client.html
> https://builds.apache.org/job/PreCommit-HDFS-Build/12754/artifact/patchprocess/patchReleaseAuditProblems.txt





[jira] [Updated] (HDFS-9206) Inconsistent default value of dfs.datanode.stripedread.buffer.size

2015-10-06 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-9206:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

The failed tests are unrelated. I've committed this to trunk. Thanks for the 
contribution, Walter! 

> Inconsistent default value of dfs.datanode.stripedread.buffer.size
> --
>
> Key: HDFS-9206
> URL: https://issues.apache.org/jira/browse/HDFS-9206
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Walter Su
>Assignee: Walter Su
>Priority: Trivial
> Fix For: 3.0.0
>
> Attachments: HDFS-9206.patch
>
>
> {noformat}
> DFS_DATANODE_STRIPED_READ_BUFFER_SIZE_DEFAULT = 64 * 1024;
> <property>
>   <name>dfs.datanode.stripedread.buffer.size</name>
>   <value>262144</value>
>   <description>Datanode striped read buffer size.
>   </description>
> </property>
> {noformat}
> We previously used a 256k cellSize; the default has since changed to 64k, so 
> the two values are now inconsistent.
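The constant side is {{DFS_DATANODE_STRIPED_READ_BUFFER_SIZE_DEFAULT = 64 * 1024}} (65536). A matching hdfs-default.xml entry, assuming the 64k constant is the value being kept (the patch may instead change the constant), would look like:

```xml
<property>
  <name>dfs.datanode.stripedread.buffer.size</name>
  <value>65536</value>
  <description>Datanode striped read buffer size.</description>
</property>
```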





[jira] [Updated] (HDFS-9206) Inconsistent default value of dfs.datanode.stripedread.buffer.size

2015-10-06 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-9206:

Priority: Minor  (was: Trivial)

> Inconsistent default value of dfs.datanode.stripedread.buffer.size
> --
>
> Key: HDFS-9206
> URL: https://issues.apache.org/jira/browse/HDFS-9206
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Walter Su
>Assignee: Walter Su
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-9206.patch
>
>
> {noformat}
> DFS_DATANODE_STRIPED_READ_BUFFER_SIZE_DEFAULT = 64 * 1024;
> <property>
>   <name>dfs.datanode.stripedread.buffer.size</name>
>   <value>262144</value>
>   <description>Datanode striped read buffer size.
>   </description>
> </property>
> {noformat}
> We previously used a 256k cellSize; the default has since changed to 64k, so 
> the two values are now inconsistent.





[jira] [Updated] (HDFS-8967) Create a BlockManagerLock class to represent the lock used in the BlockManager

2015-10-06 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-8967:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-8966
   Status: Resolved  (was: Patch Available)

I've committed this to the feature branch.

> Create a BlockManagerLock class to represent the lock used in the BlockManager
> --
>
> Key: HDFS-8967
> URL: https://issues.apache.org/jira/browse/HDFS-8967
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: HDFS-8966
>
> Attachments: HDFS-8967.000.patch, HDFS-8967.001.patch, 
> HDFS-8967.002.patch, HDFS-8967.003.patch
>
>
> This jira proposes to create a {{BlockManagerLock}} class to represent the 
> lock used in {{BlockManager}}.
> Currently it directly points to the {{FSNamesystem}} lock thus there are no 
> functionality changes.
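As a rough illustration of the idea described above (the class body below is an assumption for illustration, not the committed patch's API): a dedicated lock type that simply delegates to whatever lock it is constructed with. Today that would be the FSNamesystem lock, so behavior is unchanged; later a separate lock can be swapped in without touching BlockManager call sites.

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of a BlockManagerLock that delegates to an underlying
// lock. Constructing it with the FSNamesystem lock preserves current behavior.
class BlockManagerLock {
  private final Lock delegate;

  BlockManagerLock(Lock delegate) {
    this.delegate = delegate;
  }

  void lock() {
    delegate.lock();
  }

  void unlock() {
    delegate.unlock();
  }
}
```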





[jira] [Commented] (HDFS-9181) Better handling of exceptions thrown during upgrade shutdown

2015-10-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14946362#comment-14946362
 ] 

Hadoop QA commented on HDFS-9181:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m 48s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 54s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 33s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 15s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 27s | The applied patch generated  2 
new checkstyle issues (total was 142, now 142). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 30s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 33s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 14s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 201m  1s | Tests failed in hadoop-hdfs. |
| | | 247m 53s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.web.TestWebHdfsContentLength |
|   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765309/HDFS-9181.003.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 1bca1bb |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12824/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12824/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12824/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12824/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12824/console |


This message was automatically generated.

> Better handling of exceptions thrown during upgrade shutdown
> 
>
> Key: HDFS-9181
> URL: https://issues.apache.org/jira/browse/HDFS-9181
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9181.002.patch, HDFS-9181.003.patch
>
>
> Previously in HDFS-7533, a bug was fixed by suppressing exceptions during 
> upgrade shutdown. It may be appropriate as a temporary fix, but it would be 
> better if the exception is handled in some other ways.
> One way to handle it is by emitting a warning message. There could exist 
> other ways to handle it. This jira is created to discuss how to handle this 
> case better.
> Thanks to [~templedf] for bringing this up.
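A minimal sketch of the "emit a warning" option mentioned above (names and shape are assumptions for illustration, not the actual HDFS patch): instead of silently suppressing an exception thrown while shutting down during an upgrade, catch it and log a warning so the failure stays visible.

```java
import java.util.logging.Logger;

// Sketch only: run a shutdown step, and log (rather than suppress or
// propagate) any runtime exception it throws.
class UpgradeShutdownHelper {
  private static final Logger LOG =
      Logger.getLogger(UpgradeShutdownHelper.class.getName());

  static void runShutdownStep(Runnable step) {
    try {
      step.run();
    } catch (RuntimeException e) {
      // Do not re-throw: shutdown should proceed, but the failure is recorded.
      LOG.warning("Exception during upgrade shutdown: " + e);
    }
  }
}
```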





[jira] [Commented] (HDFS-9182) Cleanup the findbugs and other issues after HDFS EC merged to trunk.

2015-10-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14946318#comment-14946318
 ] 

Hudson commented on HDFS-9182:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8583 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8583/])
HDFS-9182. Cleanup the findbugs and other issues after HDFS EC merged to 
trunk. (umamahesh: rev 8b7339312cb06b7e021f8f9ea6e3a20ebf009af3)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs-client/dev-support/findbugsExcludeFile.xml
* hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Cleanup the findbugs and other issues after HDFS EC merged to trunk.
> 
>
> Key: HDFS-9182
> URL: https://issues.apache.org/jira/browse/HDFS-9182
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Yi Liu
>Assignee: Uma Maheswara Rao G
>Priority: Critical
> Attachments: HDFSS-9182.00.patch, HDFSS-9182.01.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/12754/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs-client.html
> https://builds.apache.org/job/PreCommit-HDFS-Build/12754/artifact/patchprocess/patchReleaseAuditProblems.txt





[jira] [Commented] (HDFS-8967) Create a BlockManagerLock class to represent the lock used in the BlockManager

2015-10-06 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14946315#comment-14946315
 ] 

Haohui Mai commented on HDFS-8967:
--

+1

> Create a BlockManagerLock class to represent the lock used in the BlockManager
> --
>
> Key: HDFS-8967
> URL: https://issues.apache.org/jira/browse/HDFS-8967
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-8967.000.patch, HDFS-8967.001.patch, 
> HDFS-8967.002.patch, HDFS-8967.003.patch
>
>
> This jira proposes to create a {{BlockManagerLock}} class to represent the 
> lock used in {{BlockManager}}.
> Currently it directly points to the {{FSNamesystem}} lock thus there are no 
> functionality changes.





[jira] [Commented] (HDFS-4167) Add support for restoring/rolling back to a snapshot

2015-10-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14946310#comment-14946310
 ] 

Hadoop QA commented on HDFS-4167:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765135/HDFS-4167.06.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 8b73393 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12828/console |



> Add support for restoring/rolling back to a snapshot
> 
>
> Key: HDFS-4167
> URL: https://issues.apache.org/jira/browse/HDFS-4167
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Suresh Srinivas
>Assignee: Ajith S
> Attachments: HDFS-4167.000.patch, HDFS-4167.001.patch, 
> HDFS-4167.002.patch, HDFS-4167.003.patch, HDFS-4167.004.patch, 
> HDFS-4167.05.patch, HDFS-4167.06.patch
>
>
> This jira tracks work related to restoring a directory/file to a snapshot.





[jira] [Commented] (HDFS-9206) Inconsistent default value of dfs.datanode.stripedread.buffer.size

2015-10-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14946299#comment-14946299
 ] 

Hadoop QA commented on HDFS-9206:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 43s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   9m  3s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 20s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 16s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 41s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 36s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | native |   3m 35s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 126m 29s | Tests failed in hadoop-hdfs. |
| | | 170m 46s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.TestXAttrConfigFlag |
|   | hadoop.hdfs.server.namenode.TestDeleteRace |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.server.namenode.TestFileContextAcl |
|   | hadoop.hdfs.server.datanode.TestFsDatasetCache |
|   | hadoop.hdfs.server.namenode.TestFsck |
| Timed out tests | 
org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | 
org.apache.hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality |
|   | org.apache.hadoop.hdfs.server.balancer.TestBalancer |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765318/HDFS-9206.patch |
| Optional Tests | javadoc javac unit |
| git revision | trunk / 1bca1bb |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12826/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12826/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12826/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12826/console |



> Inconsistent default value of dfs.datanode.stripedread.buffer.size
> --
>
> Key: HDFS-9206
> URL: https://issues.apache.org/jira/browse/HDFS-9206
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Walter Su
>Assignee: Walter Su
>Priority: Trivial
> Attachments: HDFS-9206.patch
>
>
> {noformat}
> DFS_DATANODE_STRIPED_READ_BUFFER_SIZE_DEFAULT = 64 * 1024;
> <property>
>   <name>dfs.datanode.stripedread.buffer.size</name>
>   <value>262144</value>
>   <description>Datanode striped read buffer size.
>   </description>
> </property>
> {noformat}
> We previously used a 256k cellSize; the default has since changed to 64k, so 
> the two values are now inconsistent.





[jira] [Resolved] (HDFS-9064) NN old UI (block_info_xml) not available in 2.7.x

2015-10-06 Thread Kanaka Kumar Avvaru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kanaka Kumar Avvaru resolved HDFS-9064.
---
Resolution: Won't Fix

Thanks for the inputs [~wheat9], I agree with you. Hence closing the JIRA as 
Won't Fix. [~shahrs87], please feel free to reopen if you still feel that 
alternatives like fsck are not sufficient for your use case.

> NN old UI (block_info_xml) not available in 2.7.x
> -
>
> Key: HDFS-9064
> URL: https://issues.apache.org/jira/browse/HDFS-9064
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.7.0
>Reporter: Rushabh S Shah
>Assignee: Kanaka Kumar Avvaru
>
> In 2.6.x hadoop deploys, given a blockId it was very easy to find out the 
> file name and the locations of replicas (also whether they are corrupt or 
> not).
> This was the REST call:
> {noformat}
>  http://<host>:<port>/block_info_xml.jsp?blockId=xxx
> {noformat}
> But this was removed by HDFS-6252 in 2.7 builds.
> Creating this jira to restore that functionality.





[jira] [Commented] (HDFS-8967) Create a BlockManagerLock class to represent the lock used in the BlockManager

2015-10-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14946285#comment-14946285
 ] 

Hadoop QA commented on HDFS-8967:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 59s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 5 new or modified test files. |
| {color:green}+1{color} | javac |   8m  0s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 23s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 15s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 25s | The applied patch generated  2 
new checkstyle issues (total was 541, now 540). |
| {color:green}+1{color} | whitespace |   0m  3s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 27s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 29s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 11s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 124m 53s | Tests failed in hadoop-hdfs. |
| | | 170m 42s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.web.TestWebHdfsContentLength |
| Timed out tests | org.apache.hadoop.hdfs.TestDFSClientRetries |
|   | org.apache.hadoop.hdfs.server.namenode.TestFSImageWithAcl |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765313/HDFS-8967.003.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 1bca1bb |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12825/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12825/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12825/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12825/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12825/console |



> Create a BlockManagerLock class to represent the lock used in the BlockManager
> --
>
> Key: HDFS-8967
> URL: https://issues.apache.org/jira/browse/HDFS-8967
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-8967.000.patch, HDFS-8967.001.patch, 
> HDFS-8967.002.patch, HDFS-8967.003.patch
>
>
> This jira proposes to create a {{BlockManagerLock}} class to represent the 
> lock used in {{BlockManager}}.
> Currently it directly points to the {{FSNamesystem}} lock thus there are no 
> functionality changes.





[jira] [Commented] (HDFS-9182) Cleanup the findbugs and other issues after HDFS EC merged to trunk.

2015-10-06 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14946288#comment-14946288
 ] 

Uma Maheswara Rao G commented on HDFS-9182:
---

Thanks for the reviews [~jingzhao] and [~hitliuyi]. I have just committed this 
to trunk.

Here is the findbugs count from the above report: Total Warnings 0 (0.00)


> Cleanup the findbugs and other issues after HDFS EC merged to trunk.
> 
>
> Key: HDFS-9182
> URL: https://issues.apache.org/jira/browse/HDFS-9182
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Yi Liu
>Assignee: Uma Maheswara Rao G
>Priority: Critical
> Attachments: HDFSS-9182.00.patch, HDFSS-9182.01.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/12754/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs-client.html
> https://builds.apache.org/job/PreCommit-HDFS-Build/12754/artifact/patchprocess/patchReleaseAuditProblems.txt





[jira] [Commented] (HDFS-9181) Better handling of exceptions thrown during upgrade shutdown

2015-10-06 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14946214#comment-14946214
 ] 

Yongjun Zhang commented on HDFS-9181:
-

Hi [~jojochuang],

Would you please remove {{// Issue an warning.}}? +1 after that.

Thanks.



> Better handling of exceptions thrown during upgrade shutdown
> 
>
> Key: HDFS-9181
> URL: https://issues.apache.org/jira/browse/HDFS-9181
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9181.002.patch, HDFS-9181.003.patch
>
>
> Previously in HDFS-7533, a bug was fixed by suppressing exceptions during 
> upgrade shutdown. It may be appropriate as a temporary fix, but it would be 
> better if the exception is handled in some other ways.
> One way to handle it is by emitting a warning message. There could exist 
> other ways to handle it. This jira is created to discuss how to handle this 
> case better.
> Thanks to [~templedf] for bringing this up.





[jira] [Commented] (HDFS-9181) Better handling of exceptions thrown during upgrade shutdown

2015-10-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14946212#comment-14946212
 ] 

Hadoop QA commented on HDFS-9181:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m 36s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   8m  2s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 21s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 15s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 26s | The applied patch generated  2 
new checkstyle issues (total was 142, now 142). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 30s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 18s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 235m 59s | Tests failed in hadoop-hdfs. |
| | | 282m 38s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.web.TestWebHdfsContentLength |
|   | hadoop.hdfs.server.namenode.TestCheckpoint |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765273/HDFS-9181.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 6d5713a |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12820/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12820/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12820/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12820/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12820/console |



> Better handling of exceptions thrown during upgrade shutdown
> 
>
> Key: HDFS-9181
> URL: https://issues.apache.org/jira/browse/HDFS-9181
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9181.002.patch, HDFS-9181.003.patch
>
>
> Previously in HDFS-7533, a bug was fixed by suppressing exceptions during 
> upgrade shutdown. It may be appropriate as a temporary fix, but it would be 
> better if the exception is handled in some other ways.
> One way to handle it is by emitting a warning message. There could exist 
> other ways to handle it. This jira is created to discuss how to handle this 
> case better.
> Thanks to [~templedf] for bringing this up.





[jira] [Commented] (HDFS-9206) Inconsistent default value of dfs.datanode.stripedread.buffer.size

2015-10-06 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14946208#comment-14946208
 ] 

Jing Zhao commented on HDFS-9206:
-

+1. I will commit this after Jenkins approves.

> Inconsistent default value of dfs.datanode.stripedread.buffer.size
> --
>
> Key: HDFS-9206
> URL: https://issues.apache.org/jira/browse/HDFS-9206
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Walter Su
>Assignee: Walter Su
>Priority: Trivial
> Attachments: HDFS-9206.patch
>
>
> {noformat}
> DFS_DATANODE_STRIPED_READ_BUFFER_SIZE_DEFAULT = 64 * 1024;
> <property>
>   <name>dfs.datanode.stripedread.buffer.size</name>
>   <value>262144</value>
>   <description>Datanode striped read buffer size.</description>
> </property>
> {noformat}
> We previously used a 256k cellSize; the default has since been changed to 64k.
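The mismatch above can be stated numerically; a minimal sketch (our own, not Hadoop code) makes the inconsistency explicit:

```java
// The Java constant defaults striped reads to 64 KiB while the bundled
// hdfs-default.xml advertises 256 KiB (262144). Values mirror the snippet
// above; the consistency check itself is illustrative.
class StripedReadDefaults {
    static final int CODE_DEFAULT = 64 * 1024; // DFS_DATANODE_STRIPED_READ_BUFFER_SIZE_DEFAULT
    static final int XML_DEFAULT  = 262144;    // value in hdfs-default.xml

    static boolean consistent() {
        return CODE_DEFAULT == XML_DEFAULT;
    }
}
```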



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8449) Add tasks count metrics to datanode for ECWorker

2015-10-06 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14946195#comment-14946195
 ] 

Rakesh R commented on HDFS-8449:


Thanks [~libo-intel], the latest patch looks good. I've submitted the patch to 
get a Jenkins report.

> Add tasks count metrics to datanode for ECWorker
> 
>
> Key: HDFS-8449
> URL: https://issues.apache.org/jira/browse/HDFS-8449
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-8449-000.patch, HDFS-8449-001.patch, 
> HDFS-8449-002.patch, HDFS-8449-003.patch
>
>
> This subtask tries to record the EC recovery tasks that a datanode has done, 
> including total tasks, failed tasks and successful tasks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8449) Add tasks count metrics to datanode for ECWorker

2015-10-06 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-8449:
---
Target Version/s: 3.0.0
  Status: Patch Available  (was: Open)

> Add tasks count metrics to datanode for ECWorker
> 
>
> Key: HDFS-8449
> URL: https://issues.apache.org/jira/browse/HDFS-8449
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-8449-000.patch, HDFS-8449-001.patch, 
> HDFS-8449-002.patch, HDFS-8449-003.patch
>
>
> This subtask tries to record the EC recovery tasks that a datanode has done, 
> including total tasks, failed tasks and successful tasks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9159) [OIV] : return value of the command is not correct if invalid value specified in "-p (processor)" option

2015-10-06 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14946184#comment-14946184
 ] 

nijel commented on HDFS-9159:
-

bq. -1  release audit
This comment is not related to this patch.

bq.-1   checkstyle
This is because the indentation is kept the same as in other blocks for 
readability.

bq.-1   hdfs tests
The test failures are not related to this patch.

thanks


> [OIV] : return value of the command is not correct if invalid value specified 
> in "-p (processor)" option
> 
>
> Key: HDFS-9159
> URL: https://issues.apache.org/jira/browse/HDFS-9159
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9159_01.patch, HDFS-9159_02.patch, 
> HDFS-9159_03.patch
>
>
> The return value of the OIV command is not correct if an invalid value is 
> specified in the "-p (processor)" option;
> this needs to return an error to the user.
> code change will be in switch statement of
> {code}
>  try (PrintStream out = outputFile.equals("-") ?
> System.out : new PrintStream(outputFile, "UTF-8")) {
>   switch (processor) {
> {code}
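A schematic of the fix direction for the switch statement above (method name, processor names, and return codes are illustrative, not the actual OIV implementation): an unrecognized processor should yield a non-zero result instead of falling through silently.

```java
class ProcessorSwitchSketch {
    /** Return 0 for a recognized processor, -1 (an error) otherwise. */
    static int run(String processor) {
        switch (processor) {
            case "XML":
            case "FileDistribution":
            case "Web":
            case "Delimited":
                return 0;   // recognized processor: proceed normally
            default:
                System.err.println("Invalid processor: " + processor);
                return -1;  // propagate an error to the caller/shell
        }
    }
}
```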



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9182) Cleanup the findbugs and other issues after HDFS EC merged to trunk.

2015-10-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14946160#comment-14946160
 ] 

Hadoop QA commented on HDFS-9182:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  28m 16s | Pre-patch trunk has 7 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   9m  8s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 47s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 17s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   4m 39s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |   0m  1s | The patch has 26  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 41s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 36s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   7m 35s | The patch appears to introduce 
742 new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests |   7m  7s | Tests failed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests | 118m 55s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 32s | Tests passed in 
hadoop-hdfs-client. |
| | | 190m 38s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-client |
|  |  Inconsistent synchronization of 
org.apache.hadoop.hdfs.DFSOutputStream.currentPacket; locked 96% of time  
Unsynchronized access at DFSStripedOutputStream.java:96% of time  
Unsynchronized access at DFSStripedOutputStream.java:[line 535] |
| Failed unit tests | hadoop.security.ssl.TestReloadingX509TrustManager |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
| Timed out tests | 
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765264/HDFSS-9182.01.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 6d5713a |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12818/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs-client.html
 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12818/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12818/artifact/patchprocess/whitespace.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12818/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-client.html
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12818/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12818/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12818/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12818/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12818/console |


This message was automatically generated.

> Cleanup the findbugs and other issues after HDFS EC merged to trunk.
> 
>
> Key: HDFS-9182
> URL: https://issues.apache.org/jira/browse/HDFS-9182
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Yi Liu
>Assignee: Uma Maheswara Rao G
>Priority: Critical
> Attachments: HDFSS-9182.00.patch, HDFSS-9182.01.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/12754/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs-client.html
> https://builds.apache.org/job/PreCommit-HDFS-Build/12754/artifact/patchprocess/patchReleaseAuditProblems.txt



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9196) TestWebHdfsContentLength fails on trunk

2015-10-06 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14946155#comment-14946155
 ] 

Masatake Iwasaki commented on HDFS-9196:


Thanks, [~liuml07]. Checkstyle and release audit warnings are not related to 
the fixed line.

> TestWebHdfsContentLength fails on trunk
> ---
>
> Key: HDFS-9196
> URL: https://issues.apache.org/jira/browse/HDFS-9196
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: Masatake Iwasaki
> Attachments: HDFS-9196.001.patch
>
>
> {quote}
> Tests run: 7, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 181.278 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.web.TestWebHdfsContentLength
> testPutOp(org.apache.hadoop.hdfs.web.TestWebHdfsContentLength)  Time elapsed: 
> 60.05 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<0> but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.hdfs.web.TestWebHdfsContentLength.testPutOp(TestWebHdfsContentLength.java:116)
> testPutOpWithRedirect(org.apache.hadoop.hdfs.web.TestWebHdfsContentLength)  
> Time elapsed: 0.01 sec  <<< FAILURE!
> org.junit.ComparisonFailure: expected:<[chunked]> but was:<[0]>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.hdfs.web.TestWebHdfsContentLength.testPutOpWithRedirect(TestWebHdfsContentLength.java:130)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9206) Inconsistent default value of dfs.datanode.stripedread.buffer.size

2015-10-06 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-9206:

Assignee: Walter Su
  Status: Patch Available  (was: Open)

> Inconsistent default value of dfs.datanode.stripedread.buffer.size
> --
>
> Key: HDFS-9206
> URL: https://issues.apache.org/jira/browse/HDFS-9206
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Walter Su
>Assignee: Walter Su
>Priority: Trivial
> Attachments: HDFS-9206.patch
>
>
> {noformat}
> DFS_DATANODE_STRIPED_READ_BUFFER_SIZE_DEFAULT = 64 * 1024;
> <property>
>   <name>dfs.datanode.stripedread.buffer.size</name>
>   <value>262144</value>
>   <description>Datanode striped read buffer size.</description>
> </property>
> {noformat}
> We previously used a 256k cellSize; the default has since been changed to 64k.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9206) Inconsistent default value of dfs.datanode.stripedread.buffer.size

2015-10-06 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-9206:

Attachment: HDFS-9206.patch

> Inconsistent default value of dfs.datanode.stripedread.buffer.size
> --
>
> Key: HDFS-9206
> URL: https://issues.apache.org/jira/browse/HDFS-9206
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Walter Su
>Priority: Trivial
> Attachments: HDFS-9206.patch
>
>
> {noformat}
> DFS_DATANODE_STRIPED_READ_BUFFER_SIZE_DEFAULT = 64 * 1024;
> <property>
>   <name>dfs.datanode.stripedread.buffer.size</name>
>   <value>262144</value>
>   <description>Datanode striped read buffer size.</description>
> </property>
> {noformat}
> We previously used a 256k cellSize; the default has since been changed to 64k.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9206) Inconsistent default value of dfs.datanode.stripedread.buffer.size

2015-10-06 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-9206:

Issue Type: Sub-task  (was: Bug)
Parent: HDFS-8031

> Inconsistent default value of dfs.datanode.stripedread.buffer.size
> --
>
> Key: HDFS-9206
> URL: https://issues.apache.org/jira/browse/HDFS-9206
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Walter Su
>Priority: Trivial
>
> {noformat}
> DFS_DATANODE_STRIPED_READ_BUFFER_SIZE_DEFAULT = 64 * 1024;
> <property>
>   <name>dfs.datanode.stripedread.buffer.size</name>
>   <value>262144</value>
>   <description>Datanode striped read buffer size.</description>
> </property>
> {noformat}
> We previously used a 256k cellSize; the default has since been changed to 64k.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9206) Inconsistent default value of dfs.datanode.stripedread.buffer.size

2015-10-06 Thread Walter Su (JIRA)
Walter Su created HDFS-9206:
---

 Summary: Inconsistent default value of 
dfs.datanode.stripedread.buffer.size
 Key: HDFS-9206
 URL: https://issues.apache.org/jira/browse/HDFS-9206
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding
Reporter: Walter Su
Priority: Trivial


{noformat}
DFS_DATANODE_STRIPED_READ_BUFFER_SIZE_DEFAULT = 64 * 1024;

<property>
  <name>dfs.datanode.stripedread.buffer.size</name>
  <value>262144</value>
  <description>Datanode striped read buffer size.</description>
</property>
{noformat}

We previously used a 256k cellSize; the default has since been changed to 64k.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8967) Create a BlockManagerLock class to represent the lock used in the BlockManager

2015-10-06 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-8967:

Attachment: HDFS-8967.003.patch

Rebased the patch.

> Create a BlockManagerLock class to represent the lock used in the BlockManager
> --
>
> Key: HDFS-8967
> URL: https://issues.apache.org/jira/browse/HDFS-8967
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-8967.000.patch, HDFS-8967.001.patch, 
> HDFS-8967.002.patch, HDFS-8967.003.patch
>
>
> This jira proposes to create a {{BlockManagerLock}} class to represent the 
> lock used in {{BlockManager}}.
> Currently it directly points to the {{FSNamesystem}} lock, so there are no 
> functionality changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9205) Do not schedule corrupted blocks for replication

2015-10-06 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HDFS-9205:
-

 Summary: Do not schedule corrupted blocks for replication
 Key: HDFS-9205
 URL: https://issues.apache.org/jira/browse/HDFS-9205
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor


Corrupted blocks are, by definition, blocks that cannot be read. As a 
consequence, they cannot be replicated.  In UnderReplicatedBlocks, there is a 
queue for QUEUE_WITH_CORRUPT_BLOCKS, and chooseUnderReplicatedBlocks may choose 
blocks from it.  Scheduling corrupted blocks for replication wastes resources 
and potentially slows down replication of the higher priority blocks.
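The proposal above can be sketched as follows (a simplified illustration, not the actual UnderReplicatedBlocks code; the queue layout and method name are ours, though QUEUE_WITH_CORRUPT_BLOCKS is the real constant's name):

```java
import java.util.ArrayList;
import java.util.List;

class ReplicationQueuesSketch {
    static final int QUEUE_WITH_CORRUPT_BLOCKS = 4;  // lowest-priority queue

    /** Pick blocks for replication from every queue except the corrupt one. */
    static List<String> chooseForReplication(List<List<String>> queues) {
        List<String> chosen = new ArrayList<>();
        for (int priority = 0; priority < queues.size(); priority++) {
            if (priority == QUEUE_WITH_CORRUPT_BLOCKS) {
                continue;  // unreadable blocks: scheduling them wastes work
            }
            chosen.addAll(queues.get(priority));
        }
        return chosen;
    }
}
```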



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9176) TestDirectoryScanner#testThrottling often fails.

2015-10-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14946115#comment-14946115
 ] 

Hadoop QA commented on HDFS-9176:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  11m 55s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |  10m 43s | There were no new javac warning 
messages. |
| {color:red}-1{color} | release audit |   0m 19s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 53s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   2m 10s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 48s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 26s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   1m 51s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  97m  4s | Tests failed in hadoop-hdfs. |
| | | 130m 14s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.web.TestWebHdfsContentLength |
| Timed out tests | org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner 
|
|   | org.apache.hadoop.hdfs.web.TestWebHDFSAcl |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765261/HDFS-9176.002.patch |
| Optional Tests | javac unit findbugs checkstyle |
| git revision | trunk / 6d5713a |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12817/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12817/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12817/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12817/console |


This message was automatically generated.

> TestDirectoryScanner#testThrottling often fails.
> 
>
> Key: HDFS-9176
> URL: https://issues.apache.org/jira/browse/HDFS-9176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Yi Liu
>Assignee: Daniel Templeton
>Priority: Minor
> Attachments: HDFS-9176.001.patch, HDFS-9176.002.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/12736/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/7732/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9181) Better handling of exceptions thrown during upgrade shutdown

2015-10-06 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9181:
--
Attachment: HDFS-9181.003.patch

Use Log.trace() instead of Log.warn().

> Better handling of exceptions thrown during upgrade shutdown
> 
>
> Key: HDFS-9181
> URL: https://issues.apache.org/jira/browse/HDFS-9181
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9181.002.patch, HDFS-9181.003.patch
>
>
> Previously in HDFS-7533, a bug was fixed by suppressing exceptions during 
> upgrade shutdown. That may be appropriate as a temporary fix, but it would be 
> better if the exception were handled in some other way.
> One way to handle it is by emitting a warning message; there may be other 
> options as well. This jira is created to discuss how to handle this case 
> better.
> Thanks to [~templedf] for bringing this up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9137) DeadLock between DataNode#refreshVolumes and BPOfferService#registrationSucceeded

2015-10-06 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14946085#comment-14946085
 ] 

Yi Liu commented on HDFS-9137:
--

I think it's OK to do the fix using this approach and update the 
{{BPOS#toString()}} in a follow-on.
The new patch looks good to me, +1 pending Jenkins, thanks Uma, Vinay, Colin. 
What do you think, [~vinayrpet], [~cmccabe]?

> DeadLock between DataNode#refreshVolumes and 
> BPOfferService#registrationSucceeded 
> --
>
> Key: HDFS-9137
> URL: https://issues.apache.org/jira/browse/HDFS-9137
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0, 2.7.1
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-9137.00.patch, 
> HDFS-9137.01-WithPreservingRootExceptions.patch
>
>
> I can see that the code flow between DataNode#refreshVolumes and 
> BPOfferService#registrationSucceeded could cause a deadlock.
> In practice the situation may be rare, since it requires a user to call 
> refreshVolumes at the same time the DN registers with the NN, but it seems 
> the issue can happen.
>  Reason for the deadlock:
>   1) refreshVolumes is called with the DN lock held, and at the end it also 
> triggers a block report. In the block report call, 
> BPServiceActor#triggerBlockReport calls toString on bpos, which takes the 
> readLock on bpos.
>  DN lock, then bpos lock.
> 2) BPOfferService#registrationSucceeded takes the writeLock on bpos and 
> calls dn.bpRegistrationSucceeded, which is again a synchronized call on the DN.
> bpos lock, then DN lock.
> So, this can clearly create a deadlock.
> I think a simple fix could be to move the triggerBlockReport call outside of 
> the DN lock; I feel that call may not really be needed inside the DN lock.
> Thoughts?
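The suggested fix direction can be sketched schematically (this is our own illustration of the lock-ordering idea, not the real DataNode code; the class and event list are hypothetical):

```java
// The refreshVolumes path used to trigger the block report while still
// holding the DataNode lock, which could invert the DN-lock -> bpos-lock
// order taken elsewhere. Moving the trigger after the synchronized section
// restores a single global lock order.
class LockOrderSketch {
    private final Object dnLock = new Object();
    final java.util.List<String> events = new java.util.ArrayList<>();

    void refreshVolumes() {
        synchronized (dnLock) {
            events.add("volumes refreshed");   // work that needs the DN lock
        }
        // Trigger the report *after* releasing the DN lock, so any bpos
        // lock taken inside it never nests under the DN lock.
        triggerBlockReport();
    }

    void triggerBlockReport() {
        events.add("block report triggered");
    }
}
```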



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9182) Cleanup the findbugs and other issues after HDFS EC merged to trunk.

2015-10-06 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14946080#comment-14946080
 ] 

Yi Liu commented on HDFS-9182:
--

The patch looks good to me too. Thanks [~umamaheswararao] and [~jingzhao].
Since the patch is straightforward, and it needs a rebase whenever a committer 
commits a new patch, how about running a local test-patch in the meantime and 
attaching the local report? If Jenkins still fails, we can refer to the local 
report and rebase/commit directly if the local test-patch succeeds.

> Cleanup the findbugs and other issues after HDFS EC merged to trunk.
> 
>
> Key: HDFS-9182
> URL: https://issues.apache.org/jira/browse/HDFS-9182
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Yi Liu
>Assignee: Uma Maheswara Rao G
>Priority: Critical
> Attachments: HDFSS-9182.00.patch, HDFSS-9182.01.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/12754/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs-client.html
> https://builds.apache.org/job/PreCommit-HDFS-Build/12754/artifact/patchprocess/patchReleaseAuditProblems.txt



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9137) DeadLock between DataNode#refreshVolumes and BPOfferService#registrationSucceeded

2015-10-06 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-9137:
--
Attachment: HDFS-9137.01-WithPreservingRootExceptions.patch

Attached the patch with a small fix over my previous patch. Generally, finally 
clauses carry the risk of suppressing root exceptions, so here I was trying to 
retain the root exception as-is. Hope this patch makes it clearer.  I had an 
offline chat with Yi to explain my thinking on this, and he agrees now.
-Thanks

> DeadLock between DataNode#refreshVolumes and 
> BPOfferService#registrationSucceeded 
> --
>
> Key: HDFS-9137
> URL: https://issues.apache.org/jira/browse/HDFS-9137
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0, 2.7.1
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-9137.00.patch, 
> HDFS-9137.01-WithPreservingRootExceptions.patch
>
>
> I can see that the code flow between DataNode#refreshVolumes and 
> BPOfferService#registrationSucceeded could cause a deadlock.
> In practice the situation may be rare, since it requires a user to call 
> refreshVolumes at the same time the DN registers with the NN, but it seems 
> the issue can happen.
>  Reason for the deadlock:
>   1) refreshVolumes is called with the DN lock held, and at the end it also 
> triggers a block report. In the block report call, 
> BPServiceActor#triggerBlockReport calls toString on bpos, which takes the 
> readLock on bpos.
>  DN lock, then bpos lock.
> 2) BPOfferService#registrationSucceeded takes the writeLock on bpos and 
> calls dn.bpRegistrationSucceeded, which is again a synchronized call on the DN.
> bpos lock, then DN lock.
> So, this can clearly create a deadlock.
> I think a simple fix could be to move the triggerBlockReport call outside of 
> the DN lock; I feel that call may not really be needed inside the DN lock.
> Thoughts?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4015) Safemode should count and report orphaned blocks

2015-10-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945989#comment-14945989
 ] 

Hadoop QA commented on HDFS-4015:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  22m 47s | Pre-patch trunk has 8 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 3 new or modified test files. |
| {color:green}+1{color} | javac |   7m 53s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 20s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 14s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   3m 44s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  2s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 39s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   7m 35s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests |   7m 33s | Tests failed in 
hadoop-common. |
| {color:green}+1{color} | yarn tests |   8m 54s | Tests passed in 
hadoop-yarn-server-nodemanager. |
| {color:red}-1{color} | hdfs tests | 216m  1s | Tests failed in hadoop-hdfs. |
| {color:red}-1{color} | hdfs tests |   0m 23s | Tests failed in 
hadoop-hdfs-client. |
| | | 287m 42s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.metrics2.impl.TestGangliaMetrics |
|   | hadoop.hdfs.tools.TestGetGroups |
|   | hadoop.hdfs.TestSafeModeWithStripedFile |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.web.TestWebHdfsContentLength |
|   | 
hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForContentSummary |
|   | hadoop.hdfs.server.namenode.TestAddStripedBlocks |
|   | hadoop.cli.TestHDFSCLI |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
| Timed out tests | org.apache.hadoop.hdfs.web.TestWebHDFSAcl |
| Failed build | hadoop-hdfs-client |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765220/HDFS-4015.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / a8b4d0f |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12814/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs-client.html
 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12814/artifact/patchprocess/trunkFindbugsWarningshadoop-yarn-server-nodemanager.html
 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12814/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12814/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-yarn-server-nodemanager test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12814/artifact/patchprocess/testrun_hadoop-yarn-server-nodemanager.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12814/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12814/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12814/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12814/console |


This message was automatically generated.

> Safemode should count and report orphaned blocks
> 
>
> Key: HDFS-4015
> URL: https://issues.apache.org/jira/browse/HDFS-4015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Anu Engineer
> Attachments: HDFS-4015.001.patch, HDFS-4015.002.patch
>
>
> The safemode status currently reports the number of unique reported blocks 
> compared to the total number of blocks referenced by the namespace. However, 
> it does not report the inverse: blocks which are reported by datanodes but 
> not referenced by the namespace.

[jira] [Comment Edited] (HDFS-4015) Safemode should count and report orphaned blocks

2015-10-06 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945939#comment-14945939
 ] 

Arpit Agarwal edited comment on HDFS-4015 at 10/6/15 10:37 PM:
---

Hi [~anu], thanks for addressing the earlier feedback. Feedback on the v2 patch.

# We will likely see blocks with future generation stamps during intentional 
HDFS rollback. We should disable this check if NN has been restarted with a 
rollback option (either regular or rolling upgrade rollback).
# I apologize for not noticing this earlier. {{FsStatus}} is tagged as public 
and stable, so changing the constructor signature is incompatible. Instead we 
could add a new constructor that initializes {{bytesInFuture}}. This will also 
avoid changes to FileSystem, ViewFS, RawLocalFileSystem.
# fsck should also print this new counter. We can do it in a separate Jira.
# Don't consider this binding, but I would really like it if {{bytesInFuture}} 
could be renamed, especially where it is exposed via public interfaces/metrics. It 
sounds confusing/ominous. {{bytesWithFutureGenerationStamps}} would be more 
precise.

Still reviewing the test cases.


was (Author: arpitagarwal):
Hi [~anu], thanks for addressing the earlier feedback. Feedback on the v2 patch.

# We will likely see blocks with future generation stamps during intentional 
HDFS rollback. We should disable this check if NN has been restarted with a 
rollback option (either regular or rolling upgrade rollback).
# I apologize for not noticing this earlier. {{FsStatus}} is tagged as public 
and stable, so changing the constructor signature is incompatible. Instead we 
could add a new constructor that initializes {{bytesInFuture}}. This will also 
avoid changes to FileSystem, ViewFS, RawLocalFileSystem.
# fsck should also print this new counter. We can do it in a separate Jira.
# Don't consider this binding, but I would really like it if {{bytesInFuture}} 
could be renamed, especially where it is exposed via public interfaces/metrics. It 
sounds confusing/ominous. {{bytesWithFutureGenerationStamps}} would be more 
precise.

Still reviewing the test cases.
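The constructor-compatibility point above can be sketched as follows. This is a 
hypothetical illustration, not the actual HDFS-4015 patch: the field set is 
simplified, and only the delegation pattern is the point. The existing public 
constructor keeps its signature and forwards a default of 0, so old callers stay 
source- and binary-compatible.

```java
// Hypothetical sketch of a compatibility-preserving constructor addition.
// Not the real org.apache.hadoop.fs.FsStatus; fields are simplified.
public class FsStatus {
  private final long capacity;
  private final long used;
  private final long remaining;
  private final long bytesInFuture; // rename under discussion in this thread

  public FsStatus(long capacity, long used, long remaining) {
    // Existing signature is unchanged; delegate with a default of 0.
    this(capacity, used, remaining, 0L);
  }

  public FsStatus(long capacity, long used, long remaining, long bytesInFuture) {
    this.capacity = capacity;
    this.used = used;
    this.remaining = remaining;
    this.bytesInFuture = bytesInFuture;
  }

  public long getCapacity() { return capacity; }
  public long getUsed() { return used; }
  public long getRemaining() { return remaining; }
  public long getBytesInFuture() { return bytesInFuture; }
}
```

Because the old constructor still exists with the same signature, classes such as 
FileSystem, ViewFS, and RawLocalFileSystem that construct the three-argument form 
need no changes.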

> Safemode should count and report orphaned blocks
> 
>
> Key: HDFS-4015
> URL: https://issues.apache.org/jira/browse/HDFS-4015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Anu Engineer
> Attachments: HDFS-4015.001.patch, HDFS-4015.002.patch
>
>
> The safemode status currently reports the number of unique reported blocks 
> compared to the total number of blocks referenced by the namespace. However, 
> it does not report the inverse: blocks which are reported by datanodes but 
> not referenced by the namespace.
> In the case that an admin accidentally starts up from an old image, this can 
> be confusing: safemode and fsck will show "corrupt files", which are the 
> files which actually have been deleted but got resurrected by restarting from 
> the old image. This will convince them that they can safely force leave 
> safemode and remove these files -- after all, they know that those files 
> should really have been deleted. However, they're not aware that leaving 
> safemode will also unrecoverably delete a bunch of other block files which 
> have been orphaned due to the namespace rollback.
> I'd like to consider reporting something like: "90 of expected 100 
> blocks have been reported. Additionally, 1 blocks have been reported 
> which do not correspond to any file in the namespace. Forcing exit of 
> safemode will unrecoverably remove those data blocks"
> Whether this statistic is also used for some kind of "inverse safe mode" is 
> the logical next step, but just reporting it as a warning seems easy enough 
> to accomplish and worth doing.
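A minimal sketch of the summary message proposed in the description above. The 
class and method names are hypothetical; in HDFS the real message is assembled 
inside the NameNode's safemode logic.

```java
// Hypothetical helper formatting the proposed safemode summary, including
// the count of reported blocks not referenced by the namespace.
public class SafemodeSummary {
  static String summarize(long reported, long expected, long orphaned) {
    StringBuilder sb = new StringBuilder();
    sb.append(reported).append(" of expected ").append(expected)
      .append(" blocks have been reported.");
    if (orphaned > 0) {
      // Warn the admin before they force-exit safemode.
      sb.append(" Additionally, ").append(orphaned)
        .append(" blocks have been reported which do not correspond to any")
        .append(" file in the namespace. Forcing exit of safemode will")
        .append(" unrecoverably remove those data blocks.");
    }
    return sb.toString();
  }
}
```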



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-4015) Safemode should count and report orphaned blocks

2015-10-06 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945939#comment-14945939
 ] 

Arpit Agarwal edited comment on HDFS-4015 at 10/6/15 10:28 PM:
---

Hi [~anu], thanks for addressing the earlier feedback. Feedback on the v2 patch.

# We will likely see blocks with future generation stamps during intentional 
HDFS rollback. We should disable this check if NN has been restarted with a 
rollback option (either regular or rolling upgrade rollback).
# I apologize for not noticing this earlier. {{FsStatus}} is tagged as public 
and stable, so changing the constructor signature is incompatible. Instead we 
could add a new constructor that initializes {{bytesInFuture}}. This will also 
avoid changes to FileSystem, ViewFS, RawLocalFileSystem.
# fsck should also print this new counter. We can do it in a separate Jira.
# Don't consider this binding, but I would really like it if {{bytesInFuture}} 
could be renamed, especially where it is exposed via public interfaces/metrics. It 
sounds confusing/ominous. {{bytesWithFutureGenerationStamps}} would be more 
precise.

Still reviewing the test cases.


was (Author: arpitagarwal):
Hi [~anu], thanks for addressing the earlier feedback. Feedback on the v2 patch.

# We will likely see blocks with future generation stamps during HDFS rollback. 
We should disable this check if NN has been restarted with a rollback option 
(either regular or rolling upgrade rollback).
# I apologize for not noticing this earlier. {{FsStatus}} is tagged as public 
and stable, so changing the constructor signature is incompatible. Instead we 
could add a new constructor that initializes {{bytesInFuture}}. This will also 
avoid changes to FileSystem, ViewFS, RawLocalFileSystem.
# fsck should also print this new counter. We can do it in a separate Jira.
# Don't consider this binding, but I would really like it if {{bytesInFuture}} 
could be renamed, especially where it is exposed via public interfaces/metrics. It 
sounds confusing/ominous. {{bytesWithFutureGenerationStamps}} would be more 
precise.

Still reviewing the test cases.

> Safemode should count and report orphaned blocks
> 
>
> Key: HDFS-4015
> URL: https://issues.apache.org/jira/browse/HDFS-4015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Anu Engineer
> Attachments: HDFS-4015.001.patch, HDFS-4015.002.patch
>
>
> The safemode status currently reports the number of unique reported blocks 
> compared to the total number of blocks referenced by the namespace. However, 
> it does not report the inverse: blocks which are reported by datanodes but 
> not referenced by the namespace.
> In the case that an admin accidentally starts up from an old image, this can 
> be confusing: safemode and fsck will show "corrupt files", which are the 
> files which actually have been deleted but got resurrected by restarting from 
> the old image. This will convince them that they can safely force leave 
> safemode and remove these files -- after all, they know that those files 
> should really have been deleted. However, they're not aware that leaving 
> safemode will also unrecoverably delete a bunch of other block files which 
> have been orphaned due to the namespace rollback.
> I'd like to consider reporting something like: "90 of expected 100 
> blocks have been reported. Additionally, 1 blocks have been reported 
> which do not correspond to any file in the namespace. Forcing exit of 
> safemode will unrecoverably remove those data blocks"
> Whether this statistic is also used for some kind of "inverse safe mode" is 
> the logical next step, but just reporting it as a warning seems easy enough 
> to accomplish and worth doing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4015) Safemode should count and report orphaned blocks

2015-10-06 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945939#comment-14945939
 ] 

Arpit Agarwal commented on HDFS-4015:
-

Hi [~anu], thanks for addressing the earlier feedback. Feedback on the v2 patch.

# We will likely see blocks with future generation stamps during HDFS rollback. 
We should disable this check if NN has been restarted with a rollback option 
(either regular or rolling upgrade rollback).
# I apologize for not noticing this earlier. {{FsStatus}} is tagged as public 
and stable, so changing the constructor signature is incompatible. Instead we 
could add a new constructor that initializes {{bytesInFuture}}. This will also 
avoid changes to FileSystem, ViewFS, RawLocalFileSystem.
# fsck should also print this new counter. We can do it in a separate Jira.
# Don't consider this binding, but I would really like it if {{bytesInFuture}} 
could be renamed, especially where it is exposed via public interfaces/metrics. It 
sounds confusing/ominous. {{bytesWithFutureGenerationStamps}} would be more 
precise.

Still reviewing the test cases.

> Safemode should count and report orphaned blocks
> 
>
> Key: HDFS-4015
> URL: https://issues.apache.org/jira/browse/HDFS-4015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Anu Engineer
> Attachments: HDFS-4015.001.patch, HDFS-4015.002.patch
>
>
> The safemode status currently reports the number of unique reported blocks 
> compared to the total number of blocks referenced by the namespace. However, 
> it does not report the inverse: blocks which are reported by datanodes but 
> not referenced by the namespace.
> In the case that an admin accidentally starts up from an old image, this can 
> be confusing: safemode and fsck will show "corrupt files", which are the 
> files which actually have been deleted but got resurrected by restarting from 
> the old image. This will convince them that they can safely force leave 
> safemode and remove these files -- after all, they know that those files 
> should really have been deleted. However, they're not aware that leaving 
> safemode will also unrecoverably delete a bunch of other block files which 
> have been orphaned due to the namespace rollback.
> I'd like to consider reporting something like: "90 of expected 100 
> blocks have been reported. Additionally, 1 blocks have been reported 
> which do not correspond to any file in the namespace. Forcing exit of 
> safemode will unrecoverably remove those data blocks"
> Whether this statistic is also used for some kind of "inverse safe mode" is 
> the logical next step, but just reporting it as a warning seems easy enough 
> to accomplish and worth doing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9181) Better handling of exceptions thrown during upgrade shutdown

2015-10-06 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945928#comment-14945928
 ] 

Daniel Templeton commented on HDFS-9181:


Since the logging message doesn't actually provide any useful information, I'd 
make it TRACE level. No reason to add to the log clutter.

> Better handling of exceptions thrown during upgrade shutdown
> 
>
> Key: HDFS-9181
> URL: https://issues.apache.org/jira/browse/HDFS-9181
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9181.002.patch
>
>
> Previously in HDFS-7533, a bug was fixed by suppressing exceptions during 
> upgrade shutdown. It may be appropriate as a temporary fix, but it would be 
> better if the exception were handled in some other way.
> One way to handle it is by emitting a warning message. There could exist 
> other ways to handle it. This jira is created to discuss how to handle this 
> case better.
> Thanks to [~templedf] for bringing this up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9181) Better handling of exceptions thrown during upgrade shutdown

2015-10-06 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9181:
--
Status: Patch Available  (was: Open)

> Better handling of exceptions thrown during upgrade shutdown
> 
>
> Key: HDFS-9181
> URL: https://issues.apache.org/jira/browse/HDFS-9181
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9181.002.patch
>
>
> Previously in HDFS-7533, a bug was fixed by suppressing exceptions during 
> upgrade shutdown. It may be appropriate as a temporary fix, but it would be 
> better if the exception were handled in some other way.
> One way to handle it is by emitting a warning message. There could exist 
> other ways to handle it. This jira is created to discuss how to handle this 
> case better.
> Thanks to [~templedf] for bringing this up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9181) Better handling of exceptions thrown during upgrade shutdown

2015-10-06 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9181:
--
Attachment: HDFS-9181.002.patch

Update the patch to fix code style issue.

> Better handling of exceptions thrown during upgrade shutdown
> 
>
> Key: HDFS-9181
> URL: https://issues.apache.org/jira/browse/HDFS-9181
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9181.002.patch
>
>
> Previously in HDFS-7533, a bug was fixed by suppressing exceptions during 
> upgrade shutdown. It may be appropriate as a temporary fix, but it would be 
> better if the exception were handled in some other way.
> One way to handle it is by emitting a warning message. There could exist 
> other ways to handle it. This jira is created to discuss how to handle this 
> case better.
> Thanks to [~templedf] for bringing this up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9181) Better handling of exceptions thrown during upgrade shutdown

2015-10-06 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9181:
--
Attachment: (was: HDFS-9181.001.patch)

> Better handling of exceptions thrown during upgrade shutdown
> 
>
> Key: HDFS-9181
> URL: https://issues.apache.org/jira/browse/HDFS-9181
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9181.002.patch
>
>
> Previously in HDFS-7533, a bug was fixed by suppressing exceptions during 
> upgrade shutdown. It may be appropriate as a temporary fix, but it would be 
> better if the exception were handled in some other way.
> One way to handle it is by emitting a warning message. There could exist 
> other ways to handle it. This jira is created to discuss how to handle this 
> case better.
> Thanks to [~templedf] for bringing this up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9204) DatanodeDescriptor#PendingReplicationWithoutTargets is wrongly calculated

2015-10-06 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-9204:

Description: This seems to be a regression caused by the merge of EC 
feature branch. 
{{DatanodeDescriptor#incrementPendingReplicationWithoutTargets}}, which is 
added by HDFS-7128 to fix a bug during DN decommission, is no longer called 
when creating ReplicationWork.  (was: This seems to be a regression caused by 
merging from EC feature branch. 
{{DatanodeDescriptor#incrementPendingReplicationWithoutTargets}}, which is 
added by HDFS-7128 to fix a bug during DN decommission, is no longer called 
when creating ReplicationWork.)

> DatanodeDescriptor#PendingReplicationWithoutTargets is wrongly calculated
> -
>
> Key: HDFS-9204
> URL: https://issues.apache.org/jira/browse/HDFS-9204
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Mingliang Liu
>
> This seems to be a regression caused by the merge of EC feature branch. 
> {{DatanodeDescriptor#incrementPendingReplicationWithoutTargets}}, which is 
> added by HDFS-7128 to fix a bug during DN decommission, is no longer called 
> when creating ReplicationWork.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9204) DatanodeDescriptor#PendingReplicationWithoutTargets is wrongly calculated

2015-10-06 Thread Jing Zhao (JIRA)
Jing Zhao created HDFS-9204:
---

 Summary: DatanodeDescriptor#PendingReplicationWithoutTargets is 
wrongly calculated
 Key: HDFS-9204
 URL: https://issues.apache.org/jira/browse/HDFS-9204
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Jing Zhao
Assignee: Mingliang Liu


This seems to be a regression caused by merging from EC feature branch. 
{{DatanodeDescriptor#incrementPendingReplicationWithoutTargets}}, which is 
added by HDFS-7128 to fix a bug during DN decommission, is no longer called 
when creating ReplicationWork.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9182) Cleanup the findbugs and other issues after HDFS EC merged to trunk.

2015-10-06 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-9182:
--
Attachment: HDFSS-9182.01.patch

Attached rebased patch. Thanks, Jing, for the review!

> Cleanup the findbugs and other issues after HDFS EC merged to trunk.
> 
>
> Key: HDFS-9182
> URL: https://issues.apache.org/jira/browse/HDFS-9182
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Yi Liu
>Assignee: Uma Maheswara Rao G
>Priority: Critical
> Attachments: HDFSS-9182.00.patch, HDFSS-9182.01.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/12754/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs-client.html
> https://builds.apache.org/job/PreCommit-HDFS-Build/12754/artifact/patchprocess/patchReleaseAuditProblems.txt



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9176) TestDirectoryScanner#testThrottling often fails.

2015-10-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945866#comment-14945866
 ] 

Hadoop QA commented on HDFS-9176:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   8m 35s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   8m 40s | There were no new javac warning 
messages. |
| {color:red}-1{color} | release audit |   0m 15s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 31s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 42s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   1m 22s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 241m  1s | Tests failed in hadoop-hdfs. |
| | | 266m 20s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestReplaceDatanodeOnFailure |
|   | hadoop.hdfs.TestRenameWhileOpen |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.web.TestWebHdfsContentLength |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
|   | hadoop.hdfs.TestRollingUpgrade |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765200/HDFS-9176.001.patch |
| Optional Tests | javac unit findbugs checkstyle |
| git revision | trunk / 874c8ed |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12812/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12812/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12812/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12812/console |


This message was automatically generated.

> TestDirectoryScanner#testThrottling often fails.
> 
>
> Key: HDFS-9176
> URL: https://issues.apache.org/jira/browse/HDFS-9176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Yi Liu
>Assignee: Daniel Templeton
>Priority: Minor
> Attachments: HDFS-9176.001.patch, HDFS-9176.002.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/12736/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/7732/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9176) TestDirectoryScanner#testThrottling often fails.

2015-10-06 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HDFS-9176:
---
Attachment: HDFS-9176.002.patch

I made one small change to (slightly) reduce the likelihood that the shutdown 
timing test will be bypassed.

> TestDirectoryScanner#testThrottling often fails.
> 
>
> Key: HDFS-9176
> URL: https://issues.apache.org/jira/browse/HDFS-9176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Yi Liu
>Assignee: Daniel Templeton
>Priority: Minor
> Attachments: HDFS-9176.001.patch, HDFS-9176.002.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/12736/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/7732/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9176) TestDirectoryScanner#testThrottling often fails.

2015-10-06 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945841#comment-14945841
 ] 

Daniel Templeton commented on HDFS-9176:


I poked at it a bit, and it's not possible to reduce the schedule delay. If it 
goes below 2s, a different test fails because there isn't enough time for the 
waiting/running ratio to stabilize.

> TestDirectoryScanner#testThrottling often fails.
> 
>
> Key: HDFS-9176
> URL: https://issues.apache.org/jira/browse/HDFS-9176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Yi Liu
>Assignee: Daniel Templeton
>Priority: Minor
> Attachments: HDFS-9176.001.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/12736/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/7732/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9180) Update excluded DataNodes in DFSStripedOutputStream based on failures in data streamers

2015-10-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945781#comment-14945781
 ] 

Hudson commented on HDFS-9180:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #461 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/461/])
HDFS-9180. Update excluded DataNodes in DFSStripedOutputStream based on (jing9: 
rev a8b4d0ff283a0af1075aaa94904d4c6e63a9a3dd)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/StripedFileTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestWriteReadStripedFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReadStripedFileWithDecoding.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedOutputStreamWithFailure.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/StripedDataStreamer.java


> Update excluded DataNodes in DFSStripedOutputStream based on failures in data 
> streamers
> ---
>
> Key: HDFS-9180
> URL: https://issues.apache.org/jira/browse/HDFS-9180
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: 3.0.0
>
> Attachments: HDFS-9180.000.patch, HDFS-9180.001.patch, 
> HDFS-9180.002.patch, HDFS-9180.003.patch
>
>
> This is a TODO in HDFS-9040: based on the failures all the striped data 
> streamers hit, the DFSStripedOutputStream should keep a record of all the 
> DataNodes that should be excluded.
> This jira will also fix several bugs in the DFSStripedOutputStream. Will 
> provide more details in the comment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9180) Update excluded DataNodes in DFSStripedOutputStream based on failures in data streamers

2015-10-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945754#comment-14945754
 ] 

Hudson commented on HDFS-9180:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #1225 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1225/])
HDFS-9180. Update excluded DataNodes in DFSStripedOutputStream based on (jing9: 
rev a8b4d0ff283a0af1075aaa94904d4c6e63a9a3dd)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/StripedDataStreamer.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/StripedFileTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedOutputStreamWithFailure.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestWriteReadStripedFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReadStripedFileWithDecoding.java


> Update excluded DataNodes in DFSStripedOutputStream based on failures in data 
> streamers
> ---
>
> Key: HDFS-9180
> URL: https://issues.apache.org/jira/browse/HDFS-9180
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: 3.0.0
>
> Attachments: HDFS-9180.000.patch, HDFS-9180.001.patch, 
> HDFS-9180.002.patch, HDFS-9180.003.patch
>
>
> This is a TODO in HDFS-9040: based on the failures all the striped data 
> streamers hit, the DFSStripedOutputStream should keep a record of all the 
> DataNodes that should be excluded.
> This jira will also fix several bugs in the DFSStripedOutputStream. Will 
> provide more details in the comment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9181) Better handling of exceptions thrown during upgrade shutdown

2015-10-06 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9181:
--
Attachment: HDFS-9181.001.patch

Adding a small patch to catch an Exception instead of a Throwable, and output a 
message.
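A minimal sketch of the approach described in this comment: catch Exception 
(rather than Throwable) during upgrade shutdown and log it instead of 
suppressing it. All names here are illustrative, not the actual HDFS-9181 
patch, and java.util.logging stands in for Hadoop's logging; FINEST roughly 
corresponds to the TRACE level suggested elsewhere in this thread.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Hypothetical shutdown helper: Errors still propagate, but checked and
// runtime exceptions are logged rather than silently swallowed.
public class UpgradeShutdown {
  private static final Logger LOG =
      Logger.getLogger(UpgradeShutdown.class.getName());

  static boolean shutDownForUpgrade(Runnable stopTask) {
    try {
      stopTask.run();
      return true;
    } catch (Exception e) {
      // Catch Exception, not Throwable: log at a low level so routine
      // shutdown output stays uncluttered, and report the failure.
      LOG.log(Level.FINEST, "Exception during upgrade shutdown", e);
      return false;
    }
  }
}
```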

> Better handling of exceptions thrown during upgrade shutdown
> 
>
> Key: HDFS-9181
> URL: https://issues.apache.org/jira/browse/HDFS-9181
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9181.001.patch
>
>
> Previously in HDFS-7533, a bug was fixed by suppressing exceptions during 
> upgrade shutdown. It may be appropriate as a temporary fix, but it would be 
> better if the exception were handled in some other way.
> One way to handle it is by emitting a warning message. There could exist 
> other ways to handle it. This jira is created to discuss how to handle this 
> case better.
> Thanks to [~templedf] for bringing this up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9180) Update excluded DataNodes in DFSStripedOutputStream based on failures in data streamers

2015-10-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945728#comment-14945728
 ] 

Hudson commented on HDFS-9180:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2400 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2400/])
HDFS-9180. Update excluded DataNodes in DFSStripedOutputStream based on (jing9: 
rev a8b4d0ff283a0af1075aaa94904d4c6e63a9a3dd)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestWriteReadStripedFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedOutputStreamWithFailure.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/StripedDataStreamer.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReadStripedFileWithDecoding.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/StripedFileTestUtil.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java


> Update excluded DataNodes in DFSStripedOutputStream based on failures in data 
> streamers
> ---
>
> Key: HDFS-9180
> URL: https://issues.apache.org/jira/browse/HDFS-9180
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: 3.0.0
>
> Attachments: HDFS-9180.000.patch, HDFS-9180.001.patch, 
> HDFS-9180.002.patch, HDFS-9180.003.patch
>
>
> This is a TODO in HDFS-9040: based on the failures all the striped data 
> streamers hit, the DFSStripedOutputStream should keep a record of all the 
> DataNodes that should be excluded.
> This jira will also fix several bugs in the DFSStripedOutputStream. Will 
> provide more details in the comment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9180) Update excluded DataNodes in DFSStripedOutputStream based on failures in data streamers

2015-10-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945674#comment-14945674
 ] 

Hudson commented on HDFS-9180:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #495 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/495/])
HDFS-9180. Update excluded DataNodes in DFSStripedOutputStream based on (jing9: 
rev a8b4d0ff283a0af1075aaa94904d4c6e63a9a3dd)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReadStripedFileWithDecoding.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestWriteReadStripedFile.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/StripedDataStreamer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedOutputStreamWithFailure.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/StripedFileTestUtil.java


> Update excluded DataNodes in DFSStripedOutputStream based on failures in data 
> streamers
> ---
>
> Key: HDFS-9180
> URL: https://issues.apache.org/jira/browse/HDFS-9180
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: 3.0.0
>
> Attachments: HDFS-9180.000.patch, HDFS-9180.001.patch, 
> HDFS-9180.002.patch, HDFS-9180.003.patch
>
>
> This is a TODO in HDFS-9040: based on the failures all the striped data 
> streamers hit, the DFSStripedOutputStream should keep a record of all the 
> DataNodes that should be excluded.
> This jira will also fix several bugs in the DFSStripedOutputStream. Will 
> provide more details in the comment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9196) TestWebHdfsContentLength fails on trunk

2015-10-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945668#comment-14945668
 ] 

Hadoop QA commented on HDFS-9196:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  19m 44s | Pre-patch trunk has 7 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   8m 54s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 21s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 17s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 47s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 40s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 36s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 16s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 31s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests |   0m 30s | Tests passed in 
hadoop-hdfs-client. |
| | |  50m 42s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765020/HDFS-9196.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 29a582a |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12815/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs-client.html
 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12815/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12815/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12815/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12815/console |


This message was automatically generated.

> TestWebHdfsContentLength fails on trunk
> ---
>
> Key: HDFS-9196
> URL: https://issues.apache.org/jira/browse/HDFS-9196
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: Masatake Iwasaki
> Attachments: HDFS-9196.001.patch
>
>
> {quote}
> Tests run: 7, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 181.278 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.web.TestWebHdfsContentLength
> testPutOp(org.apache.hadoop.hdfs.web.TestWebHdfsContentLength)  Time elapsed: 
> 60.05 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<0> but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.hdfs.web.TestWebHdfsContentLength.testPutOp(TestWebHdfsContentLength.java:116)
> testPutOpWithRedirect(org.apache.hadoop.hdfs.web.TestWebHdfsContentLength)  
> Time elapsed: 0.01 sec  <<< FAILURE!
> org.junit.ComparisonFailure: expected:<[chunked]> but was:<[0]>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.hdfs.web.TestWebHdfsContentLength.testPutOpWithRedirect(TestWebHdfsContentLength.java:130)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9140) Discover conf parameters that need no NN/DN restart to make changes effective

2015-10-06 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-9140:

Summary: Discover conf parameters that need no NN/DN restart to make 
changes effective  (was: Discover conf parameters that need NN/DN restart to 
make changes effective)

> Discover conf parameters that need no NN/DN restart to make changes effective
> -
>
> Key: HDFS-9140
> URL: https://issues.apache.org/jira/browse/HDFS-9140
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> This JIRA is to identify the parameters that require an NN/DN restart to make 
> changes effective; the remaining parameters can be reconfigured via the admin 
> facility API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4882) Prevent the Namenode's LeaseManager from looping forever in checkLeases

2015-10-06 Thread Venkata Ganji (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945658#comment-14945658
 ] 

Venkata Ganji commented on HDFS-4882:
-

Hello [~yzhangal], [~raviprak], can you point me to a procedure for 
reproducing this issue at the physical cluster level, please?

> Prevent the Namenode's LeaseManager from looping forever in checkLeases
> ---
>
> Key: HDFS-4882
> URL: https://issues.apache.org/jira/browse/HDFS-4882
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client, namenode
>Affects Versions: 2.0.0-alpha, 2.5.1
>Reporter: Zesheng Wu
>Assignee: Ravi Prakash
>Priority: Critical
>  Labels: 2.6.1-candidate
> Fix For: 2.6.1
>
> Attachments: 4882.1.patch, 4882.patch, 4882.patch, HDFS-4882.1.patch, 
> HDFS-4882.2.patch, HDFS-4882.3.patch, HDFS-4882.4.patch, HDFS-4882.5.patch, 
> HDFS-4882.6.patch, HDFS-4882.7.patch, HDFS-4882.patch
>
>
> Scenario:
> 1. cluster with 4 DNs
> 2. the size of the file to be written is a little more than one block
> 3. write the first block to 3 DNs, DN1->DN2->DN3
> 4. all the data packets of the first block are successfully acked and the 
> client sets the pipeline stage to PIPELINE_CLOSE, but the last packet isn't 
> sent out
> 5. DN2 and DN3 are down
> 6. client recovers the pipeline, but no new DN is added to the pipeline 
> because the current pipeline stage is PIPELINE_CLOSE
> 7. client continuously writes the last block, and tries to close the file 
> after writing all the data
> 8. NN finds that the penultimate block doesn't have enough replicas (our 
> dfs.namenode.replication.min=2), the client's close runs into an indefinite 
> loop (HDFS-2936), and at the same time NN sets the last block's state to 
> COMPLETE
> 9. shutdown the client
> 10. the file's lease exceeds the hard limit
> 11. LeaseManager realizes that and begins lease recovery by calling 
> fsnamesystem.internalReleaseLease()
> 12. but the last block's state is COMPLETE, and this triggers the lease 
> manager's infinite loop and prints massive logs like this:
> {noformat}
> 2013-06-05,17:42:25,695 INFO 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: Lease [Lease.  Holder: 
> DFSClient_NONMAPREDUCE_-1252656407_1, pendingcreates: 1] has expired hard
>  limit
> 2013-06-05,17:42:25,695 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering lease=[Lease. 
>  Holder: DFSClient_NONMAPREDUCE_-1252656407_1, pendingcreates: 1], src=
> /user/h_wuzesheng/test.dat
> 2013-06-05,17:42:25,695 WARN org.apache.hadoop.hdfs.StateChange: DIR* 
> NameSystem.internalReleaseLease: File = /user/h_wuzesheng/test.dat, block 
> blk_-7028017402720175688_1202597,
> lastBLockState=COMPLETE
> 2013-06-05,17:42:25,695 INFO 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: Started block recovery 
> for file /user/h_wuzesheng/test.dat lease [Lease.  Holder: DFSClient_NONM
> APREDUCE_-1252656407_1, pendingcreates: 1]
> {noformat}
> (the 3rd line log is a debug log added by us)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9180) Update excluded DataNodes in DFSStripedOutputStream based on failures in data streamers

2015-10-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945605#comment-14945605
 ] 

Hudson commented on HDFS-9180:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2430 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2430/])
HDFS-9180. Update excluded DataNodes in DFSStripedOutputStream based on (jing9: 
rev a8b4d0ff283a0af1075aaa94904d4c6e63a9a3dd)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/StripedFileTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedOutputStreamWithFailure.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReadStripedFileWithDecoding.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestWriteReadStripedFile.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/StripedDataStreamer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedOutputStream.java


> Update excluded DataNodes in DFSStripedOutputStream based on failures in data 
> streamers
> ---
>
> Key: HDFS-9180
> URL: https://issues.apache.org/jira/browse/HDFS-9180
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: 3.0.0
>
> Attachments: HDFS-9180.000.patch, HDFS-9180.001.patch, 
> HDFS-9180.002.patch, HDFS-9180.003.patch
>
>
> This is a TODO in HDFS-9040: based on the failures all the striped data 
> streamers hit, the DFSStripedOutputStream should keep a record of all the 
> DataNodes that should be excluded.
> This jira will also fix several bugs in the DFSStripedOutputStream. Will 
> provide more details in the comment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9000) Allow reconfiguration without restart for parameters where applicable.

2015-10-06 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945577#comment-14945577
 ] 

Colin Patrick McCabe commented on HDFS-9000:


[~wheat9], I agree that supporting reconfiguration for every key would be 
infeasible.  The approach we've used in the DataNode is to make only certain 
configuration keys reconfigurable.  It seems to have worked pretty well.  Given 
that we have multi-minute NameNode startup times in some cases, restarting the 
NameNode doesn't seem like a viable option here.
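
The whitelist approach described above (only certain keys reconfigurable, the rest requiring a restart) can be sketched as below. This is a hypothetical simplification, not the actual Hadoop {{ReconfigurableBase}} API; the class name and key names are illustrative.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch: only whitelisted keys may be changed at runtime;
// all other keys require a daemon restart to take effect.
public class ReconfigSketch {
    // Hypothetical whitelist; real reconfigurable keys are defined per daemon.
    static final Set<String> RECONFIGURABLE = Set.of(
        "dfs.datanode.data.dir",
        "dfs.datanode.balance.bandwidthPerSec");

    private final Map<String, String> conf = new ConcurrentHashMap<>();

    // Returns true if the change was applied in place,
    // false if the caller must restart the daemon instead.
    public boolean reconfigure(String key, String value) {
        if (!RECONFIGURABLE.contains(key)) {
            return false; // not hot-swappable
        }
        conf.put(key, value);
        return true;
    }

    public String get(String key) {
        return conf.get(key);
    }
}
```

A reconfiguration request for a non-whitelisted key simply reports that a restart is needed, which keeps the hot path small and auditable.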

> Allow reconfiguration without restart for parameters where applicable.
> --
>
> Key: HDFS-9000
> URL: https://issues.apache.org/jira/browse/HDFS-9000
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> Many parameters can be re-configured without requiring a restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9129) Move the safemode block count into BlockManager

2015-10-06 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945578#comment-14945578
 ] 

Jing Zhao commented on HDFS-9129:
-

The patch looks good overall. Some early comments:
# The jira currently divides the original safemode into several different 
parts: manual safemode, safemode caused by low resources, and safemode related 
to blocks and datanodes. It will be helpful for others to understand the change 
if you can post a more detailed design of the approach.
# Instead of dumping all the SafeModeInfo details directly into BlockManager, 
it's better to have a standalone class for SafeModeInfo in the blockmanagement 
package.
# The {{FSNamesystem#leaveSafeMode}} method can delegate the operation to the 
BlockManager's safemode instead of directly operating on BlockManager's 
internal state.
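
The delegation suggested in point 3 might look roughly like the sketch below. The class shapes here are hypothetical simplifications, not the actual FSNamesystem/BlockManager code.

```java
// Sketch of the suggested delegation: FSNamesystem asks the BlockManager's
// safemode object to leave safemode rather than mutating BlockManager
// internals directly. All names are illustrative.
public class SafeModeSketch {
    static class BlockManagerSafeMode {
        private boolean inSafeMode = true;
        void leave() { inSafeMode = false; }   // all state changes stay here
        boolean isInSafeMode() { return inSafeMode; }
    }

    static class BlockManager {
        private final BlockManagerSafeMode safeMode = new BlockManagerSafeMode();
        BlockManagerSafeMode getSafeMode() { return safeMode; }
    }

    static class FSNamesystem {
        private final BlockManager blockManager = new BlockManager();
        // Delegates; no direct manipulation of BlockManager internals.
        void leaveSafeMode() { blockManager.getSafeMode().leave(); }
        boolean isInSafeMode() { return blockManager.getSafeMode().isInSafeMode(); }
    }
}
```

Keeping every safemode state transition inside one class makes the invariants easier to reason about than scattering them across callers.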

> Move the safemode block count into BlockManager
> ---
>
> Key: HDFS-9129
> URL: https://issues.apache.org/jira/browse/HDFS-9129
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Mingliang Liu
> Attachments: HDFS-9129.000.patch, HDFS-9129.001.patch
>
>
> The {{SafeMode}} needs to track whether there are enough blocks so that the 
> NN can get out of the safemode. These fields can moved to the 
> {{BlockManager}} class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9196) TestWebHdfsContentLength fails on trunk

2015-10-06 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9196:

Status: Patch Available  (was: Open)

> TestWebHdfsContentLength fails on trunk
> ---
>
> Key: HDFS-9196
> URL: https://issues.apache.org/jira/browse/HDFS-9196
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: Masatake Iwasaki
> Attachments: HDFS-9196.001.patch
>
>
> {quote}
> Tests run: 7, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 181.278 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.web.TestWebHdfsContentLength
> testPutOp(org.apache.hadoop.hdfs.web.TestWebHdfsContentLength)  Time elapsed: 
> 60.05 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<0> but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.hdfs.web.TestWebHdfsContentLength.testPutOp(TestWebHdfsContentLength.java:116)
> testPutOpWithRedirect(org.apache.hadoop.hdfs.web.TestWebHdfsContentLength)  
> Time elapsed: 0.01 sec  <<< FAILURE!
> org.junit.ComparisonFailure: expected:<[chunked]> but was:<[0]>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.hdfs.web.TestWebHdfsContentLength.testPutOpWithRedirect(TestWebHdfsContentLength.java:130)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-4015) Safemode should count and report orphaned blocks

2015-10-06 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-4015:
---
Attachment: HDFS-4015.002.patch

Rebased the patch to the top of the tree, using the same patch number.

> Safemode should count and report orphaned blocks
> 
>
> Key: HDFS-4015
> URL: https://issues.apache.org/jira/browse/HDFS-4015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Anu Engineer
> Attachments: HDFS-4015.001.patch, HDFS-4015.002.patch
>
>
> The safemode status currently reports the number of unique reported blocks 
> compared to the total number of blocks referenced by the namespace. However, 
> it does not report the inverse: blocks which are reported by datanodes but 
> not referenced by the namespace.
> In the case that an admin accidentally starts up from an old image, this can 
> be confusing: safemode and fsck will show "corrupt files", which are the 
> files which actually have been deleted but got resurrected by restarting from 
> the old image. This will convince them that they can safely force leave 
> safemode and remove these files -- after all, they know that those files 
> should really have been deleted. However, they're not aware that leaving 
> safemode will also unrecoverably delete a bunch of other block files which 
> have been orphaned due to the namespace rollback.
> I'd like to consider reporting something like: "90 of expected 100 
> blocks have been reported. Additionally, 1 blocks have been reported 
> which do not correspond to any file in the namespace. Forcing exit of 
> safemode will unrecoverably remove those data blocks"
> Whether this statistic is also used for some kind of "inverse safe mode" is 
> the logical next step, but just reporting it as a warning seems easy enough 
> to accomplish and worth doing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9180) Update excluded DataNodes in DFSStripedOutputStream based on failures in data streamers

2015-10-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945538#comment-14945538
 ] 

Hudson commented on HDFS-9180:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #486 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/486/])
HDFS-9180. Update excluded DataNodes in DFSStripedOutputStream based on (jing9: 
rev a8b4d0ff283a0af1075aaa94904d4c6e63a9a3dd)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestWriteReadStripedFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReadStripedFileWithDecoding.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedOutputStreamWithFailure.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/StripedFileTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/StripedDataStreamer.java


> Update excluded DataNodes in DFSStripedOutputStream based on failures in data 
> streamers
> ---
>
> Key: HDFS-9180
> URL: https://issues.apache.org/jira/browse/HDFS-9180
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: 3.0.0
>
> Attachments: HDFS-9180.000.patch, HDFS-9180.001.patch, 
> HDFS-9180.002.patch, HDFS-9180.003.patch
>
>
> This is a TODO in HDFS-9040: based on the failures all the striped data 
> streamers hit, the DFSStripedOutputStream should keep a record of all the 
> DataNodes that should be excluded.
> This jira will also fix several bugs in the DFSStripedOutputStream. Will 
> provide more details in the comment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9114) NameNode and DataNode metric log file name should follow the other log file name format.

2015-10-06 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945516#comment-14945516
 ] 

Allen Wittenauer commented on HDFS-9114:


I've brought up a discussion on hdfs-dev@ since this covers a bunch of diverse 
JIRAs.

Depending upon what sort of consensus is reached, I have different feedback for 
different scenarios.

> NameNode and DataNode metric log file name should follow the other log file 
> name format.
> 
>
> Key: HDFS-9114
> URL: https://issues.apache.org/jira/browse/HDFS-9114
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-9114-branch-2.01.patch, 
> HDFS-9114-branch-2.02.patch, HDFS-9114-trunk.01.patch, 
> HDFS-9114-trunk.02.patch
>
>
> Currently datanode and namenode metric log file name is 
> {{datanode-metrics.log}} and {{namenode-metrics.log}}.
> This file name should be like {{hadoop-hdfs-namenode-metric-host192.log}} 
> same as namenode log file {{hadoop-hdfs-namenode-host192.log}}.
> This will help when we will copy log for issue analysis from different node.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9202) Deprecation cases are not handled in hadoop-hdfs-client#DistributedFileSystem

2015-10-06 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-9202:

Attachment: HDFS-9202.01.patch

Attached a patch to add deprecated keys for the client in 
HdfsConfigurationLoader.

Added a new test class, since the loading happens in a static block, the test 
needs to run in a forked JVM to verify it.

Please review.
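
The forked-JVM requirement can be seen with a minimal static-initializer example: once a class is loaded, its static block never runs again in that JVM, so a test exercising the load-time behavior itself must start in a fresh JVM. The class below is purely illustrative.

```java
// Minimal illustration of why static-block loading is only testable in a
// fresh (forked) JVM: the block runs exactly once per class load.
public class StaticLoadSketch {
    static int loadCount = 0;
    static {
        loadCount++; // simulates e.g. deprecated-key registration at load time
    }
    public static int getLoadCount() {
        return loadCount;
    }
}
```

No matter how many times a test calls {{getLoadCount()}}, the count stays at 1 within one JVM, which is why test frameworks fork a new JVM to re-exercise static initialization.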

> Deprecation cases are not handled in hadoop-hdfs-client#DistributedFileSystem
> -
>
> Key: HDFS-9202
> URL: https://issues.apache.org/jira/browse/HDFS-9202
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Bibin A Chundatt
>Assignee: Vinayakumar B
>Priority: Critical
> Attachments: HDFS-9202.01.patch
>
>
> Deprecated keys are not taken care of in 
> hadoop-hdfs-client#DistributedFileSystem.
> Client-side deprecated keys are not usable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9202) Deprecation cases are not handled in hadoop-hdfs-client#DistributedFileSystem

2015-10-06 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-9202:

Status: Patch Available  (was: Open)

> Deprecation cases are not handled in hadoop-hdfs-client#DistributedFileSystem
> -
>
> Key: HDFS-9202
> URL: https://issues.apache.org/jira/browse/HDFS-9202
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Bibin A Chundatt
>Assignee: Vinayakumar B
>Priority: Critical
> Attachments: HDFS-9202.01.patch
>
>
> Deprecated keys are not taken care of in 
> hadoop-hdfs-client#DistributedFileSystem.
> Client-side deprecated keys are not usable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9199) rename dfs.namenode.replication.min to dfs.replication.min

2015-10-06 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945497#comment-14945497
 ] 

Mingliang Liu commented on HDFS-9199:
-

# The name {{dfs.namenode.replication.min}} was chosen deliberately. 
{{dfs.namenode.replication.min}} was designed to replace the 
{{dfs.replication.min}} config key in {{HdfsConfiguration.java}}
{code}
new DeprecationDelta("dfs.replication.min",
    DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY),
{code}
For this reason, {{dfs.namenode.replication.min}} is preferred to 
{{dfs.replication.min}}.
# Though both {{dfs.namenode.replication.min}} and {{dfs.replication.max}} are 
replication-count thresholds, they are used in different scenarios. 
{{dfs.replication.max}} is a general-purpose, system-wide hard limit on 
maximum replication, while {{dfs.namenode.replication.min}} specifies the 
minimum number of copies required, without which a write is disallowed. The 
latter is valid only in namenode scope, so the _namenode_ prefix is just fine.

For those two reasons, I think we don't need to rename 
{{dfs.namenode.replication.min}} to {{dfs.replication.min}}.
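
The DeprecationDelta mechanism referenced above redirects reads and writes of an old key to its replacement. A stripped-down sketch of that lookup (not the real Hadoop {{Configuration}} API) is:

```java
import java.util.HashMap;
import java.util.Map;

// Stripped-down sketch of config key deprecation: accesses through an old
// key are redirected to the new key. Illustrative only, not the Hadoop API.
public class DeprecationSketch {
    private final Map<String, String> values = new HashMap<>();
    private final Map<String, String> deprecations = new HashMap<>();

    public DeprecationSketch() {
        // mirrors the DeprecationDelta quoted above
        deprecations.put("dfs.replication.min", "dfs.namenode.replication.min");
    }

    public void set(String key, String value) {
        values.put(deprecations.getOrDefault(key, key), value);
    }

    public String get(String key) {
        return values.get(deprecations.getOrDefault(key, key));
    }
}
```

Setting the deprecated key and reading the new one (or vice versa) yields the same value, which is why old configs keep working after a rename.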

> rename dfs.namenode.replication.min to dfs.replication.min
> --
>
> Key: HDFS-9199
> URL: https://issues.apache.org/jira/browse/HDFS-9199
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>Assignee: Mingliang Liu
>
> dfs.namenode.replication.min should be dfs.replication.min to match the other 
> dfs.replication config knobs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9180) Update excluded DataNodes in DFSStripedOutputStream based on failures in data streamers

2015-10-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945479#comment-14945479
 ] 

Hudson commented on HDFS-9180:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8576 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8576/])
HDFS-9180. Update excluded DataNodes in DFSStripedOutputStream based on (jing9: 
rev a8b4d0ff283a0af1075aaa94904d4c6e63a9a3dd)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedOutputStreamWithFailure.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/StripedFileTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestWriteReadStripedFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReadStripedFileWithDecoding.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/StripedDataStreamer.java


> Update excluded DataNodes in DFSStripedOutputStream based on failures in data 
> streamers
> ---
>
> Key: HDFS-9180
> URL: https://issues.apache.org/jira/browse/HDFS-9180
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: 3.0.0
>
> Attachments: HDFS-9180.000.patch, HDFS-9180.001.patch, 
> HDFS-9180.002.patch, HDFS-9180.003.patch
>
>
> This is a TODO in HDFS-9040: based on the failures all the striped data 
> streamers hit, the DFSStripedOutputStream should keep a record of all the 
> DataNodes that should be excluded.
> This jira will also fix several bugs in the DFSStripedOutputStream. Will 
> provide more details in the comment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9182) Cleanup the findbugs and other issues after HDFS EC merged to trunk.

2015-10-06 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945478#comment-14945478
 ] 

Jing Zhao commented on HDFS-9182:
-

Thanks for all the fixes, [~umamaheswararao]! The patch may need some 
rebasing. Other than that it looks good to me.

> Cleanup the findbugs and other issues after HDFS EC merged to trunk.
> 
>
> Key: HDFS-9182
> URL: https://issues.apache.org/jira/browse/HDFS-9182
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Yi Liu
>Assignee: Uma Maheswara Rao G
>Priority: Critical
> Attachments: HDFSS-9182.00.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/12754/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs-client.html
> https://builds.apache.org/job/PreCommit-HDFS-Build/12754/artifact/patchprocess/patchReleaseAuditProblems.txt



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9176) TestDirectoryScanner#testThrottling often fails.

2015-10-06 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945468#comment-14945468
 ] 

Daniel Templeton commented on HDFS-9176:


Once the block scanner has run, there's no point in waiting any more.  The 
threads that shutdown() would stop are already stopped.

I have the throttle set to 1ms, which means that only one directory will be 
scanned per second.  The idea is to slow the process down as much as possible.  
I'm guessing that there are sometimes fewer directories than 2x the number of 
threads, resulting in the scan completing early.  There's no way to slow it 
down any more than that.

I suppose one additional way to increase the likelihood of having the 
shutdown() thread run would be to reduce the scheduling delay to 1s or 500ms.  
Still no guarantee, but it would increase the odds.
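
The kind of throttle described above, which limits how much of each second the scanner may run, can be sketched as follows. This is a hypothetical simplification of the test's throttle, with illustrative names.

```java
// Sketch of a run-time throttle: the scanner is allowed to work only
// `limitMs` out of every 1000ms period. With limitMs = 1 the scanner does
// almost nothing each second, slowing the scan to a crawl.
public class ThrottleSketch {
    private final long limitMs;

    public ThrottleSketch(long limitMs) {
        this.limitMs = limitMs;
    }

    // True if the scanner may keep working at `nowMs`,
    // measured from the start of its first 1000ms period.
    public boolean mayRun(long periodStartMs, long nowMs) {
        return (nowMs - periodStartMs) % 1000 < limitMs;
    }
}
```

With a 1ms budget the scanner is runnable for only the first millisecond of each second, which is the "slow it down as much as possible" effect described above.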

> TestDirectoryScanner#testThrottling often fails.
> 
>
> Key: HDFS-9176
> URL: https://issues.apache.org/jira/browse/HDFS-9176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Yi Liu
>Assignee: Daniel Templeton
>Priority: Minor
> Attachments: HDFS-9176.001.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/12736/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/7732/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9180) Update excluded DataNodes in DFSStripedOutputStream based on failures in data streamers

2015-10-06 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-9180:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Thanks again for the review, [~hitliuyi] and [~walter.k.su]! I've committed the 
patch to trunk.

> Update excluded DataNodes in DFSStripedOutputStream based on failures in data 
> streamers
> ---
>
> Key: HDFS-9180
> URL: https://issues.apache.org/jira/browse/HDFS-9180
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: 3.0.0
>
> Attachments: HDFS-9180.000.patch, HDFS-9180.001.patch, 
> HDFS-9180.002.patch, HDFS-9180.003.patch
>
>
> This is a TODO in HDFS-9040: based on the failures all the striped data 
> streamers hit, the DFSStripedOutputStream should keep a record of all the 
> DataNodes that should be excluded.
> This jira will also fix several bugs in the DFSStripedOutputStream. Will 
> provide more details in the comment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-9176) TestDirectoryScanner#testThrottling often fails.

2015-10-06 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945448#comment-14945448
 ] 

Lei (Eddy) Xu edited comment on HDFS-9176 at 10/6/15 5:47 PM:
--

[~templedf] thanks a lot for the patch.

One question:

{code}
// If the scan didn't complete before the shutdown was run, check
// that the shutdown was timely
if (finalMs > 0) {
  LOG.info("Scanner took " + (Time.monotonicNow() - finalMs)
      + "ms to shutdown");
  assertTrue("Scanner took too long to shutdown",
      Time.monotonicNow() - finalMs < 1000L);
}
{code}

Because {{masterThread}} and {{reportCompileThreadPool}} will be terminated 
within 1 minute, should we wait and retry several times here so that the above 
code has a better chance of being executed? 


was (Author: eddyxu):
[~templedf] thanks a lot for the patch.

One question:

{code}
// If the scan didn't complete before the shutdown was run, check
// that the shutdown was timely
if (finalMs > 0) {
  LOG.info("Scanner took " + (Time.monotonicNow() - finalMs)
      + "ms to shutdown");
  assertTrue("Scanner took too long to shutdown",
      Time.monotonicNow() - finalMs < 1000L);
}
{code}

Because {{masterThread}} and {{reportCompileThreadPool}} will be terminated 
within 1 minute, should we wait and retry several times here so that the above 
code has a better chance of being executed? 

> TestDirectoryScanner#testThrottling often fails.
> 
>
> Key: HDFS-9176
> URL: https://issues.apache.org/jira/browse/HDFS-9176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Yi Liu
>Assignee: Daniel Templeton
>Priority: Minor
> Attachments: HDFS-9176.001.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/12736/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/7732/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9176) TestDirectoryScanner#testThrottling often fails.

2015-10-06 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945448#comment-14945448
 ] 

Lei (Eddy) Xu commented on HDFS-9176:
-

[~templedf] thanks a lot for the patch.

One question:

{code}
// If the scan didn't complete before the shutdown was run, check
// that the shutdown was timely
if (finalMs > 0) {
  LOG.info("Scanner took " + (Time.monotonicNow() - finalMs)
      + "ms to shutdown");
  assertTrue("Scanner took too long to shutdown",
      Time.monotonicNow() - finalMs < 1000L);
}
{code}

Because {{masterThread}} and {{reportCompileThreadPool}} will be terminated 
within 1 minute, should we wait and retry several times here so that the above 
code has a better chance of being executed? 
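The wait-and-retry being asked about could look roughly like the loop below (a sketch with made-up polling bounds, not code from the patch): poll the condition every interval until it holds or the attempts run out.

```java
import java.util.function.BooleanSupplier;

public class RetryCheck {

  /**
   * Polls {@code condition} every {@code intervalMs}, up to
   * {@code maxAttempts} times. Returns true as soon as the condition
   * holds, false if it never does (or the thread is interrupted).
   */
  static boolean waitFor(BooleanSupplier condition,
                         long intervalMs, int maxAttempts) {
    for (int attempt = 0; attempt < maxAttempts; attempt++) {
      if (condition.getAsBoolean()) {
        return true;
      }
      try {
        Thread.sleep(intervalMs);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        return false;
      }
    }
    return false;
  }

  public static void main(String[] args) {
    long start = System.nanoTime();
    // Simulate a shutdown that completes after roughly 200ms.
    boolean ok = waitFor(
        () -> System.nanoTime() - start > 200_000_000L, 50, 100);
    System.out.println("condition met: " + ok);
  }
}
```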

> TestDirectoryScanner#testThrottling often fails.
> 
>
> Key: HDFS-9176
> URL: https://issues.apache.org/jira/browse/HDFS-9176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Yi Liu
>Assignee: Daniel Templeton
>Priority: Minor
> Attachments: HDFS-9176.001.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/12736/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/7732/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8850) VolumeScanner thread exits with exception if there is no block pool to be scanned but there are suspicious blocks

2015-10-06 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945437#comment-14945437
 ] 

Colin Patrick McCabe commented on HDFS-8850:


I think this makes sense for 2.7.2.  Thanks, all.

> VolumeScanner thread exits with exception if there is no block pool to be 
> scanned but there are suspicious blocks
> -
>
> Key: HDFS-8850
> URL: https://issues.apache.org/jira/browse/HDFS-8850
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: 3.0.0, 2.7.2
>
> Attachments: HDFS-8850.001.patch
>
>
> The VolumeScanner threads inside the BlockScanner exit with an exception if 
> there is no block pool to be scanned but there are suspicious blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9202) Deprecation cases are not handled in hadoop-hdfs-client#DistributedFileSystem

2015-10-06 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945427#comment-14945427
 ] 

Mingliang Liu commented on HDFS-9202:
-

Thanks for filing this. [HDFS-8740] was not aware of the deprecated keys. I 
agree with [~vinayrpet] that we may need to add the deprecated keys that relate 
only to clients in the client module.

As {{HdfsConfigurationLoader}} is a replacement for {{HdfsConfiguration}} to 
load default resources, I think we can add the deprecated keys there, following 
the paradigm in {{HdfsConfiguration}}. {{HdfsConfiguration}} itself should not 
be moved to the {{hadoop-hdfs-client}} module, IMHO, as it also registers 
deprecated keys on the server side.
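To illustrate what registering deprecated keys buys the client (a toy model only, not the real hadoop-common {{Configuration}} deprecation API): reads and writes through an old key are transparently redirected to its replacement, so configs written against the old name keep working.

```java
import java.util.HashMap;
import java.util.Map;

public class DeprecatedKeys {
  // Maps deprecated key -> current key, mimicking what a loader like
  // HdfsConfigurationLoader would register at class-load time.
  private static final Map<String, String> DEPRECATIONS = new HashMap<>();

  private final Map<String, String> props = new HashMap<>();

  static void addDeprecation(String oldKey, String newKey) {
    DEPRECATIONS.put(oldKey, newKey);
  }

  void set(String key, String value) {
    // Writes through a deprecated key land on the current key.
    props.put(DEPRECATIONS.getOrDefault(key, key), value);
  }

  String get(String key) {
    // Reads through a deprecated key resolve to the current key.
    return props.get(DEPRECATIONS.getOrDefault(key, key));
  }

  public static void main(String[] args) {
    // Example pair modeled on an HDFS client-side deprecation.
    addDeprecation("dfs.socket.timeout", "dfs.client.socket-timeout");
    DeprecatedKeys conf = new DeprecatedKeys();
    conf.set("dfs.socket.timeout", "60000");  // old name, still honored
    System.out.println(conf.get("dfs.client.socket-timeout"));
  }
}
```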

> Deprecation cases are not handled in hadoop-hdfs-client#DistributedFileSystem
> -
>
> Key: HDFS-9202
> URL: https://issues.apache.org/jira/browse/HDFS-9202
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Bibin A Chundatt
>Assignee: Vinayakumar B
>Priority: Critical
>
> Deprecated keys are not handled in 
> hadoop-hdfs-client#DistributedFileSystem.
> As a result, client-side deprecated keys are not usable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9159) [OIV] : return value of the command is not correct if invalid value specified in "-p (processor)" option

2015-10-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945378#comment-14945378
 ] 

Hadoop QA commented on HDFS-9159:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m 18s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 55s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 21s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 16s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 25s | The applied patch generated  4 
new checkstyle issues (total was 37, now 40). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 26s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 28s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 12s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 199m  1s | Tests failed in hadoop-hdfs. |
| | | 244m 58s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.web.TestWebHDFSOAuth2 |
|   | hadoop.hdfs.web.TestWebHdfsContentLength |
| Timed out tests | org.apache.hadoop.hdfs.server.balancer.TestBalancer |
|   | org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765173/HDFS-9159_03.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 874c8ed |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12811/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12811/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12811/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12811/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12811/console |


This message was automatically generated.

> [OIV] : return value of the command is not correct if invalid value specified 
> in "-p (processor)" option
> 
>
> Key: HDFS-9159
> URL: https://issues.apache.org/jira/browse/HDFS-9159
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9159_01.patch, HDFS-9159_02.patch, 
> HDFS-9159_03.patch
>
>
> The return value of the OIV command is not correct if an invalid value is 
> specified in the "-p (processor)" option.
> This needs to return an error to the user.
> The code change will be in the switch statement of
> {code}
>  try (PrintStream out = outputFile.equals("-") ?
> System.out : new PrintStream(outputFile, "UTF-8")) {
>   switch (processor) {
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-4015) Safemode should count and report orphaned blocks

2015-10-06 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-4015:
---
Attachment: (was: dfsHealth.html.message.png)

> Safemode should count and report orphaned blocks
> 
>
> Key: HDFS-4015
> URL: https://issues.apache.org/jira/browse/HDFS-4015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Anu Engineer
> Attachments: HDFS-4015.001.patch, HDFS-4015.002.patch
>
>
> The safemode status currently reports the number of unique reported blocks 
> compared to the total number of blocks referenced by the namespace. However, 
> it does not report the inverse: blocks which are reported by datanodes but 
> not referenced by the namespace.
> In the case that an admin accidentally starts up from an old image, this can 
> be confusing: safemode and fsck will show "corrupt files", which are the 
> files which actually have been deleted but got resurrected by restarting from 
> the old image. This will convince them that they can safely force leave 
> safemode and remove these files -- after all, they know that those files 
> should really have been deleted. However, they're not aware that leaving 
> safemode will also unrecoverably delete a bunch of other block files which 
> have been orphaned due to the namespace rollback.
> I'd like to consider reporting something like: "90 of expected 100 
> blocks have been reported. Additionally, 1 blocks have been reported 
> which do not correspond to any file in the namespace. Forcing exit of 
> safemode will unrecoverably remove those data blocks"
> Whether this statistic is also used for some kind of "inverse safe mode" is 
> the logical next step, but just reporting it as a warning seems easy enough 
> to accomplish and worth doing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-4015) Safemode should count and report orphaned blocks

2015-10-06 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-4015:
---
Attachment: (was: dfsAdmin-report_with_forceExit.png)

> Safemode should count and report orphaned blocks
> 
>
> Key: HDFS-4015
> URL: https://issues.apache.org/jira/browse/HDFS-4015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Anu Engineer
> Attachments: HDFS-4015.001.patch, HDFS-4015.002.patch
>
>
> The safemode status currently reports the number of unique reported blocks 
> compared to the total number of blocks referenced by the namespace. However, 
> it does not report the inverse: blocks which are reported by datanodes but 
> not referenced by the namespace.
> In the case that an admin accidentally starts up from an old image, this can 
> be confusing: safemode and fsck will show "corrupt files", which are the 
> files which actually have been deleted but got resurrected by restarting from 
> the old image. This will convince them that they can safely force leave 
> safemode and remove these files -- after all, they know that those files 
> should really have been deleted. However, they're not aware that leaving 
> safemode will also unrecoverably delete a bunch of other block files which 
> have been orphaned due to the namespace rollback.
> I'd like to consider reporting something like: "90 of expected 100 
> blocks have been reported. Additionally, 1 blocks have been reported 
> which do not correspond to any file in the namespace. Forcing exit of 
> safemode will unrecoverably remove those data blocks"
> Whether this statistic is also used for some kind of "inverse safe mode" is 
> the logical next step, but just reporting it as a warning seems easy enough 
> to accomplish and worth doing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-4015) Safemode should count and report orphaned blocks

2015-10-06 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-4015:
---
Attachment: (was: HDFS-4015.002.patch)

> Safemode should count and report orphaned blocks
> 
>
> Key: HDFS-4015
> URL: https://issues.apache.org/jira/browse/HDFS-4015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Anu Engineer
> Attachments: HDFS-4015.001.patch
>
>
> The safemode status currently reports the number of unique reported blocks 
> compared to the total number of blocks referenced by the namespace. However, 
> it does not report the inverse: blocks which are reported by datanodes but 
> not referenced by the namespace.
> In the case that an admin accidentally starts up from an old image, this can 
> be confusing: safemode and fsck will show "corrupt files", which are the 
> files which actually have been deleted but got resurrected by restarting from 
> the old image. This will convince them that they can safely force leave 
> safemode and remove these files -- after all, they know that those files 
> should really have been deleted. However, they're not aware that leaving 
> safemode will also unrecoverably delete a bunch of other block files which 
> have been orphaned due to the namespace rollback.
> I'd like to consider reporting something like: "90 of expected 100 
> blocks have been reported. Additionally, 1 blocks have been reported 
> which do not correspond to any file in the namespace. Forcing exit of 
> safemode will unrecoverably remove those data blocks"
> Whether this statistic is also used for some kind of "inverse safe mode" is 
> the logical next step, but just reporting it as a warning seems easy enough 
> to accomplish and worth doing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9176) TestDirectoryScanner#testThrottling often fails.

2015-10-06 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HDFS-9176:
---
Attachment: HDFS-9176.001.patch

Made patch a little more robust.

> TestDirectoryScanner#testThrottling often fails.
> 
>
> Key: HDFS-9176
> URL: https://issues.apache.org/jira/browse/HDFS-9176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Yi Liu
>Assignee: Daniel Templeton
>Priority: Minor
> Attachments: HDFS-9176.001.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/12736/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/7732/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9176) TestDirectoryScanner#testThrottling often fails.

2015-10-06 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HDFS-9176:
---
Attachment: (was: HDFS-9176.001.patch)

> TestDirectoryScanner#testThrottling often fails.
> 
>
> Key: HDFS-9176
> URL: https://issues.apache.org/jira/browse/HDFS-9176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Yi Liu
>Assignee: Daniel Templeton
>Priority: Minor
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/12736/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/7732/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9181) Better handling of exceptions thrown during upgrade shutdown

2015-10-06 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945274#comment-14945274
 ] 

Kihwal Lee commented on HDFS-9181:
--

bq. An uncaught Error will generally bring the whole thing down anyway
It will bring down the thread, but not the datanode. If that happens, the 
datanode will probably be in a half-deaf state.  In any case, catching 
{{Exception}} should be sufficient.

> Better handling of exceptions thrown during upgrade shutdown
> 
>
> Key: HDFS-9181
> URL: https://issues.apache.org/jira/browse/HDFS-9181
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>
> Previously in HDFS-7533, a bug was fixed by suppressing exceptions during 
> upgrade shutdown. It may be appropriate as a temporary fix, but it would be 
> better if the exception is handled in some other ways.
> One way to handle it is by emitting a warning message. There could exist 
> other ways to handle it. This jira is created to discuss how to handle this 
> case better.
> Thanks to [~templedf] for bringing this up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7899) Improve EOF error message

2015-10-06 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945253#comment-14945253
 ] 

Harsh J commented on HDFS-7899:
---

Thank you [~jagadesh.kiran] and [~vinayrpet]

> Improve EOF error message
> -
>
> Key: HDFS-7899
> URL: https://issues.apache.org/jira/browse/HDFS-7899
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: Harsh J
>Assignee: Jagadesh Kiran N
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-7899-00.patch, HDFS-7899-01.patch, 
> HDFS-7899-02.patch
>
>
> Currently, a DN disconnection for reasons other than connection timeout or 
> refused messages, such as an EOF message as a result of rejection or other 
> network fault, reports in this manner:
> {code}
> WARN org.apache.hadoop.hdfs.DFSClient: Failed to connect to /x.x.x.x: for 
> block, add to deadNodes and continue. java.io.EOFException: Premature EOF: no 
> length prefix available 
> java.io.EOFException: Premature EOF: no length prefix available 
> at 
> org.apache.hadoop.hdfs.protocol.HdfsProtoUtil.vintPrefixed(HdfsProtoUtil.java:171)
>  
> at 
> org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:392)
>  
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.newBlockReader(BlockReaderFactory.java:137)
>  
> at 
> org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:1103)
>  
> at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:538) 
> at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:750)
>  
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:794) 
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:602) 
> {code}
> This is not very clear to a user (it warns at the hdfs-client). It could 
> likely be improved with a more diagnosable message, or at least the direct 
> reason rather than just an EOF.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9181) Better handling of exceptions thrown during upgrade shutdown

2015-10-06 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945159#comment-14945159
 ] 

Yongjun Zhang commented on HDFS-9181:
-

Hi Guys,

Since HDFS-7533 tries to catch the NPE described there:
{quote}
When datanode is told to shutdown via the dfsadmin command during rolling 
upgrade, it may not shutdown. This is because not all writers have responder 
running, but sendOOB() tries anyway. This causes NPE and the shutdown thread 
dies, halting the shutdown after only shutting down DataXceiverServer.
{quote}

I'd suggest that we just change the Throwable to Exception as Daniel proposed, 
and issue a warning message with the exception's information. For anything else 
thrown from the sendOOB step, the behavior will be just as it was before 
HDFS-7533.
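In code, that suggestion amounts to something like the sketch below ({{Responder}} and the logging here are stand-ins for the real datanode members, not the actual patch): an {{Exception}} from the OOB step is logged and swallowed so shutdown proceeds, while an {{Error}} still propagates.

```java
public class OobShutdown {
  // Stand-in for the responder's OOB notification; implementations may
  // throw (the HDFS-7533 case was an NPE when no responder was running).
  interface Responder {
    void sendOOB() throws Exception;
  }

  /**
   * Returns true if the OOB was sent, false if it failed and was skipped
   * with a warning. Errors (e.g. OutOfMemoryError) are not caught and
   * still propagate.
   */
  static boolean sendOOBQuietly(Responder responder) {
    try {
      responder.sendOOB();
      return true;
    } catch (Exception e) {
      // Before HDFS-7533 this killed the shutdown thread; here we warn
      // and let the rest of the shutdown continue.
      System.err.println("WARN: failed to send OOB during shutdown: " + e);
      return false;
    }
  }

  public static void main(String[] args) {
    sendOOBQuietly(() -> {
      throw new NullPointerException("no responder running");
    });
  }
}
```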

Thanks.


> Better handling of exceptions thrown during upgrade shutdown
> 
>
> Key: HDFS-9181
> URL: https://issues.apache.org/jira/browse/HDFS-9181
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>
> Previously in HDFS-7533, a bug was fixed by suppressing exceptions during 
> upgrade shutdown. It may be appropriate as a temporary fix, but it would be 
> better if the exception is handled in some other ways.
> One way to handle it is by emitting a warning message. There could exist 
> other ways to handle it. This jira is created to discuss how to handle this 
> case better.
> Thanks to [~templedf] for bringing this up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7899) Improve EOF error message

2015-10-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945148#comment-14945148
 ] 

Hudson commented on HDFS-7899:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2399 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2399/])
HDFS-7899. Improve EOF error message (Contributed by Jagadesh Kiran N) 
(vinayakumarb: rev 874c8ed2399ff5f760d358abae3e98c013f48d22)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java


> Improve EOF error message
> -
>
> Key: HDFS-7899
> URL: https://issues.apache.org/jira/browse/HDFS-7899
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: Harsh J
>Assignee: Jagadesh Kiran N
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-7899-00.patch, HDFS-7899-01.patch, 
> HDFS-7899-02.patch
>
>
> Currently, a DN disconnection for reasons other than connection timeout or 
> refused messages, such as an EOF message as a result of rejection or other 
> network fault, reports in this manner:
> {code}
> WARN org.apache.hadoop.hdfs.DFSClient: Failed to connect to /x.x.x.x: for 
> block, add to deadNodes and continue. java.io.EOFException: Premature EOF: no 
> length prefix available 
> java.io.EOFException: Premature EOF: no length prefix available 
> at 
> org.apache.hadoop.hdfs.protocol.HdfsProtoUtil.vintPrefixed(HdfsProtoUtil.java:171)
>  
> at 
> org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:392)
>  
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.newBlockReader(BlockReaderFactory.java:137)
>  
> at 
> org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:1103)
>  
> at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:538) 
> at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:750)
>  
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:794) 
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:602) 
> {code}
> This is not very clear to a user (it warns at the hdfs-client). It could 
> likely be improved with a more diagnosable message, or at least the direct 
> reason rather than just an EOF.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9201) Namenode Performance Improvement : Using for loop without iterator

2015-10-06 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945141#comment-14945141
 ] 

Rushabh S Shah commented on HDFS-9201:
--

[~nijel]: I think this jira is about the extra garbage created by the iterator.
Can you please post statistics about the reduction in garbage?
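For context, the kind of change being discussed is roughly the following (an illustrative sketch, not a line from the draft patch): replacing for-each traversal, which allocates an {{Iterator}} per pass, with an indexed loop.

```java
import java.util.Arrays;
import java.util.List;

public class IndexedLoop {

  // For-each on a List allocates an Iterator for every traversal.
  static long sumForEach(List<Long> values) {
    long sum = 0;
    for (Long v : values) {  // implicit values.iterator()
      sum += v;
    }
    return sum;
  }

  // The indexed form avoids the Iterator allocation on hot paths.
  static long sumIndexed(List<Long> values) {
    long sum = 0;
    for (int i = 0; i < values.size(); i++) {
      sum += values.get(i);
    }
    return sum;
  }

  public static void main(String[] args) {
    List<Long> values = Arrays.asList(1L, 2L, 3L);
    System.out.println(sumForEach(values) + " " + sumIndexed(values));
  }
}
```

Note the indexed form only pays off for {{RandomAccess}} lists such as {{ArrayList}}; for a {{LinkedList}}, {{get(i)}} would make the loop quadratic.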

> Namenode Performance Improvement : Using for loop without iterator
> --
>
> Key: HDFS-9201
> URL: https://issues.apache.org/jira/browse/HDFS-9201
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: nijel
>Assignee: nijel
>  Labels: namenode, performance
> Attachments: HDFS-9201_draft.patch
>
>
> As discussed in HBASE-12023, the for-each loop syntax will create a few 
> extra objects and garbage.
> For arrays and Lists we can change to the traditional syntax.
> This can improve the memory footprint and can result in a performance gain.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7899) Improve EOF error message

2015-10-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945132#comment-14945132
 ] 

Hudson commented on HDFS-7899:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #460 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/460/])
HDFS-7899. Improve EOF error message (Contributed by Jagadesh Kiran N) 
(vinayakumarb: rev 874c8ed2399ff5f760d358abae3e98c013f48d22)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Improve EOF error message
> -
>
> Key: HDFS-7899
> URL: https://issues.apache.org/jira/browse/HDFS-7899
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: Harsh J
>Assignee: Jagadesh Kiran N
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-7899-00.patch, HDFS-7899-01.patch, 
> HDFS-7899-02.patch
>
>
> Currently, a DN disconnection for reasons other than connection timeout or 
> refused messages, such as an EOF message as a result of rejection or other 
> network fault, reports in this manner:
> {code}
> WARN org.apache.hadoop.hdfs.DFSClient: Failed to connect to /x.x.x.x: for 
> block, add to deadNodes and continue. java.io.EOFException: Premature EOF: no 
> length prefix available 
> java.io.EOFException: Premature EOF: no length prefix available 
> at 
> org.apache.hadoop.hdfs.protocol.HdfsProtoUtil.vintPrefixed(HdfsProtoUtil.java:171)
>  
> at 
> org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:392)
>  
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.newBlockReader(BlockReaderFactory.java:137)
>  
> at 
> org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:1103)
>  
> at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:538) 
> at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:750)
>  
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:794) 
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:602) 
> {code}
> This is not very clear to a user (it warns at the hdfs-client). It could 
> likely be improved with a more diagnosable message, or at least the direct 
> reason rather than just an EOF.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9195) TestDelegationTokenForProxyUser.testWebHdfsDoAs fails on trunk

2015-10-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945130#comment-14945130
 ] 

Hadoop QA commented on HDFS-9195:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   7m 53s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   8m  4s | There were no new javac warning 
messages. |
| {color:red}-1{color} | release audit |   0m 14s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 24s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 28s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 27s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   1m 15s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 195m 40s | Tests failed in hadoop-hdfs. |
| | | 219m  1s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.web.TestWebHDFSOAuth2 |
|   | hadoop.hdfs.web.TestWebHdfsContentLength |
| Timed out tests | 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory
 |
|   | 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation |
|   | org.apache.hadoop.cli.TestHDFSCLI |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765161/HDFS-9195.001.patch |
| Optional Tests | javac unit findbugs checkstyle |
| git revision | trunk / 874c8ed |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12810/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12810/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12810/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12810/console |


This message was automatically generated.

> TestDelegationTokenForProxyUser.testWebHdfsDoAs fails on trunk
> --
>
> Key: HDFS-9195
> URL: https://issues.apache.org/jira/browse/HDFS-9195
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: ramtin
> Attachments: HDFS-9195.001.patch
>
>
> {quote}
> testWebHdfsDoAs(org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser)
>   Time elapsed: 1.299 sec  <<< FAILURE!
> org.junit.ComparisonFailure: expected:<...ocalhost:44528/user/[Proxy]User> 
> but was:<...ocalhost:44528/user/[Real]User>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser.testWebHdfsDoAs(TestDelegationTokenForProxyUser.java:163)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9181) Better handling of exceptions thrown during upgrade shutdown

2015-10-06 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945100#comment-14945100
 ] 

Wei-Chiu Chuang commented on HDFS-9181:
---

Thanks to [~templedf] and [~eepayne] for the discussion.
Catching Throwable is generally regarded as an undesirable practice (see 
http://stackoverflow.com/questions/17333925/is-it-ok-to-catch-throwable-for-performing-cleanup
 and 
http://stackoverflow.com/questions/6083248/is-it-a-bad-practice-to-catch-throwable).

My only question is: if we go with catching Exception, is it OK to silently 
ignore it?
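One possible shape for the non-silent handling being discussed — a minimal sketch, not the actual DataNode shutdown code; the method names are hypothetical:

```java
import java.util.logging.Logger;

// Sketch: catch Exception (not Throwable) during shutdown cleanup and emit a
// warning instead of swallowing it silently, so the failure is visible in logs.
public class ShutdownSketch {
    private static final Logger LOG = Logger.getLogger(ShutdownSketch.class.getName());

    // Hypothetical cleanup step that may fail during upgrade shutdown.
    static void closeResource() throws Exception {
        throw new Exception("simulated failure while closing resource");
    }

    static boolean shutdown() {
        try {
            closeResource();
            return true;
        } catch (Exception e) {
            // Not ignored silently: log a warning, then keep shutting down.
            // Errors (e.g. OutOfMemoryError) are deliberately NOT caught.
            LOG.warning("Exception during upgrade shutdown, continuing: " + e.getMessage());
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(shutdown() ? "clean shutdown" : "shutdown completed with warnings");
    }
}
```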

> Better handling of exceptions thrown during upgrade shutdown
> 
>
> Key: HDFS-9181
> URL: https://issues.apache.org/jira/browse/HDFS-9181
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>
> Previously in HDFS-7533, a bug was fixed by suppressing exceptions during 
> upgrade shutdown. That may be appropriate as a temporary fix, but it would be 
> better if the exception were handled in some other way.
> One way to handle it is by emitting a warning message. There may be other 
> ways to handle it. This jira is created to discuss how to handle this case 
> better.
> Thanks to [~templedf] for bringing this up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9181) Better handling of exceptions thrown during upgrade shutdown

2015-10-06 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9181:
--
Description: 
Previously in HDFS-7533, a bug was fixed by suppressing exceptions during 
upgrade shutdown. It may be appropriate as a temporary fix, but it would be 
better if the exception is handled in some other ways.

One way to handle it is by emitting a warning message. There could exist other 
ways to handle it. This jira is created to discuss how to handle this case 
better.

Thanks to [~templedf] for bringing this up.

  was:
Previously in HDFS-7533, a bug was fixed by suppressing exceptions during 
upgrade shutdown. It may be appropriate as a temporary fix, but it would be 
better if the exception is handled in some other ways.

One way to handle it is by emitting a warning message. There could exist other 
ways to handle it. This lira is created to discuss how to handle this case 
better.

Thanks to [~templedf] for bringing this up.


> Better handling of exceptions thrown during upgrade shutdown
> 
>
> Key: HDFS-9181
> URL: https://issues.apache.org/jira/browse/HDFS-9181
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>
> Previously in HDFS-7533, a bug was fixed by suppressing exceptions during 
> upgrade shutdown. It may be appropriate as a temporary fix, but it would be 
> better if the exception is handled in some other ways.
> One way to handle it is by emitting a warning message. There could exist 
> other ways to handle it. This jira is created to discuss how to handle this 
> case better.
> Thanks to [~templedf] for bringing this up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9203) Add overwrite arg for DFS Get

2015-10-06 Thread Keegan Witt (JIRA)
Keegan Witt created HDFS-9203:
-

 Summary: Add overwrite arg for DFS Get
 Key: HDFS-9203
 URL: https://issues.apache.org/jira/browse/HDFS-9203
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Keegan Witt


I think it'd be good to add an argument to specify that the local file be 
overwritten if it exists when doing a DFS Get operation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7899) Improve EOF error message

2015-10-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945009#comment-14945009
 ] 

Hudson commented on HDFS-7899:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #485 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/485/])
HDFS-7899. Improve EOF error message (Contributed by Jagadesh Kiran N) 
(vinayakumarb: rev 874c8ed2399ff5f760d358abae3e98c013f48d22)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java


> Improve EOF error message
> -
>
> Key: HDFS-7899
> URL: https://issues.apache.org/jira/browse/HDFS-7899
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: Harsh J
>Assignee: Jagadesh Kiran N
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-7899-00.patch, HDFS-7899-01.patch, 
> HDFS-7899-02.patch
>
>
> Currently, a DN disconnection for reasons other than connection timeout or 
> refused messages, such as an EOF message as a result of rejection or other 
> network fault, reports in this manner:
> {code}
> WARN org.apache.hadoop.hdfs.DFSClient: Failed to connect to /x.x.x.x: for 
> block, add to deadNodes and continue. java.io.EOFException: Premature EOF: no 
> length prefix available 
> java.io.EOFException: Premature EOF: no length prefix available 
> at 
> org.apache.hadoop.hdfs.protocol.HdfsProtoUtil.vintPrefixed(HdfsProtoUtil.java:171)
>  
> at 
> org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:392)
>  
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.newBlockReader(BlockReaderFactory.java:137)
>  
> at 
> org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:1103)
>  
> at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:538) 
> at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:750)
>  
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:794) 
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:602) 
> {code}
> This is not very clear to a user (warns at the hdfs-client). It could likely 
> be improved with a more diagnosable message, or at least the direct reason 
> rather than a bare EOF.
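The kind of change being asked for could look like this — a minimal sketch, not the actual DFSClient code; the helper name and parameters (peer address, block id) are illustrative assumptions:

```java
import java.io.EOFException;
import java.io.IOException;

// Sketch: when the datanode closes the connection prematurely, wrap the bare
// EOFException with enough context to diagnose the cause, keeping the original
// exception as the chained cause.
public class EofMessageSketch {
    static IOException addContext(EOFException e, String peer, long blockId) {
        return new IOException("Premature EOF reading block " + blockId
                + " from datanode " + peer
                + "; the datanode likely closed the connection"
                + " (rejected request or network fault): " + e.getMessage(), e);
    }

    public static void main(String[] args) {
        EOFException raw = new EOFException("Premature EOF: no length prefix available");
        System.out.println(addContext(raw, "10.0.0.1:50010", 1073741825L).getMessage());
    }
}
```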



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9159) [OIV] : return value of the command is not correct if invalid value specified in "-p (processor)" option

2015-10-06 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-9159:

Attachment: HDFS-9159_03.patch

Thanks [~vshreyas] for your time.
Updated the patch to address the comments.
Please have a look.


> [OIV] : return value of the command is not correct if invalid value specified 
> in "-p (processor)" option
> 
>
> Key: HDFS-9159
> URL: https://issues.apache.org/jira/browse/HDFS-9159
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9159_01.patch, HDFS-9159_02.patch, 
> HDFS-9159_03.patch
>
>
> The return value of the OIV command is not correct if an invalid value is 
> specified in the "-p (processor)" option;
> this needs to return an error to the user.
> The code change will be in the switch statement of
> {code}
>  try (PrintStream out = outputFile.equals("-") ?
> System.out : new PrintStream(outputFile, "UTF-8")) {
>   switch (processor) {
> {code}
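The intended fix can be sketched as follows — a simplified stand-in for the OfflineImageViewer dispatch, not the actual patch; the processor names shown are examples and the set may differ by version:

```java
// Sketch: the switch over the "-p" processor argument returns a non-zero
// status for an unrecognized value instead of silently doing nothing.
public class ProcessorSwitchSketch {
    static int run(String processor) {
        switch (processor) {
            case "XML":
            case "FileDistribution":
            case "Web":
                // ... dispatch to the chosen processor (elided) ...
                return 0;
            default:
                System.err.println("Invalid processor specified: " + processor);
                return -1;   // propagate an error status to the caller
        }
    }

    public static void main(String[] args) {
        System.out.println(run("XML"));      // 0
        System.out.println(run("bogus"));    // -1
    }
}
```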



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9159) [OIV] : return value of the command is not correct if invalid value specified in "-p (processor)" option

2015-10-06 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945000#comment-14945000
 ] 

nijel commented on HDFS-9159:
-

sorry the name changed !!
Thanks [~vinayrpet] for the review

> [OIV] : return value of the command is not correct if invalid value specified 
> in "-p (processor)" option
> 
>
> Key: HDFS-9159
> URL: https://issues.apache.org/jira/browse/HDFS-9159
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9159_01.patch, HDFS-9159_02.patch, 
> HDFS-9159_03.patch
>
>
> The return value of the OIV command is not correct if an invalid value is 
> specified in the "-p (processor)" option;
> this needs to return an error to the user.
> The code change will be in the switch statement of
> {code}
>  try (PrintStream out = outputFile.equals("-") ?
> System.out : new PrintStream(outputFile, "UTF-8")) {
>   switch (processor) {
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7899) Improve EOF error message

2015-10-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14944981#comment-14944981
 ] 

Hudson commented on HDFS-7899:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2429 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2429/])
HDFS-7899. Improve EOF error message (Contributed by Jagadesh Kiran N) 
(vinayakumarb: rev 874c8ed2399ff5f760d358abae3e98c013f48d22)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java


> Improve EOF error message
> -
>
> Key: HDFS-7899
> URL: https://issues.apache.org/jira/browse/HDFS-7899
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: Harsh J
>Assignee: Jagadesh Kiran N
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-7899-00.patch, HDFS-7899-01.patch, 
> HDFS-7899-02.patch
>
>
> Currently, a DN disconnection for reasons other than connection timeout or 
> refused messages, such as an EOF message as a result of rejection or other 
> network fault, reports in this manner:
> {code}
> WARN org.apache.hadoop.hdfs.DFSClient: Failed to connect to /x.x.x.x: for 
> block, add to deadNodes and continue. java.io.EOFException: Premature EOF: no 
> length prefix available 
> java.io.EOFException: Premature EOF: no length prefix available 
> at 
> org.apache.hadoop.hdfs.protocol.HdfsProtoUtil.vintPrefixed(HdfsProtoUtil.java:171)
>  
> at 
> org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:392)
>  
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.newBlockReader(BlockReaderFactory.java:137)
>  
> at 
> org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:1103)
>  
> at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:538) 
> at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:750)
>  
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:794) 
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:602) 
> {code}
> This is not very clear to a user (warns at the hdfs-client). It could likely 
> be improved with a more diagnosable message, or at least the direct reason 
> rather than a bare EOF.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9201) Namenode Performance Improvement : Using for loop without iterator

2015-10-06 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-9201:

Attachment: HDFS-9201_draft.patch

Tried NNThroughputBenchmark for the read and write flows on a 40-core machine, 
with a few changes in the core flow.
config: -threads 200 -files 50 -filesPerDir 100

Results (in ops per second):
||Read: ~5% improvement observed||
|| trial || without change || after the change ||
| trial 1 | 187336 | 198886 |
| trial 2 | 181752 | 200642 |
| trial 3 | 195388 | 200964 |

||Write: no change in the write flow||
| trial 1 | 29585 | 29330 |
| trial 2 | 29670 | 29577 |
| trial 3 | 29584 | 29670 |

Attached the draft patch with the changes used for the test.
Please give your opinion.
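The change being benchmarked can be sketched like this — an illustrative example, not the draft patch itself:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: iterating an ArrayList with a plain indexed for loop avoids
// allocating an Iterator per traversal, which is what HDFS-9201 targets on
// namenode-hot paths. This only helps for RandomAccess lists (like ArrayList);
// for LinkedList, indexed access would be O(n) per get().
public class LoopSketch {
    static long sumForEach(List<Long> xs) {
        long sum = 0;
        for (long x : xs) {        // allocates an Iterator behind the scenes
            sum += x;
        }
        return sum;
    }

    static long sumIndexed(List<Long> xs) {
        long sum = 0;
        for (int i = 0, n = xs.size(); i < n; i++) {   // no Iterator allocation
            sum += xs.get(i);
        }
        return sum;
    }

    public static void main(String[] args) {
        List<Long> xs = new ArrayList<>();
        for (long i = 1; i <= 100; i++) xs.add(i);
        System.out.println(sumForEach(xs));   // 5050
        System.out.println(sumIndexed(xs));   // 5050
    }
}
```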

> Namenode Performance Improvement : Using for loop without iterator
> --
>
> Key: HDFS-9201
> URL: https://issues.apache.org/jira/browse/HDFS-9201
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: nijel
>Assignee: nijel
>  Labels: namenode, performance
> Attachments: HDFS-9201_draft.patch
>
>
> As discussed in HBASE-12023, the for-each loop syntax creates a few extra 
> objects and garbage.
> For arrays and Lists, this can be changed to the traditional indexed syntax.
> This can improve the memory footprint and can result in a performance gain.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9202) Deprecation cases are not handled in hadoop-hdfs-client#DistributedFileSystem

2015-10-06 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14944928#comment-14944928
 ] 

Vinayakumar B commented on HDFS-9202:
-

This issue is after HDFS-8740.

{code}
   static{
-HdfsConfiguration.init();
+HdfsConfigurationLoader.init();
   }
{code}
The above change in DistributedFileSystem.java resulted in the deprecated keys no longer being added.

HdfsConfiguration.java needs to be moved to the hadoop-hdfs-client module, 
or a separate similar class is needed in the client module that adds only the 
client-related deprecated keys.
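What HdfsConfiguration's static initializer does for the client amounts to registering a deprecation map — a minimal plain-Java sketch of the idea, not the Hadoop Configuration API; the two key mappings are illustrative examples of real hdfs deprecations, not an exhaustive list:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: map deprecated key names to their replacements so that configs
// written with old keys still resolve on the client side.
public class DeprecationSketch {
    private static final Map<String, String> DEPRECATIONS = new HashMap<>();
    static {
        // old key -> current key (illustrative examples)
        DEPRECATIONS.put("dfs.socket.timeout", "dfs.client.socket-timeout");
        DEPRECATIONS.put("dfs.umaskmode", "fs.permissions.umask-mode");
    }

    // Resolve a possibly-deprecated key to the name that should be looked up.
    static String resolve(String key) {
        return DEPRECATIONS.getOrDefault(key, key);
    }

    public static void main(String[] args) {
        System.out.println(resolve("dfs.socket.timeout"));  // dfs.client.socket-timeout
        System.out.println(resolve("dfs.blocksize"));       // dfs.blocksize
    }
}
```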

> Deprecation cases are not handled in hadoop-hdfs-client#DistributedFileSystem
> -
>
> Key: HDFS-9202
> URL: https://issues.apache.org/jira/browse/HDFS-9202
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Bibin A Chundatt
>Assignee: Vinayakumar B
>Priority: Critical
>
> Deprecated keys are not taken care of in 
> hadoop-hdfs-client#DistributedFileSystem.
> Client-side deprecated keys are not usable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7899) Improve EOF error message

2015-10-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14944926#comment-14944926
 ] 

Hudson commented on HDFS-7899:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1224 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1224/])
HDFS-7899. Improve EOF error message (Contributed by Jagadesh Kiran N) 
(vinayakumarb: rev 874c8ed2399ff5f760d358abae3e98c013f48d22)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java


> Improve EOF error message
> -
>
> Key: HDFS-7899
> URL: https://issues.apache.org/jira/browse/HDFS-7899
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: Harsh J
>Assignee: Jagadesh Kiran N
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-7899-00.patch, HDFS-7899-01.patch, 
> HDFS-7899-02.patch
>
>
> Currently, a DN disconnection for reasons other than connection timeout or 
> refused messages, such as an EOF message as a result of rejection or other 
> network fault, reports in this manner:
> {code}
> WARN org.apache.hadoop.hdfs.DFSClient: Failed to connect to /x.x.x.x: for 
> block, add to deadNodes and continue. java.io.EOFException: Premature EOF: no 
> length prefix available 
> java.io.EOFException: Premature EOF: no length prefix available 
> at 
> org.apache.hadoop.hdfs.protocol.HdfsProtoUtil.vintPrefixed(HdfsProtoUtil.java:171)
>  
> at 
> org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:392)
>  
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.newBlockReader(BlockReaderFactory.java:137)
>  
> at 
> org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:1103)
>  
> at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:538) 
> at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:750)
>  
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:794) 
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:602) 
> {code}
> This is not very clear to a user (warns at the hdfs-client). It could likely 
> be improved with a more diagnosable message, or at least the direct reason 
> rather than a bare EOF.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9202) Deprecation cases are not handled in hadoop-hdfs-client#DistributedFileSystem

2015-10-06 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated HDFS-9202:
---
Component/s: hdfs-client

> Deprecation cases are not handled in hadoop-hdfs-client#DistributedFileSystem
> -
>
> Key: HDFS-9202
> URL: https://issues.apache.org/jira/browse/HDFS-9202
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Bibin A Chundatt
>Assignee: Vinayakumar B
>Priority: Critical
>
> Deprecated keys are not taken care of in 
> hadoop-hdfs-client#DistributedFileSystem.
> Client-side deprecated keys are not usable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-9202) Deprecation cases are not handled in hadoop-hdfs-client#DistributedFileSystem

2015-10-06 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B reassigned HDFS-9202:
---

Assignee: Vinayakumar B

> Deprecation cases are not handled in hadoop-hdfs-client#DistributedFileSystem
> -
>
> Key: HDFS-9202
> URL: https://issues.apache.org/jira/browse/HDFS-9202
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Vinayakumar B
>Priority: Critical
>
> Deprecated keys are not taken care of in 
> hadoop-hdfs-client#DistributedFileSystem.
> Client-side deprecated keys are not usable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9202) Deprecation cases are not handled in hadoop-hdfs-client#DistributedFileSystem

2015-10-06 Thread Bibin A Chundatt (JIRA)
Bibin A Chundatt created HDFS-9202:
--

 Summary: Deprecation cases are not handled in 
hadoop-hdfs-client#DistributedFileSystem
 Key: HDFS-9202
 URL: https://issues.apache.org/jira/browse/HDFS-9202
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Bibin A Chundatt
Priority: Critical


Deprecated keys are not taken care of in 
hadoop-hdfs-client#DistributedFileSystem.
Client-side deprecated keys are not usable.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9201) Namenode Performance Improvement : Using for loop without iterator

2015-10-06 Thread nijel (JIRA)
nijel created HDFS-9201:
---

 Summary: Namenode Performance Improvement : Using for loop without 
iterator
 Key: HDFS-9201
 URL: https://issues.apache.org/jira/browse/HDFS-9201
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: nijel
Assignee: nijel


As discussed in HBASE-12023, the for-each loop syntax creates a few extra 
objects and garbage.

For arrays and Lists, this can be changed to the traditional indexed syntax.
This can improve the memory footprint and can result in a performance gain.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

