[jira] [Updated] (HDFS-10860) Switch HttpFS from Tomcat to Jetty

2016-12-12 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-10860:
--
Status: In Progress  (was: Patch Available)

Looking into the test failures caused by my {{AuthenticationFilter}} change.

> Switch HttpFS from Tomcat to Jetty
> --
>
> Key: HDFS-10860
> URL: https://issues.apache.org/jira/browse/HDFS-10860
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: httpfs
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HDFS-10860.001.patch
>
>
> The Tomcat 6 we are using will reach EOL at the end of 2017. While there are 
> other good options, I would propose switching to {{Jetty 9}} for the 
> following reasons:
> * Easier migration. Both Tomcat and Jetty are {{Servlet}} containers, so we 
> would not have to change client code much. It would require more work to 
> switch to {{JAX-RS}}.
> * Well established.
> * Good performance and scalability.
> Other alternatives:
> * Jersey + Grizzly
> * Tomcat 8
> Your opinions will be greatly appreciated.
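For reference, a minimal sketch of embedding Jetty 9, assuming the 
{{jetty-server}} and {{jetty-servlet}} artifacts are on the classpath; the port 
and the servlet mapping are placeholders, not the actual HttpFS code:

{code}
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.DefaultServlet;
import org.eclipse.jetty.servlet.ServletContextHandler;

public class EmbeddedJettySketch {
  public static void main(String[] args) throws Exception {
    // Jetty 9 is typically embedded programmatically; no server.xml as in Tomcat.
    Server server = new Server(14000);
    ServletContextHandler context =
        new ServletContextHandler(ServletContextHandler.SESSIONS);
    context.setContextPath("/webhdfs");
    // Existing servlets carry over largely unchanged, since both Tomcat and
    // Jetty implement the Servlet specification. DefaultServlet is a
    // stand-in here for the real HttpFS servlet.
    context.addServlet(DefaultServlet.class, "/v1/*");
    server.setHandler(context);
    server.start();
    server.join();
  }
}
{code}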






[jira] [Commented] (HDFS-7859) Erasure Coding: Persist erasure coding policies in NameNode

2016-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15744156#comment-15744156
 ] 

Hadoop QA commented on HDFS-7859:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 42s{color} | {color:orange} hadoop-hdfs-project: The patch generated 2 new + 
1197 unchanged - 2 fixed = 1199 total (was 1199) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
31s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
52s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m  0s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 91m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  Class 
org.apache.hadoop.hdfs.protocol.datatransfer.ReplaceDatanodeOnFailure$Policy 
defines non-transient non-serializable instance field condition  In 
ReplaceDatanodeOnFailure.java:instance field condition  In 
ReplaceDatanodeOnFailure.java |
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.TestSecureEncryptionZoneWithKMS |
|   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
|   | hadoop.hdfs.TestCrcCorruption |
|   | hadoop.hdfs.TestTrashWithSecureEncryptionZones |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-7859 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842725/HDFS-7859.008.patch |
| Optional T

[jira] [Commented] (HDFS-10958) Add instrumentation hooks around Datanode disk IO

2016-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15744149#comment-15744149
 ] 

Hadoop QA commented on HDFS-10958:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 47s{color} | {color:orange} root: The patch generated 6 new + 1160 unchanged 
- 10 fixed = 1166 total (was 1170) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m  7s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
0s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 23s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}148m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.security.token.delegation.web.TestWebDelegationToken |
|   | hadoop.hdfs.TestTrashWithSecureEncryptionZones |
|   | hadoop.hdfs.server.datanode.checker.TestThrottledAsyncChecker |
|   | hadoop.hdfs.TestSecureEncryptionZoneWithKMS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-10958 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842913/HDFS-10958.05.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux f9c3246c405c 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personali

[jira] [Updated] (HDFS-11233) Fix javac warnings related to the deprecated APIs after upgrading Jackson

2016-12-12 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-11233:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Fix javac warnings related to the deprecated APIs after upgrading Jackson
> -
>
> Key: HDFS-11233
> URL: https://issues.apache.org/jira/browse/HDFS-11233
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11233-branch-2.001.patch, HDFS-11233.001.patch
>
>
> After HADOOP-12705, the Jackson version was upgraded from 2.2.3 to 2.7.8. This 
> left many of the APIs used in Hadoop deprecated. Since 2.5, these two APIs are 
> deprecated: {{ObjectMapper#reader(Class type)}} and 
> {{ObjectMapper#writerWithType(JavaType rootType)}} 
> (http://fasterxml.github.io/jackson-databind/javadoc/2.6/com/fasterxml/jackson/databind/ObjectMapper.html).
> According to the ObjectMapper documentation, we can use 
> {{ObjectMapper#readerFor(Class type)}} and 
> {{ObjectMapper#writerFor(JavaType)}} instead.
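As a concrete illustration of the migration above, a minimal sketch against 
Jackson 2.7.x (the {{Map}} payload type here is just an example, not Hadoop code):

{code}
import java.util.Map;
import com.fasterxml.jackson.databind.JavaType;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.ObjectReader;
import com.fasterxml.jackson.databind.ObjectWriter;

public class JacksonMigrationSketch {
  private static final ObjectMapper MAPPER = new ObjectMapper();

  // Before (deprecated since Jackson 2.5):
  //   ObjectReader reader = MAPPER.reader(Map.class);
  //   ObjectWriter writer = MAPPER.writerWithType(MAP_TYPE);

  // After:
  static final JavaType MAP_TYPE = MAPPER.getTypeFactory()
      .constructMapType(Map.class, String.class, Object.class);
  static final ObjectReader READER = MAPPER.readerFor(Map.class);
  static final ObjectWriter WRITER = MAPPER.writerFor(MAP_TYPE);
}
{code}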






[jira] [Commented] (HDFS-11233) Fix javac warnings related to the deprecated APIs after upgrading Jackson

2016-12-12 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15744111#comment-15744111
 ] 

Akira Ajisaka commented on HDFS-11233:
--

Thanks [~linyiqun] for the comment and the patch.
branch-2 is using Jackson 1 and 2 at the same time, so I thought branch-2 was 
probably using the deprecated APIs of Jackson 2, and therefore asked you to 
provide a patch. However, branch-2 actually uses Jackson 1 APIs and does not 
use the deprecated APIs of Jackson 2. I don't want to upgrade from Jackson 1 to 
Jackson 2 in this jira, so I'd like to close this issue. Thanks a lot!

> Fix javac warnings related to the deprecated APIs after upgrading Jackson
> -
>
> Key: HDFS-11233
> URL: https://issues.apache.org/jira/browse/HDFS-11233
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11233-branch-2.001.patch, HDFS-11233.001.patch
>
>
> After HADOOP-12705, the Jackson version was upgraded from 2.2.3 to 2.7.8. This 
> left many of the APIs used in Hadoop deprecated. Since 2.5, these two APIs are 
> deprecated: {{ObjectMapper#reader(Class type)}} and 
> {{ObjectMapper#writerWithType(JavaType rootType)}} 
> (http://fasterxml.github.io/jackson-databind/javadoc/2.6/com/fasterxml/jackson/databind/ObjectMapper.html).
> According to the ObjectMapper documentation, we can use 
> {{ObjectMapper#readerFor(Class type)}} and 
> {{ObjectMapper#writerFor(JavaType)}} instead.






[jira] [Commented] (HDFS-11236) Erasure Coding can't support appendToFile

2016-12-12 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15744097#comment-15744097
 ] 

Takanobu Asanuma commented on HDFS-11236:
-

Thanks for creating this jira, [~gehaijiang], and thanks for the comment, 
[~yuanbo]. This may be a duplicate of HDFS-7663.

The append operation depends on hflush/hsync, which has been implemented and 
discussed in HDFS-7661, but it was determined to be an unsupported feature for 3.0.

> Erasure Coding can't support appendToFile
> --
>
> Key: HDFS-11236
> URL: https://issues.apache.org/jira/browse/HDFS-11236
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: gehaijiang
>
> hadoop 3.0.0-alpha1
> $  hdfs erasurecode -getPolicy /ectest/workers
> ErasureCodingPolicy=[Name=RS-DEFAULT-6-3-64k, 
> Schema=[ECSchema=[Codec=rs-default, numDataUnits=6, numParityUnits=3]], 
> CellSize=65536 ]
> $  hadoop fs  -appendToFile  hadoop/etc/hadoop/httpfs-env.sh  /ectest/workers
> appendToFile: Cannot append to files with striped block /ectest/workers
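For context, the same failure reproduced from the Java API (a minimal sketch; 
the path and cluster configuration are assumed, and the exact exception 
message may vary by release):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class EcAppendSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path ecFile = new Path("/ectest/workers");  // file in an EC directory
    try {
      // Append requires hflush/hsync support, which the striped (EC)
      // block layout does not provide in 3.0.0-alpha1, so this is rejected.
      fs.append(ecFile).close();
    } catch (java.io.IOException e) {
      System.err.println("append failed: " + e.getMessage());
    }
  }
}
{code}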






[jira] [Created] (HDFS-11239) [SPS]: Check Mover file ID lease also to determine whether Mover is running

2016-12-12 Thread Wei Zhou (JIRA)
Wei Zhou created HDFS-11239:
---

 Summary: [SPS]: Check Mover file ID lease also to determine 
whether Mover is running
 Key: HDFS-11239
 URL: https://issues.apache.org/jira/browse/HDFS-11239
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Wei Zhou
Assignee: Wei Zhou


Currently SPS only checks the existence of the Mover ID file to determine whether 
a Mover is running. This can be an issue when the Mover exits unexpectedly 
without deleting the ID file, which then prevents SPS from functioning. This is 
a follow-on to HDFS-10885, where we bypassed this due to some implementation 
problems. This issue can be fixed after HDFS-11123.
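A rough sketch of the intended check (illustrative only, not the HDFS-11239 
patch), assuming the Mover ID file lives at {{/system/mover.id}} and that a 
live Mover keeps it open so its file lease is held:

{code}
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class MoverRunningCheckSketch {
  static boolean isMoverRunning(DistributedFileSystem dfs) throws Exception {
    Path moverId = new Path("/system/mover.id");
    if (!dfs.exists(moverId)) {
      return false;                  // no ID file: no Mover was started
    }
    // If the Mover exited without cleanup, the ID file exists but is
    // closed (no lease); a live Mover holds it open under construction.
    return !dfs.isFileClosed(moverId);
  }
}
{code}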






[jira] [Commented] (HDFS-11233) Fix javac warnings related to the deprecated APIs after upgrading Jackson

2016-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15744021#comment-15744021
 ] 

Hadoop QA commented on HDFS-11233:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
53s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
17s{color} | {color:green} branch-2 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
22s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
28s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
35s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} branch-2 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
1s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-hdfs-project in the patch failed with JDK 
v1.8.0_111. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 21s{color} 
| {color:red} hadoop-hdfs-project in the patch failed with JDK v1.8.0_111. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-hdfs-project in the patch failed with JDK 
v1.7.0_121. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 22s{color} 
| {color:red} hadoop-hdfs-project in the patch failed with JDK v1.7.0_121. 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
18s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs-client-jdk1.8.0_111 
with JDK v1.8.0_111 generated 16 new + 0 unchanged - 0 fixed = 16 total (was 0) 
{color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
59s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_111 with JDK 
v1.8.0_111 generated 12 new + 7 unchanged - 0 fixed = 19 total (was 7) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs-client-jdk1.7.0_121 
with JDK v1.7.0_121 generated 16 new + 0 un

[jira] [Commented] (HDFS-11193) [SPS]: Erasure coded files should be considered for satisfying storage policy

2016-12-12 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15744015#comment-15744015
 ] 

Rakesh R commented on HDFS-11193:
-

Thanks [~umamaheswararao] for the useful review comments. Following are the 
changes in the new patch; kindly take another look.

* I've fixed comments 1 and 2.
* While testing I found an issue in the {{when there is no target node with the 
required storage type}} logic. For example, say I have a block with locations 
A(disk), B(disk), C(disk), and assume A, B and C are the only live nodes, with 
A & C having archive storage. Now assume the user changes the storage policy to 
{{COLD}}. SPS internally prepares the src-target pairing {{src=> (A, B, C) and 
target=> (A, C)}}. It skips B because B has no archive media, which is an 
indication that SPS should retry to satisfy all of the block's locations. On 
the other side, the coordinator pairs the src-target nodes for the actual 
physical movement as {{movetask=> (A, A), (B, C)}}. Ideally it should do (C, C) 
instead of (B, C), but it mistakenly picks the wrong source. I think the 
implicit assumption that a retry is needed will create confusion and coding 
mistakes like this. In this patch, I've introduced a new {{retryNeeded}} flag 
to make it more readable. Now SPS prepares only the matching pairs, avoiding 
dummy source slots, e.g. {{src=> (A, C) and target=> (A, C)}}, and sets 
retryNeeded=true to convey that this trackId covers only partial block 
movements (a simplified sketch follows this list).
* Added one more test for an EC striped block.
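A simplified sketch of the pairing change described above (illustrative types 
and helper structures only, not the actual patch code):

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

class PairingSketch {
  // Returns true if a retry is needed, i.e. some source had no matching
  // target and was skipped instead of being given a dummy slot.
  static boolean pairSrcTarget(List<String> blockLocations,
      Map<String, String> archiveTargets,   // source -> chosen archive node
      List<String> src, List<String> target) {
    boolean retryNeeded = false;
    for (String source : blockLocations) {  // e.g. A, B, C on disk
      String t = archiveTargets.get(source);
      if (t == null) {
        retryNeeded = true;                 // e.g. B has no archive target
        continue;                           // no dummy source slot emitted
      }
      src.add(source);
      target.add(t);
    }
    return retryNeeded;                     // partial movement for this trackId
  }
}
{code}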

bq. One another idea in my mind is that, how about just including blockIndexes 
in the case of Striped?
Thanks for this idea. Following is my analysis of this approach. As we know, 
presently the NN passes simple {{Block}} objects to the coordinator datanode 
for movement. In order to do the internal block construction on the DN side, it 
would require the complex BlockInfoStriped object and the blockIndices array. I 
think passing a list of simple objects is better than passing the complex 
object; this keeps all the computation complexity on the SPS side and makes the 
coordinator logic more readable. I'd prefer to keep the internal block 
construction logic on the NN side. Does this make sense to you?
{code}
// construct internal block
long blockId = blockInfo.getBlockId() + si.getBlockIndex();
long numBytes = StripedBlockUtil.getInternalBlockLength(
    sBlockInfo.getNumBytes(), sBlockInfo.getCellSize(),
    sBlockInfo.getDataBlockNum(), si.getBlockIndex());
Block blk = new Block(blockId, numBytes,
    blockInfo.getGenerationStamp());
{code}

> [SPS]: Erasure coded files should be considered for satisfying storage policy
> -
>
> Key: HDFS-11193
> URL: https://issues.apache.org/jira/browse/HDFS-11193
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-11193-HDFS-10285-00.patch, 
> HDFS-11193-HDFS-10285-01.patch, HDFS-11193-HDFS-10285-02.patch
>
>
> Erasure coded striped files support the storage policies {{HOT, COLD, ALLSSD}}. 
> An {{HdfsAdmin#satisfyStoragePolicy}} API call on a directory should consider 
> all immediate files under that directory and check whether the files 
> actually match the namespace storage policy. All mismatched striped 
> blocks should be chosen for block movement.






[jira] [Updated] (HDFS-11233) Fix javac warnings related to the deprecated APIs after upgrading Jackson

2016-12-12 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11233:
-
Attachment: HDFS-11233-branch-2.001.patch

Sorry, I found that Jackson 2.7.8 was also applied to branch-2. Patch for 
branch-2 attached.

> Fix javac warnings related to the deprecated APIs after upgrading Jackson
> -
>
> Key: HDFS-11233
> URL: https://issues.apache.org/jira/browse/HDFS-11233
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11233-branch-2.001.patch, HDFS-11233.001.patch
>
>
> After HADOOP-12705, the Jackson version was upgraded from 2.2.3 to 2.7.8. This 
> left many of the APIs used in Hadoop deprecated. Since 2.5, these two APIs are 
> deprecated: {{ObjectMapper#reader(Class type)}} and 
> {{ObjectMapper#writerWithType(JavaType rootType)}} 
> (http://fasterxml.github.io/jackson-databind/javadoc/2.6/com/fasterxml/jackson/databind/ObjectMapper.html).
> According to the ObjectMapper documentation, we can use 
> {{ObjectMapper#readerFor(Class type)}} and 
> {{ObjectMapper#writerFor(JavaType)}} instead.






[jira] [Commented] (HDFS-7859) Erasure Coding: Persist erasure coding policies in NameNode

2016-12-12 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743948#comment-15743948
 ] 

SammiChen commented on HDFS-7859:
-

Hi Surendra, thanks for working on it! One suggestion: you should upload 
your patch with a different name, such as "HDFS-7859.009.patch". Otherwise the 
automated integration test will not be triggered to run against your new patch.

> Erasure Coding: Persist erasure coding policies in NameNode
> ---
>
> Key: HDFS-7859
> URL: https://issues.apache.org/jira/browse/HDFS-7859
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Xinwei Qin 
>  Labels: BB2015-05-TBR, hdfs-ec-3.0-must-do
> Attachments: HDFS-7859-HDFS-7285.002.patch, 
> HDFS-7859-HDFS-7285.002.patch, HDFS-7859-HDFS-7285.003.patch, 
> HDFS-7859.001.patch, HDFS-7859.002.patch, HDFS-7859.004.patch, 
> HDFS-7859.005.patch, HDFS-7859.006.patch, HDFS-7859.007.patch, 
> HDFS-7859.008.patch
>
>
> In meetup discussion with [~zhz] and [~jingzhao], it's suggested that we 
> persist EC schemas in NameNode centrally and reliably, so that EC zones can 
> reference them by name efficiently.






[jira] [Commented] (HDFS-11152) Start erasure coding policy ID number from 1 instead of 0 to avoid potential unexpected errors

2016-12-12 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743926#comment-15743926
 ] 

SammiChen commented on HDFS-11152:
--

Hi Andrew and Wei-Chiu, thanks for your advice. Sure, I will upload a new 
patch adding test cases.

> Start erasure coding policy ID number from 1 instead of 0 to avoid potential 
> unexpected errors
> -
>
> Key: HDFS-11152
> URL: https://issues.apache.org/jira/browse/HDFS-11152
> Project: Hadoop HDFS
>  Issue Type: Task
>Affects Versions: 3.0.0-alpha1
>Reporter: SammiChen
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11152-v1.patch
>
>
> This task will change the erasure coding policy ID numbering to start from 1 
> instead of the current 0, to avoid potential unexpected errors in the code, 
> since 0 is the default value for integer variables. 
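A small illustration of the hazard (sketch only, not Hadoop code): because a 
Java {{int}} field defaults to 0, a real policy ID of 0 would be 
indistinguishable from "no policy set".

{code}
class EcPolicyIdSketch {
  private int ecPolicyId;            // unset fields default to 0 in Java

  boolean hasErasureCodingPolicy() {
    // Reliable only if valid policy IDs start from 1; if IDs start at 0,
    // a genuine policy 0 and an unset field look identical.
    return ecPolicyId != 0;
  }
}
{code}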






[jira] [Commented] (HDFS-11233) Fix javac warnings related to the deprecated APIs after upgrading Jackson

2016-12-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743925#comment-15743925
 ] 

Hudson commented on HDFS-11233:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10988 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10988/])
HDFS-11233. Fix javac warnings related to the deprecated APIs after (aajisaka: 
rev 2d4731c067ff64cd88f496eac8faaf302faa2ccc)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/Command.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/planner/NodePlan.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/oauth2/ConfRefreshTokenBasedAccessTokenProvider.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestJsonUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/connectors/JsonNodeConnector.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancerWorkItem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/CombinedHostsFileReader.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/oauth2/CredentialBasedAccessTokenProvider.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/datamodel/DiskBalancerCluster.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancerWorkStatus.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/datamodel/DiskBalancerVolume.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java


> Fix javac warnings related to the deprecated APIs after upgrading Jackson
> -
>
> Key: HDFS-11233
> URL: https://issues.apache.org/jira/browse/HDFS-11233
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11233.001.patch
>
>
> After HADOOP-12705, the Jackson version was upgraded from 2.2.3 to 2.7.8. This 
> left many of the APIs used in Hadoop deprecated. Since 2.5, these two APIs are 
> deprecated: {{ObjectMapper#reader(Class type)}} and 
> {{ObjectMapper#writerWithType(JavaType rootType)}} 
> (http://fasterxml.github.io/jackson-databind/javadoc/2.6/com/fasterxml/jackson/databind/ObjectMapper.html).
> According to the ObjectMapper documentation, we can use 
> {{ObjectMapper#readerFor(Class type)}} and 
> {{ObjectMapper#writerFor(JavaType)}} instead.






[jira] [Commented] (HDFS-11226) cacheadmin,cryptoadmin and storagepolicyadmin should support generic options

2016-12-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743923#comment-15743923
 ] 

Hudson commented on HDFS-11226:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10988 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10988/])
HDFS-11226. cacheadmin, cryptoadmin and storagepolicyadmin should (liuml07: rev 
754f15bae61b81ad3c2e3f722d1feaebf374e2c4)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/StoragePolicyAdmin.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CryptoAdmin.java


> cacheadmin,cryptoadmin and storagepolicyadmin should support generic options
> 
>
> Key: HDFS-11226
> URL: https://issues.apache.org/jira/browse/HDFS-11226
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Reporter: Archana T
>Assignee: Brahma Reddy Battula
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-11126.patch, HDFS-11226-002.patch, 
> HDFS-11226-003.patch
>
>
> When the StoragePolicy command is used with the -fs option, the following 
> error is thrown:
> {color:red} hdfs storagepolicies -fs hdfs://hacluster -listPolicies
> Can't understand command '-fs' {color}
> Usage: bin/hdfs storagepolicies 
> 
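For context, what "support generic options" typically means in Hadoop, as a 
hedged sketch (not the committed patch): route the admin tool through 
{{ToolRunner}} so {{GenericOptionsParser}} consumes {{-fs}}, {{-D}}, {{-conf}}, 
etc. before the tool sees its own arguments.

{code}
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class AdminToolSketch extends Configured implements Tool {
  @Override
  public int run(String[] args) throws Exception {
    // getConf() already reflects -fs/-D overrides parsed by ToolRunner,
    // and args no longer contains the generic options.
    return 0;
  }

  public static void main(String[] args) throws Exception {
    System.exit(ToolRunner.run(new AdminToolSketch(), args));
  }
}
{code}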






[jira] [Comment Edited] (HDFS-11233) Fix javac warnings related to the deprecated APIs after upgrading Jackson

2016-12-12 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743894#comment-15743894
 ] 

Yiqun Lin edited comment on HDFS-11233 at 12/13/16 2:36 AM:


Thanks for the review and for committing the patch. 
{quote}
Hi Yiqun Lin, would you provide a patch for branch-2?
{quote}
I am preparing the patch for branch-2, but I found that the Jackson 
version in branch-2 is 1.9.13, which has neither 
{{ObjectMapper#readerFor(Class type)}} nor 
{{ObjectMapper#writerFor(JavaType)}}. I think we should remove Jackson 1.9.13 
from the {{pom.xml}} of hadoop-project first, right?


was (Author: linyiqun):
Thanks for the review and for committing the patch. 
{quote}
Hi Yiqun Lin, would you provide a patch for branch-2?
{quote}
I am preparing the patch for branch-2, but I found that the Jackson 
version in branch-2 is 1.9.13, which has neither 
{{ObjectMapper#readerFor(Class type)}} nor 
{{ObjectMapper#writerFor(JavaType)}}.

> Fix javac warnings related to the deprecated APIs after upgrading Jackson
> -
>
> Key: HDFS-11233
> URL: https://issues.apache.org/jira/browse/HDFS-11233
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11233.001.patch
>
>
> After HADOOP-12705, the Jackson version was upgraded from 2.2.3 to 2.7.8. This 
> left many of the APIs used in Hadoop deprecated. Since 2.5, these two APIs are 
> deprecated: {{ObjectMapper#reader(Class type)}} and 
> {{ObjectMapper#writerWithType(JavaType rootType)}} 
> (http://fasterxml.github.io/jackson-databind/javadoc/2.6/com/fasterxml/jackson/databind/ObjectMapper.html).
> According to the ObjectMapper documentation, we can use 
> {{ObjectMapper#readerFor(Class type)}} and 
> {{ObjectMapper#writerFor(JavaType)}} instead.






[jira] [Commented] (HDFS-11233) Fix javac warnings related to the deprecated APIs after upgrading Jackson

2016-12-12 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743894#comment-15743894
 ] 

Yiqun Lin commented on HDFS-11233:
--

Thanks for the review and for committing the patch. 
{quote}
Hi Yiqun Lin, would you provide a patch for branch-2?
{quote}
I am preparing the patch for branch-2, but I found that the Jackson 
version in branch-2 is 1.9.13, which has neither 
{{ObjectMapper#readerFor(Class type)}} nor 
{{ObjectMapper#writerFor(JavaType)}}.

> Fix javac warnings related to the deprecated APIs after upgrading Jackson
> -
>
> Key: HDFS-11233
> URL: https://issues.apache.org/jira/browse/HDFS-11233
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11233.001.patch
>
>
> After HADOOP-12705, the Jackson version was upgraded from 2.2.3 to 2.7.8. This 
> left many of the APIs used in Hadoop deprecated. Since 2.5, these two APIs are 
> deprecated: {{ObjectMapper#reader(Class type)}} and 
> {{ObjectMapper#writerWithType(JavaType rootType)}} 
> (http://fasterxml.github.io/jackson-databind/javadoc/2.6/com/fasterxml/jackson/databind/ObjectMapper.html).
> According to the ObjectMapper documentation, we can use 
> {{ObjectMapper#readerFor(Class type)}} and 
> {{ObjectMapper#writerFor(JavaType)}} instead.






[jira] [Updated] (HDFS-11226) cacheadmin,cryptoadmin and storagepolicyadmin should support generic options

2016-12-12 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-11226:
-
Component/s: tools

> cacheadmin,cryptoadmin and storagepolicyadmin should support generic options
> 
>
> Key: HDFS-11226
> URL: https://issues.apache.org/jira/browse/HDFS-11226
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Reporter: Archana T
>Assignee: Brahma Reddy Battula
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-11126.patch, HDFS-11226-002.patch, 
> HDFS-11226-003.patch
>
>
> When the StoragePolicy command is used with the -fs option, the following 
> error is thrown:
> {color:red} hdfs storagepolicies -fs hdfs://hacluster -listPolicies
> Can't understand command '-fs' {color}
> Usage: bin/hdfs storagepolicies 
> 






[jira] [Updated] (HDFS-11226) cacheadmin,cryptoadmin and storagepolicyadmin should support generic options

2016-12-12 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-11226:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.9.0
   Status: Resolved  (was: Patch Available)

+1

Failing tests are not related. Specifically, 
{{TestTrashWithSecureEncryptionZones}} and 
{{TestSecureEncryptionZoneWithKMS}} are related to [HADOOP-13890].

I have committed this to the {{trunk}} and {{branch-2}} branches. When 
committing, I fixed the trivial checkstyle warnings. Thanks [~brahmareddy] for 
the contribution; thanks [~linyiqun] for reviewing this.

Feel free to land this on {{branch-2.8}} if you think it's nice to have. 
I'm +0 on that.

> cacheadmin,cryptoadmin and storagepolicyadmin should support generic options
> 
>
> Key: HDFS-11226
> URL: https://issues.apache.org/jira/browse/HDFS-11226
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Reporter: Archana T
>Assignee: Brahma Reddy Battula
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HDFS-11126.patch, HDFS-11226-002.patch, 
> HDFS-11226-003.patch
>
>
> When the StoragePolicy command is used with the -fs option, the following 
> error is thrown:
> {color:red} hdfs storagepolicies -fs hdfs://hacluster -listPolicies
> Can't understand command '-fs' {color}
> Usage: bin/hdfs storagepolicies 
> 






[jira] [Updated] (HDFS-11233) Fix javac warnings related to the deprecated APIs after upgrading Jackson

2016-12-12 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-11233:
-
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2

I've committed this to trunk. Hi [~linyiqun], would you provide a patch for 
branch-2?

> Fix javac warnings related to the deprecated APIs after upgrading Jackson
> -
>
> Key: HDFS-11233
> URL: https://issues.apache.org/jira/browse/HDFS-11233
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11233.001.patch
>
>
> After HADOOP-12705, the Jackson version was upgraded from 2.2.3 to 2.7.8. This 
> left many of the APIs used in Hadoop deprecated. Since 2.5, these two APIs are 
> deprecated: {{ObjectMapper#reader(Class type)}} and 
> {{ObjectMapper#writerWithType(JavaType rootType)}} 
> (http://fasterxml.github.io/jackson-databind/javadoc/2.6/com/fasterxml/jackson/databind/ObjectMapper.html).
> According to the ObjectMapper documentation, we can use 
> {{ObjectMapper#readerFor(Class type)}} and 
> {{ObjectMapper#writerFor(JavaType)}} instead.






[jira] [Updated] (HDFS-10958) Add instrumentation hooks around Datanode disk IO

2016-12-12 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-10958:
-
Attachment: HDFS-10958.05.patch

Thank you for the thorough review [~xyao]! I really appreciate it. The v05 
patch addresses most of your feedback (comments below). [This 
commit|https://github.com/arp7/hadoop/commit/1a5601460e830a161da1ad8ed586b3150c09971e#diff-5d7a0451c23486b45f5bda85b7022e5f]
 shows the delta between the v04 and v05 patches.

Comments below:
# bq. NIT: Can you update the comment (line 745, line 747) to reflect the 
changes of the returned type? "FileInputStream" -> "FileDescriptor"
Fixed.
#  bq. Line 149: BlockMetadataHeader#readHeader(File file) can be removed
Removed.
# bq. NIT: Line 1033: BlockReceiver#adjustCrcFilePosition().  can we use 
streams.flushChecksumOut() here?
We need to call flush on the buffered output stream here. Calling 
streams.flushChecksumOut() will not flush the buffered data to the underlying 
FileOutputStream.
# bq. NIT: Line 59: Can we move DatanodeUtil#createFileWithExistsCheck to 
FileIoProvider like we do for mkdirsWithExistsCheck/deleteWithExistsCheck?
This method was awkward to adapt to the call pattern in FileIoProvider. However, 
I do pass individual operations to the FileIoProvider, so the exists/create 
calls will be instrumented. Let me know if you feel strongly about it. :)
# bq. Line 1365: DataStorage#fullyDelete(). I'm OK with deprecate it.
Done. Removed the unused method.
# bq. NIT: Can you add a short description for the new key added or add cross 
reference to the description in FileIoProvider class description.
I intentionally haven't documented this key as it's not targeted at end users. 
I have the following text in the FileIoProvider javadoc; let me know if this 
looks sufficient for now. (A sketch of plugging in a custom implementation 
appears after this list.)
{code}
 * Behavior can be injected into these events by implementing
 * {@link FileIoEvents} and replacing the default implementation
 * with {@link DFSConfigKeys#DFS_DATANODE_FILE_IO_EVENTS_CLASS_KEY}.
{code}
# bq. NIT: these imports re-ordered with the imports below it
I don't see this issue in my diffs. Let me know if you still see it.
# bq. Line 1075: DatanodeUtil.dirNoFilesRecursive() can be wrapped into 
FileIoProvider.java to get some aggregated metrics of dirNoFilesRecursive() in 
addition to FileIoProvider#listFiles().
I deferred doing this since any disk slowness will show up in the 
fileIoProvider.listFiles call. Can we re-evaluate instrumenting the recursive 
call in a follow-up jira?
# bq.  Line: 202: this is a bug. We should delete the tmpFile instead of the 
file.
Good catch, fixed.
# bq. Line 322,323: Should we close crcOut like blockOut and metataRAF here? 
Can this be improved with a try-with-resource to avoid leaking.
Good catch, fixed it. It looks like this is a pre-existing bug. We can't use 
try-with-resources though as we only want to close the streams when there is an 
exception.
# bq. Line 89: FileIoEvents#onFailure() can we add a begin parameter for the 
failure code path so that we can track the time spent on FileIo/Metadata before 
failure.
Done.
# bq. CountingFileIoEvents.java - Should we count the number of errors in 
onFailure()? 
Done.
# bq. FileIoProvider.java - NIT: some of the methods are missing Javadocs for 
the last few added @param such as flush()/listDirectory()/linkCount()/mkdirs, 
etc.
Added.
# bq. Line 105: NIT: We can add a tag to the enum FileIoProvider#OPERATION to 
explicitly describe the operation type FileIo/Metadata, which could simplify 
the FileIoEvents interface. I'm OK with the current implementation, which is 
also good and easy to follow. 
Leaving it as it is for now to avoid complicating the patch further, but we can 
definitely revise the interface as we work on implementations.
# bq. Line 155: I think we should put sync() under fileIo op instead of 
metadata op based on we are passing true
Done.
# bq. Line 459: FileIoProvider#fullyDelete() should we declare exception just 
for fault injection purpose? FileUtil.fullyDelete() itself does not throw. 
Good point. The only exception we could get in fullyDelete is a 
RuntimeException so there is no change to the signature. I decided to pass all 
exceptions to the failure handler (except errors) and let it decide which ones 
are interesting to it.
# bq. Line 575: NIT: File f -> File dir, Line 598: NIT: File f -> File dir
Fixed both.
# bq. Line 148: ReplicaOutputStreams#writeDataToDisk(), should we change the 
dataOut/checksumOut to use the FileIoProvider#WrappedFileoutputStream to get 
the FileIo write counted properly?
These are already wrapped output streams. See LocalReplicaInPipeline.java:310.
# bq.  Line 83 readDataFully() should we change the dataIn/checksumIn  to use 
the FileIoProvider#WrappedFileInputStream to get the FileIo read counted 
properly? 
These are also wrapped input streams. See LocalReplica#getDataInputStream where 
the streams are allocated.
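For illustration, a hedged sketch of plugging custom behavior into these hooks; 
since {{FileIoEvents}} is introduced by this very patch, the method shape below 
is an approximation, not the committed signature:

{code}
// Hypothetical implementation; the method name/signature approximates the
// patch's FileIoEvents interface and may not match the committed code.
public class CountingEventsSketch /* implements FileIoEvents */ {
  private final java.util.concurrent.atomic.AtomicLong failures =
      new java.util.concurrent.atomic.AtomicLong();

  // Invoked by FileIoProvider when a file IO or metadata op throws; the
  // 'begin' timestamp lets us track time spent before the failure.
  public void onFailure(Exception e, long begin) {
    failures.incrementAndGet();
  }

  public long getFailureCount() {
    return failures.get();
  }
}
{code}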

I a

[jira] [Updated] (HDFS-11233) Fix javac warnings related to the deprecated APIs after upgrading Jackson

2016-12-12 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-11233:
-
Summary: Fix javac warnings related to the deprecated APIs after upgrading 
Jackson  (was: The APIs to be deprecated after Jackson upgraded)

> Fix javac warnings related to the deprecated APIs after upgrading Jackson
> -
>
> Key: HDFS-11233
> URL: https://issues.apache.org/jira/browse/HDFS-11233
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-11233.001.patch
>
>
> After HADOOP-12705, the Jackson version was upgraded from 2.2.3 to 2.7.8. This 
> left many of the APIs used in Hadoop deprecated. Since 2.5, these two APIs are 
> deprecated: {{ObjectMapper#reader(Class type)}} and 
> {{ObjectMapper#writerWithType(JavaType rootType)}} 
> (http://fasterxml.github.io/jackson-databind/javadoc/2.6/com/fasterxml/jackson/databind/ObjectMapper.html).
> According to the ObjectMapper documentation, we can use 
> {{ObjectMapper#readerFor(Class type)}} and 
> {{ObjectMapper#writerFor(JavaType)}} instead.






[jira] [Commented] (HDFS-11233) The APIs to be deprecated after Jackson upgraded

2016-12-12 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743824#comment-15743824
 ] 

Akira Ajisaka commented on HDFS-11233:
--

+1. I ran the failed tests, and all of them except 
{{TestTrashWithSecureEncryptionZones}} and {{TestSecureEncryptionZoneWithKMS}} 
succeeded. The two failures are related to HADOOP-13890.

> The APIs to be deprecated after Jackson upgraded
> 
>
> Key: HDFS-11233
> URL: https://issues.apache.org/jira/browse/HDFS-11233
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-11233.001.patch
>
>
> After HADOOP-12705, the Jackson version was upgraded from 2.2.3 to 2.7.8. This 
> left many of the APIs used in Hadoop deprecated. Since 2.5, these two APIs are 
> deprecated: {{ObjectMapper#reader(Class type)}} and 
> {{ObjectMapper#writerWithType(JavaType rootType)}} 
> (http://fasterxml.github.io/jackson-databind/javadoc/2.6/com/fasterxml/jackson/databind/ObjectMapper.html).
> According to the ObjectMapper documentation, we can use 
> {{ObjectMapper#readerFor(Class type)}} and 
> {{ObjectMapper#writerFor(JavaType)}} instead.






[jira] [Comment Edited] (HDFS-11156) Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API

2016-12-12 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743813#comment-15743813
 ] 

Weiwei Yang edited comment on HDFS-11156 at 12/13/16 1:52 AM:
--

Hello [~andrew.wang]

Please help to review the v14 patch when you get a chance. It addresses your 
comments with the changes I described 
[here|https://issues.apache.org/jira/browse/HDFS-11156?focusedCommentId=15739324&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15739324].
The UT failure is not related, and the 2 checkstyle issues are not new either 
(I explained earlier).

Thank you


was (Author: cheersyang):
Hello [~andrew.wang]

Please help to review the v14 patch when you get a chance. It addresses your 
comments with the changes I described 
[here|https://issues.apache.org/jira/browse/HDFS-11156?focusedCommentId=15739324&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15739324].

Thank you

> Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API
> 
>
> Key: HDFS-11156
> URL: https://issues.apache.org/jira/browse/HDFS-11156
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.7.3
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: BlockLocationProperties_JSON_Schema.jpg, 
> BlockLocations_JSON_Schema.jpg, FileStatuses_JSON_Schema.jpg, 
> HDFS-11156.01.patch, HDFS-11156.02.patch, HDFS-11156.03.patch, 
> HDFS-11156.04.patch, HDFS-11156.05.patch, HDFS-11156.06.patch, 
> HDFS-11156.07.patch, HDFS-11156.08.patch, HDFS-11156.09.patch, 
> HDFS-11156.10.patch, HDFS-11156.11.patch, HDFS-11156.12.patch, 
> HDFS-11156.13.patch, HDFS-11156.14.patch, Output_JSON_format_v10.jpg, 
> SampleResponse_JSON.jpg
>
>
> The following webhdfs REST API
> {code}
> http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=GET_BLOCK_LOCATIONS&offset=0&length=1
> {code}
> will get a response like
> {code}
> {
>   "LocatedBlocks" : {
> "fileLength" : 1073741824,
> "isLastBlockComplete" : true,
> "isUnderConstruction" : false,
> "lastLocatedBlock" : { ... },
> "locatedBlocks" : [ {...} ]
>   }
> }
> {code}
> This corresponds to *o.a.h.h.p.LocatedBlocks*. However, according to the 
> *FileSystem* API, 
> {code}
> public BlockLocation[] getFileBlockLocations(Path p, long start, long len)
> {code}
> clients would expect an array of BlockLocation. This mismatch should be 
> fixed. Marked as an Incompatible change, as this will change the output of 
> the GET_BLOCK_LOCATIONS API.
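For reference, the {{FileSystem}} contract the new op needs to satisfy, as a 
minimal client sketch (host, port, and path are placeholders):

{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WebHdfsLocationsSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(
        URI.create("webhdfs://namenode:50070"), new Configuration());
    // Clients expect BlockLocation[], not the raw LocatedBlocks protocol
    // object that GET_BLOCK_LOCATIONS currently returns.
    BlockLocation[] locs =
        fs.getFileBlockLocations(new Path("/user/test/file"), 0, 1);
    for (BlockLocation loc : locs) {
      System.out.println(loc);  // hosts, offset, length, ...
    }
  }
}
{code}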






[jira] [Commented] (HDFS-11156) Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API

2016-12-12 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743813#comment-15743813
 ] 

Weiwei Yang commented on HDFS-11156:


Hello [~andrew.wang]

Please help to review the v14 patch when you get a chance. It addresses your 
comments with the changes I described 
[here|https://issues.apache.org/jira/browse/HDFS-11156?focusedCommentId=15739324&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15739324].

Thank you

> Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API
> 
>
> Key: HDFS-11156
> URL: https://issues.apache.org/jira/browse/HDFS-11156
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.7.3
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: BlockLocationProperties_JSON_Schema.jpg, 
> BlockLocations_JSON_Schema.jpg, FileStatuses_JSON_Schema.jpg, 
> HDFS-11156.01.patch, HDFS-11156.02.patch, HDFS-11156.03.patch, 
> HDFS-11156.04.patch, HDFS-11156.05.patch, HDFS-11156.06.patch, 
> HDFS-11156.07.patch, HDFS-11156.08.patch, HDFS-11156.09.patch, 
> HDFS-11156.10.patch, HDFS-11156.11.patch, HDFS-11156.12.patch, 
> HDFS-11156.13.patch, HDFS-11156.14.patch, Output_JSON_format_v10.jpg, 
> SampleResponse_JSON.jpg
>
>
> The following webhdfs REST API
> {code}
> http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=GET_BLOCK_LOCATIONS&offset=0&length=1
> {code}
> will get a response like
> {code}
> {
>   "LocatedBlocks" : {
> "fileLength" : 1073741824,
> "isLastBlockComplete" : true,
> "isUnderConstruction" : false,
> "lastLocatedBlock" : { ... },
> "locatedBlocks" : [ {...} ]
>   }
> }
> {code}
> This represents *o.a.h.h.p.LocatedBlocks*. However, according to the
> *FileSystem* API,
> {code}
> public BlockLocation[] getFileBlockLocations(Path p, long start, long len)
> {code}
> clients would expect an array of BlockLocation. This mismatch should be
> fixed. Marked as an incompatible change because this will change the output
> of the GET_BLOCK_LOCATIONS API.
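
For illustration, a minimal sketch of consuming the {{FileSystem}} contract that the new op targets (the URI, path, and length below are placeholders, not taken from the patch):

{code}
import java.net.URI;
import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockLocationsExample {
  public static void main(String[] args) throws Exception {
    // Connect over WebHDFS; host and port are made up for this sketch.
    FileSystem fs = FileSystem.get(
        URI.create("webhdfs://namenode.example.com:50070"), new Configuration());
    // Clients expect an array of BlockLocation, one element per block range.
    BlockLocation[] locs =
        fs.getFileBlockLocations(new Path("/tmp/file"), 0, 1073741824L);
    for (BlockLocation loc : locs) {
      System.out.println(loc.getOffset() + "-"
          + (loc.getOffset() + loc.getLength())
          + " on " + Arrays.toString(loc.getHosts()));
    }
  }
}
{code}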



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11182) Update DataNode to use DatasetVolumeChecker

2016-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743798#comment-15743798
 ] 

Hadoop QA commented on HDFS-11182:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 21 new + 384 unchanged - 10 fixed = 405 total (was 394) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 8 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m 19s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestTrashWithSecureEncryptionZones |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestSecureEncryptionZoneWithKMS |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11182 |
| GITHUB PR | https://github.com/apache/hadoop/pull/168 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 59a4a67b1803 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c6a3923 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17843/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17843/artifact/patchprocess/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17843/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Res

[jira] [Commented] (HDFS-8411) Add bytes count metrics to datanode for ECWorker

2016-12-12 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743764#comment-15743764
 ] 

Kai Zheng commented on HDFS-8411:
-

Thanks Sammi for the update! It needs a rebase since HDFS-11368 went in.

> Add bytes count metrics to datanode for ECWorker
> 
>
> Key: HDFS-8411
> URL: https://issues.apache.org/jira/browse/HDFS-8411
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: SammiChen
> Attachments: HDFS-8411-001.patch, HDFS-8411-002.patch, 
> HDFS-8411-003.patch, HDFS-8411-004.patch, HDFS-8411-005.patch, 
> HDFS-8411-006.patch, HDFS-8411-007.patch, HDFS-8411-008.patch, 
> HDFS-8411-009.patch, HDFS-8411-011.patch, HDFS-8411-012.patch, 
> HDFS-8411.010.patch
>
>
> This is a sub-task of HDFS-7674. It counts the amount of data read from 
> local or remote datanodes for decoding work, and also the amount of data 
> written to local or remote datanodes.
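
As a rough illustration of what such counters could look like with the metrics2 library (this is not the actual patch; the class and metric names are made up):

{code}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

// Hypothetical sketch of byte-count metrics for EC reconstruction work.
@Metrics(about = "ECWorker byte counts", context = "dfs")
class EcWorkerByteMetrics {
  @Metric("Bytes read from local or remote replicas for decoding")
  MutableCounterLong ecBytesRead;

  @Metric("Bytes written to local or remote datanodes after decoding")
  MutableCounterLong ecBytesWritten;

  void onRead(long n)  { ecBytesRead.incr(n); }   // called per read batch
  void onWrite(long n) { ecBytesWritten.incr(n); } // called per write batch
}
{code}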



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11226) cacheadmin,cryptoadmin and storagepolicyadmin should support generic options

2016-12-12 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743743#comment-15743743
 ] 

Brahma Reddy Battula commented on HDFS-11226:
-

[~liuml07] can you check the latest patch? Maybe I can fix the checkstyle 
issue while committing, or I can upload a new patch. The test failures are unrelated.

> cacheadmin,cryptoadmin and storagepolicyadmin should support generic options
> 
>
> Key: HDFS-11226
> URL: https://issues.apache.org/jira/browse/HDFS-11226
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Archana T
>Assignee: Brahma Reddy Battula
>Priority: Minor
> Attachments: HDFS-11126.patch, HDFS-11226-002.patch, 
> HDFS-11226-003.patch
>
>
> When the StoragePolicy command is used with the -fs option -- 
> the following error is thrown --
> {color:red} hdfs storagepolicies -fs hdfs://hacluster -listPolicies
> Can't understand command '-fs' {color}
> Usage: bin/hdfs storagepolicies 
> 
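
For context, the usual way Hadoop CLIs pick up generic options such as {{-fs}} is the {{Tool}}/{{ToolRunner}} pattern; a minimal sketch follows (the class name is made up, and this is not the attached patch):

{code}
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Hypothetical admin tool that supports -fs/-conf/-D automatically.
public class ExampleAdmin extends Configured implements Tool {
  @Override
  public int run(String[] args) throws Exception {
    // ToolRunner has already applied generic options to getConf(), so
    // "-fs hdfs://hacluster" is reflected in the default FileSystem here.
    FileSystem fs = FileSystem.get(getConf());
    System.out.println("Operating on " + fs.getUri());
    return 0;
  }

  public static void main(String[] argv) throws Exception {
    System.exit(ToolRunner.run(new ExampleAdmin(), argv));
  }
}
{code}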



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11226) cacheadmin,cryptoadmin and storagepolicyadmin should support generic options

2016-12-12 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-11226:

Summary: cacheadmin,cryptoadmin and storagepolicyadmin should support 
generic options  (was: Storagepolicy command is not working with "-fs" option)

> cacheadmin,cryptoadmin and storagepolicyadmin should support generic options
> 
>
> Key: HDFS-11226
> URL: https://issues.apache.org/jira/browse/HDFS-11226
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Archana T
>Assignee: Brahma Reddy Battula
>Priority: Minor
> Attachments: HDFS-11126.patch, HDFS-11226-002.patch, 
> HDFS-11226-003.patch
>
>
> When StoragePolicy cmd is used with -fs option -- 
> Following Error is thrown --
>  {color: red} hdfs storagepolicies -fs hdfs://hacluster -listPolicies
> Can't understand command '-fs' {color}
> Usage: bin/hdfs storagepolicies 
> 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11238) checkstyle problem in NameNode.java

2016-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743572#comment-15743572
 ] 

Hadoop QA commented on HDFS-11238:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 94 unchanged - 69 fixed = 94 total (was 163) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m  9s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}110m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestFsDatasetCache |
|   | hadoop.hdfs.TestSecureEncryptionZoneWithKMS |
|   | hadoop.hdfs.TestTrashWithSecureEncryptionZones |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11238 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842883/HDFS-11238.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2f5529bdd827 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f66f618 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17842/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17842/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17842/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> checkstyle problem in NameNode.java
> ---
>

[jira] [Commented] (HDFS-11182) Update DataNode to use DatasetVolumeChecker

2016-12-12 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743495#comment-15743495
 ] 

Arpit Agarwal commented on HDFS-11182:
--

Updated the pull request to address unit test failures from the last run. 
Removed two unit tests:
# _FsVolumeList#testCheckDirsWithClosedVolume_ - replaced with 
_TestDatasetVolumeCheckerFailures#testCheckingClosedVolume_.
# _TestFsDatasetImpl#testChangeVolumeWithRunningCheckDirs_ - no longer relevant 
as volume checks are always parallelized on an initial snapshot of the volume 
list (see the sketch below).
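
A self-contained sketch of that pattern (the types here are stand-ins, not the real DatasetVolumeChecker):

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelVolumeCheckSketch {
  interface Volume { boolean check(); }  // stand-in for the real volume type

  static List<Volume> snapshotAndCheck(List<Volume> liveVolumes) throws Exception {
    // Snapshot first: volumes added or removed mid-run cannot affect this check.
    List<Volume> snapshot = new ArrayList<>(liveVolumes);
    ExecutorService pool = Executors.newFixedThreadPool(Math.max(1, snapshot.size()));
    List<Future<Boolean>> results = new ArrayList<>();
    for (Volume v : snapshot) {
      results.add(pool.submit((Callable<Boolean>) v::check)); // checks run in parallel
    }
    List<Volume> failed = new ArrayList<>();
    for (int i = 0; i < snapshot.size(); i++) {
      if (!results.get(i).get()) {
        failed.add(snapshot.get(i));
      }
    }
    pool.shutdown();
    return failed;
  }
}
{code}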

> Update DataNode to use DatasetVolumeChecker
> ---
>
> Key: HDFS-11182
> URL: https://issues.apache.org/jira/browse/HDFS-11182
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> Update DataNode to use the DatasetVolumeChecker class introduced in 
> HDFS-11149 to parallelize disk checks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11152) Start erasure coding policy ID number from 1 instead of 0 to avoid potential unexpected errors

2016-12-12 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743430#comment-15743430
 ] 

Wei-Chiu Chuang commented on HDFS-11152:


Ping. Yeah, I agree some extra regression tests would be greatly appreciated.

> Start erasure coding policy ID number from 1 instead of 0 to avoid potential 
> unexpected errors
> -
>
> Key: HDFS-11152
> URL: https://issues.apache.org/jira/browse/HDFS-11152
> Project: Hadoop HDFS
>  Issue Type: Task
>Affects Versions: 3.0.0-alpha1
>Reporter: SammiChen
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11152-v1.patch
>
>
> This task will change the erasure coding policy ID numbering to start from 1 
> instead of the current 0, to avoid potential unexpected errors in the code, 
> since 0 is the default value for integer variables. 
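
A tiny illustration of the hazard being avoided (names are made up for this sketch):

{code}
class EcPolicyIdSketch {
  static final byte NO_POLICY = 0;   // safe sentinel once real IDs start at 1
  byte ecPolicyId;                   // uninitialized fields default to 0

  boolean hasEcPolicy() {
    // If a real policy used ID 0, "unset" and "policy 0" would collide here.
    return ecPolicyId != NO_POLICY;
  }
}
{code}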



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10958) Add instrumentation hooks around Datanode disk IO

2016-12-12 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743429#comment-15743429
 ] 

Xiaoyu Yao commented on HDFS-10958:
---

Thanks [~arpitagarwal] for working on this. The latest patch looks pretty good 
to me. I just have a few minor questions/issues below.

1. NativeIO.java#getShareDeleteFileDescriptor
NIT: Can you update the comment (line 745, line 747) to reflect the change of 
the returned type? "FileInputStream" -> "FileDescriptor"

2. BlockMetadataHeader.java
Line 149: BlockMetadataHeader#readHeader(File file) can be removed

Line 85: From the caller of BlockMetadataHeader#readDataChecksum() in 
FsDatasetImpl#computeChecksum, we can get a hook for FileInputStream. Is it 
possible to add a hook for readDataChecksum into FileIoProvider or a 
WrappedFileInputStream to measure read performance?

3. BlockReceiver.java
NIT: Line 1033: BlockReceiver#adjustCrcFilePosition() 
can we use streams.flushChecksumOut() here?

4. DatanodeUtil.java
NIT: Line 59: Can we move DatanodeUtil#createFileWithExistsCheck to 
FileIoProvider like
we do for mkdirsWithExistsCheck/deleteWithExistsCheck?

Line 1365: DataStorage#fullyDelete(). I'm OK with deprecating it.
There seems to be no reference to this method, so maybe we can remove it.

5. DFSConfigKeys.java
NIT: Can you add a short description for the new key, or a cross reference to 
the description in the FileIoProvider class?

6. FsDatasetImpl.java
NIT: these imports are re-ordered relative to the imports below 
(only one was added by this change, though):
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.DFSUtilClient;
import org.apache.hadoop.hdfs.ExtendedBlockId;
import org.apache.hadoop.hdfs.server.datanode.FileIoProvider;
import org.apache.hadoop.util.AutoCloseableLock;

7. FSVolumeImpl.java
Line 1075: DatanodeUtil.dirNoFilesRecursive() can be wrapped into 
FileIoProvider.java to
get some aggregated metrics of dirNoFilesRecursive() in addition to 
FileIoProvider#listFiles().

 8. LocalReplica.java
Line: 202: this is a bug. We should delete the tmpFile instead of the file.
{code}
if (!fileIoProvider.delete(getVolume(), file)) 
{code}

9. LocalReplicaInPipeline.java
Line 322,323: Should we close crcOut like blockOut and metaRAF here? 
Can this be improved with try-with-resources to avoid leaking? (See the sketch 
after this list.)

10. FileIoEvents.java
Line 89: FileIoEvents#onFailure() can we add a begin parameter for the failure 
code path so that we can track the time spent on FileIo/Metadata before failure.

11. CountingFileIoEvents.java
Should we count the number of errors in onFailure()? 

12. FileIoProvider.java
NIT: some of the methods are missing Javadoc for the last few added 
@param tags, such as flush()/listDirectory()/linkCount()/mkdirs, etc.

Line 105: NIT: We can add a tag to the enum FileIoProvider#OPERATION to 
explicitly
describe the operation type FileIo/Metadata, which could simplify the 
FileIoEvents interface. 
I'm OK with the current implementation, which is also good and easy to follow. 

Line 155: I think we should put sync() under the fileIo op instead of the 
metadata op, based on the fact that we are passing true to 
{code}fos.getChannel().force(true);{code}, which forces both metadata and data 
to be written to the device.

Line 459: FileIoProvider#fullyDelete(): should we declare the exception just 
for fault-injection purposes? FileUtil.fullyDelete() itself does not throw. 

Line 575: NIT: File f -> File dir
Line 598: NIT: File f -> File dir

13. ReplicaOutputStreams.java
Line 148: ReplicaOutputStreams#writeDataToDisk(): should we change 
dataOut/checksumOut to use the FileIoProvider#WrappedFileOutputStream 
so the FileIo writes are counted properly?

14. ReplicaInputStreams.java
Line 83: readDataFully(): should we change dataIn/checksumIn 
to use the FileIoProvider#WrappedFileInputStream so the FileIo reads are 
counted properly? 
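
Regarding item 9, a self-contained sketch of the try-with-resources suggestion (the file names are placeholders, not the real replica layout):

{code}
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.RandomAccessFile;

public class CloseStreamsSketch {
  static void writeReplica() throws IOException {
    // All three resources are closed in reverse order even if a write throws,
    // so none of blockOut/crcOut/metaRAF can leak.
    try (FileOutputStream blockOut = new FileOutputStream("block.dat", true);
         FileOutputStream crcOut = new FileOutputStream("block.crc", true);
         RandomAccessFile metaRAF = new RandomAccessFile("block.meta", "rw")) {
      blockOut.write(new byte[]{1, 2, 3});
      crcOut.write(new byte[]{0, 0, 0, 42});
    }
  }
}
{code}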




> Add instrumentation hooks around Datanode disk IO
> -
>
> Key: HDFS-10958
> URL: https://issues.apache.org/jira/browse/HDFS-10958
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Xiaoyu Yao
>Assignee: Arpit Agarwal
> Attachments: HDFS-10958.01.patch, HDFS-10958.02.patch, 
> HDFS-10958.03.patch, HDFS-10958.04.patch
>
>
> This ticket is opened to add instrumentation hooks around Datanode disk IO 
> based on refactor work from HDFS-10930.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11238) checkstyle problem in NameNode.java

2016-12-12 Thread Ethan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Li updated HDFS-11238:

Status: Patch Available  (was: Open)

There are many checkstyle problems in the NameNode.java file. I am afraid of 
affecting too many possible pending patches, so this patch only targets the 
issues in the createNameNode method.

> checkstyle problem in NameNode.java
> ---
>
> Key: HDFS-11238
> URL: https://issues.apache.org/jira/browse/HDFS-11238
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Ethan Li
>Priority: Trivial
> Attachments: HDFS-11238.001.patch
>
>
> switch and case should be at the same level; avoid nested blocks; array 
> brackets are at an illegal position
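
For reference, the checkstyle-clean forms of the three complaints look roughly like this (illustrative code, not the actual NameNode.java):

{code}
class CheckstyleExamples {
  enum StartupOption { FORMAT, REGULAR }

  void examples(StartupOption opt, boolean formatted) {
    // 1. "case" at the same indentation level as "switch":
    switch (opt) {
    case FORMAT:
      System.out.println("format");
      break;
    default:
      System.out.println("start");
    }
    // 2. No redundant nested block around a statement:
    if (formatted) {
      System.out.println("done");   // not wrapped in an extra { ... }
    }
    // 3. Array brackets attached to the type, not the variable:
    String[] argv = new String[0];  // not: String argv[]
    System.out.println(argv.length);
  }
}
{code}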



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11238) checkstyle problem in NameNode.java

2016-12-12 Thread Ethan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Li updated HDFS-11238:

Attachment: HDFS-11238.001.patch

> checkstyle problem in NameNode.java
> ---
>
> Key: HDFS-11238
> URL: https://issues.apache.org/jira/browse/HDFS-11238
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Ethan Li
>Priority: Trivial
> Attachments: HDFS-11238.001.patch
>
>
> switch and case should be at the same level; avoid nested blocks; array 
> brackets are at an illegal position



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11238) checkstyle problem in NameNode.java

2016-12-12 Thread Ethan Li (JIRA)
Ethan Li created HDFS-11238:
---

 Summary: checkstyle problem in NameNode.java
 Key: HDFS-11238
 URL: https://issues.apache.org/jira/browse/HDFS-11238
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Ethan Li
Priority: Trivial


switch and case should be at the same level; avoid nested blocks; array 
brackets are at an illegal position



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11121) Add assertions to BlockInfo#addStorage to protect from breaking reportedBlock-blockGroup mapping

2016-12-12 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743244#comment-15743244
 ] 

Wei-Chiu Chuang edited comment on HDFS-11121 at 12/12/16 9:47 PM:
--

[~tasanuma0829] thanks for the patch. I reviewed it and I think it's 
mostly okay, barring a few questions.

# I wonder if we can add a similar isStripedBlockId assertion in the 
BlockInfoStriped constructor.
# This is unrelated to your patch, but do you know if it is true that a 
striped block _always_ has a negative block id? Looking at 
{{BlockIdManager#isStripedBlockID}} it seems true. I wasn't involved in the 
design stage of EC, but this assumption is risky -- I have seen hadoop-2 
clusters (with no striped files, of course) showing negative block ids. So 
maybe you can also consider adding a !isStripedBlockId assertion check in 
BlockInfoContiguous.

Of course, we need to understand why the id of a contiguous block can go below 
zero.


was (Author: jojochuang):
[~tasanuma0829] thanks for the patch. I reviewed it and I think it's 
mostly okay, barring a few questions.

# I wonder if we can add a similar isStripedBlockId assertion in the 
BlockInfoStriped constructor.
# This is unrelated to your patch, but do you know if it is true that a 
striped block _always_ has a negative block id? Looking at 
{{BlockIdManager#isStripedBlockID}} it seems true. I wasn't involved in the 
design stage of EC, but this assumption is risky -- I have seen hadoop-2 
clusters (with no striped files, of course) showing negative block ids.

> Add assertions to BlockInfo#addStorage to protect from breaking 
> reportedBlock-blockGroup mapping
> 
>
> Key: HDFS-11121
> URL: https://issues.apache.org/jira/browse/HDFS-11121
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Critical
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11121.1.patch
>
>
> There are no assertions in {{BlockInfo.addStorage}}. This may cause 
> {{BlockInfo}} instances to accept strange block reports and result in serious 
> bugs, like HDFS-10858.
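
To make the ID-space assumption above concrete, a tiny sketch that mirrors the spirit of {{BlockIdManager#isStripedBlockID}} (this is not the HDFS source):

{code}
class BlockIdSketch {
  // Striped block group IDs are allocated from a negative range, so the
  // sign bit effectively doubles as the "striped" marker.
  static boolean isStripedBlockId(long id) {
    return id < 0;
  }

  public static void main(String[] args) {
    System.out.println(isStripedBlockId(-9223372036854775552L)); // true: block group
    System.out.println(isStripedBlockId(1073741825L));           // false: contiguous
  }
}
{code}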



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11121) Add assertions to BlockInfo#addStorage to protect from breaking reportedBlock-blockGroup mapping

2016-12-12 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743244#comment-15743244
 ] 

Wei-Chiu Chuang commented on HDFS-11121:


[~tasanuma0829] thanks for the patch. I reviewed it and I think it's 
mostly okay, barring a few questions.

# I wonder if we can add a similar isStripedBlockId assertion in the 
BlockInfoStriped constructor.
# This is unrelated to your patch, but do you know if it is true that a 
striped block _always_ has a negative block id? Looking at 
{{BlockIdManager#isStripedBlockID}} it seems true. I wasn't involved in the 
design stage of EC, but this assumption is risky -- I have seen hadoop-2 
clusters (with no striped files, of course) showing negative block ids.

> Add assertions to BlockInfo#addStorage to protect from breaking 
> reportedBlock-blockGroup mapping
> 
>
> Key: HDFS-11121
> URL: https://issues.apache.org/jira/browse/HDFS-11121
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Critical
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11121.1.patch
>
>
> There are no assertions in {{BlockInfo.addStorage}}. This may cause 
> {{BlockInfo}} instances to accept strange block reports and result in serious 
> bugs, like HDFS-10858.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11164) Mover should avoid unnecessary retries if the block is pinned

2016-12-12 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743238#comment-15743238
 ] 

Uma Maheswara Rao G commented on HDFS-11164:


Thanks a lot, [~surendrasingh], for verifying and confirming.

Thank you for suggesting ideas. In general it's a nice idea, but one fact we 
should consider is keeping maintenance work at the NN low. Since this is not 
critical namespace/block info, it's OK to leave this info to the DN. 

I will go ahead and commit this patch!

> Mover should avoid unnecessary retries if the block is pinned
> -
>
> Key: HDFS-11164
> URL: https://issues.apache.org/jira/browse/HDFS-11164
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-11164-00.patch, HDFS-11164-01.patch, 
> HDFS-11164-02.patch, HDFS-11164-03.patch
>
>
> When the mover tries to move a pinned block to another datanode, it 
> internally hits the following IOException and marks the block movement as a 
> {{failure}}. Since the Mover has the {{dfs.mover.retry.max.attempts}} config, 
> it will keep retrying this block until it reaches {{retryMaxAttempts}}. If the 
> block movement failures are only due to block pinning, the retries are 
> unnecessary. The idea of this jira is to avoid retry attempts for pinned 
> blocks, as they will never be able to move to a different node. 
> {code}
> 2016-11-22 10:56:10,537 WARN 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher: Failed to move 
> blk_1073741825_1001 with size=52 from 127.0.0.1:19501:DISK to 
> 127.0.0.1:19758:ARCHIVE through 127.0.0.1:19501
> java.io.IOException: Got error, status=ERROR, status message opReplaceBlock 
> BP-1772076264-10.252.146.200-1479792322960:blk_1073741825_1001 received 
> exception java.io.IOException: Got error, status=ERROR, status message Not 
> able to copy block 1073741825 to /127.0.0.1:19826 because it's pinned , copy 
> block BP-1772076264-10.252.146.200-1479792322960:blk_1073741825_1001 from 
> /127.0.0.1:19501, reportedBlock move is failed
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:118)
>   at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.receiveResponse(Dispatcher.java:417)
>   at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.dispatch(Dispatcher.java:358)
>   at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.access$5(Dispatcher.java:322)
>   at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$1.run(Dispatcher.java:1075)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
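
As a rough sketch of the idea (not the attached patch; the types and the pinned-status check are made up), the dispatcher would stop scheduling retries once a failure is known to be caused by pinning:

{code}
public class PinnedRetrySketch {
  // Made-up stand-ins for the mover's real move-result types.
  enum Status { SUCCESS, PINNED, OTHER_ERROR }

  static Status tryMove(int attempt) {
    return Status.PINNED;              // simulate a pinned replica
  }

  public static void main(String[] args) {
    final int retryMaxAttempts = 10;   // dfs.mover.retry.max.attempts
    for (int attempt = 0; attempt < retryMaxAttempts; attempt++) {
      Status s = tryMove(attempt);
      if (s == Status.SUCCESS) {
        break;
      }
      if (s == Status.PINNED) {
        // A pinned replica can never move off its datanode, so further
        // retries would only burn the retry budget.
        System.out.println("pinned; giving up after attempt " + attempt);
        break;
      }
    }
  }
}
{code}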



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11094) Send back HAState along with NamespaceInfo during a versionRequest as an optional parameter

2016-12-12 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743081#comment-15743081
 ] 

Eric Badger commented on HDFS-11094:


The test failure looks unrelated and doesn't fail for me locally.

> Send back HAState along with NamespaceInfo during a versionRequest as an 
> optional parameter
> ---
>
> Key: HDFS-11094
> URL: https://issues.apache.org/jira/browse/HDFS-11094
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HDFS-11094.001.patch, HDFS-11094.002.patch, 
> HDFS-11094.003.patch, HDFS-11094.004.patch, HDFS-11094.005.patch, 
> HDFS-11094.006.patch, HDFS-11094.007.patch, HDFS-11094.008.patch, 
> HDFS-11094.009-b2.patch, HDFS-11094.009.patch, HDFS-11094.010-b2.patch, 
> HDFS-11094.010.patch, HDFS-11094.011.patch
>
>
> The datanode should know which NN is active when it is connecting/registering 
> to the NN. Currently, it only figures this out during its first (and 
> subsequent) heartbeat(s), so there is a period of time where the datanode 
> is alive and registered but can't actually do anything because it doesn't 
> know which NN is active. A byproduct of this is that the MiniDFSCluster will 
> become active before it knows which NN is active, which can lead to NPEs when 
> calling getActiveNN(). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11232) System.err should be System.out

2016-12-12 Thread Ethan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743077#comment-15743077
 ] 

Ethan Li commented on HDFS-11232:
-

Thanks, Ewan. 
I think it may be good to submit another patch that only fixes the checkstyle 
error, since checkHaStateChange in the same file follows the checkstyle 
requirements, as you mentioned.

> System.err should be System.out
> ---
>
> Key: HDFS-11232
> URL: https://issues.apache.org/jira/browse/HDFS-11232
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Ethan Li
>Priority: Trivial
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11232.001.patch
>
>
> In 
> /Users/Ethan/Worksplace/IntelliJWorkspace/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java,
>  System.err.println("Generating new cluster id:"); is used. I think it should 
> be System.out.println(...), since this is not an error message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6984) In Hadoop 3, make FileStatus serialize itself via protobuf

2016-12-12 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743045#comment-15743045
 ] 

Chris Douglas commented on HDFS-6984:
-

bq. Can we split the Serializable stuff into a separate change?
Moved to HADOOP-13895

> In Hadoop 3, make FileStatus serialize itself via protobuf
> --
>
> Key: HDFS-6984
> URL: https://issues.apache.org/jira/browse/HDFS-6984
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6984.001.patch, HDFS-6984.002.patch, 
> HDFS-6984.003.patch, HDFS-6984.nowritable.patch
>
>
> FileStatus was a Writable in Hadoop 2 and earlier.  Originally, we used this 
> to serialize it and send it over the wire.  But in Hadoop 2 and later, we 
> have the protobuf {{HdfsFileStatusProto}} which serves to serialize this 
> information.  The protobuf form is preferable, since it allows us to add new 
> fields in a backwards-compatible way.  Another issue is that a lot of 
> FileStatus subclasses already don't override the Writable methods of the 
> superclass, breaking the interface contract that read(status.write) should be 
> equal to the original status.
> In Hadoop 3, we should just make FileStatus serialize itself via protobuf so 
> that we don't have to deal with these issues.  It's probably too late to do 
> this in Hadoop 2, since user code may be relying on the existing FileStatus 
> serialization there.
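
Roughly, the intended round trip would look like this sketch, assuming the existing PBHelperClient converters are used (the final patch may take a different route):

{code}
import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.HdfsFileStatusProto;
import org.apache.hadoop.hdfs.protocolPB.PBHelperClient;

public class FileStatusPbSketch {
  // Serialize through the protobuf form instead of Writable; new optional
  // proto fields added later remain backwards-compatible on the wire.
  static byte[] toBytes(HdfsFileStatus status) {
    return PBHelperClient.convert(status).toByteArray();
  }

  static HdfsFileStatus fromBytes(byte[] wire) throws Exception {
    return PBHelperClient.convert(HdfsFileStatusProto.parseFrom(wire));
  }
}
{code}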



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11160) VolumeScanner reports write-in-progress replicas as corrupt incorrectly

2016-12-12 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15742965#comment-15742965
 ] 

Wei-Chiu Chuang commented on HDFS-11160:


Test failures are unrelated and are the result of HADOOP-13565.

> VolumeScanner reports write-in-progress replicas as corrupt incorrectly
> ---
>
> Key: HDFS-11160
> URL: https://issues.apache.org/jira/browse/HDFS-11160
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
> Environment: CDH5.7.4
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-11160.001.patch, HDFS-11160.002.patch, 
> HDFS-11160.003.patch, HDFS-11160.004.patch, HDFS-11160.005.patch, 
> HDFS-11160.006.patch, HDFS-11160.branch-2.patch, HDFS-11160.reproduce.patch
>
>
> Due to a race condition initially reported in HDFS-6804, VolumeScanner may 
> erroneously detect good replicas as corrupt. This is serious because in some 
> cases it results in data loss if all replicas are declared corrupt. This bug 
> is especially prominent when there are a lot of append requests via 
> HttpFs/WebHDFS.
> We are investigating an incident that caused a very high block corruption rate 
> in a relatively small cluster. Initially, we thought HDFS-11056 was to blame. 
> However, after applying HDFS-11056, we are still seeing VolumeScanner 
> reporting corrupt replicas.
> It turns out that if a replica is being appended to while the VolumeScanner is 
> scanning it, the VolumeScanner may use the new checksum to compare against old 
> data, causing a checksum mismatch.
> I have a unit test to reproduce the error; will attach later. A quick and 
> simple fix is to hold the FsDatasetImpl lock and read the checksum from disk.
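
A minimal sketch of that quick fix (the lock object and the 4-byte-CRC layout here are illustrative, not the actual FsDatasetImpl code):

{code}
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

public class ScannerChecksumSketch {
  private final Object datasetLock = new Object(); // stands in for the dataset lock

  // Read the last 4-byte CRC of the meta file while holding the dataset lock,
  // so a concurrent append cannot advance the checksum under the scanner.
  int readLastCrc(File metaFile) throws IOException {
    synchronized (datasetLock) {
      try (RandomAccessFile raf = new RandomAccessFile(metaFile, "r")) {
        raf.seek(raf.length() - 4);
        return raf.readInt();
      }
    }
  }
}
{code}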



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11131) TestThrottledAsyncChecker#testContextIsPassed is flaky

2016-12-12 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15742952#comment-15742952
 ] 

Xiaoyu Yao commented on HDFS-11131:
---

Another instance here: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11248/testReport/

> TestThrottledAsyncChecker#testContextIsPassed is flaky
> --
>
> Key: HDFS-11131
> URL: https://issues.apache.org/jira/browse/HDFS-11131
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> This test failed in a few precommit runs. e.g.
> https://builds.apache.org/job/PreCommit-HDFS-Build/17481/testReport/org.apache.hadoop.hdfs.server.datanode.checker/TestThrottledAsyncChecker/testContextIsPassed/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11123) [SPS] Make storage policy satisfier daemon work on/off dynamically

2016-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15742805#comment-15742805
 ] 

Hadoop QA commented on HDFS-11123:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
33s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 18s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}120m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
|   | hadoop.hdfs.TestFileCreationDelete |
|   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicy |
|   | hadoop.hdfs.server.namenode.TestStoragePolicySatisfier |
|   | hadoop.hdfs.TestFileChecksum |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-11123 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842637/HDFS-11123-HDFS-10285-00.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux d0a385b5dae6 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10285 / 94e3583 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17841/artifact/patchprocess/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17841/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17841/testReport

[jira] [Commented] (HDFS-11234) distcp performance is suboptimal for high bandwidth/high latency setups

2016-12-12 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15742769#comment-15742769
 ] 

Mingliang Liu commented on HDFS-11234:
--

Can you have a look at [HDFS-10326]? I think it's related. But that patch is 
still open. Maybe we can consolidate the efforts. Thanks,

> distcp performance is suboptimal for high bandwidth/high latency setups
> ---
>
> Key: HDFS-11234
> URL: https://issues.apache.org/jira/browse/HDFS-11234
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.7.1
>Reporter: Suresh Bahuguna
>
> Because distcp uses a TCP socket with the buffer size set to 128K, throughput 
> is quite poor on setups with very high bandwidth but also very high latency. 
> This is because TCP stops sending more data until it receives ACKs. By not 
> setting the socket size and letting the Linux kernel manage the socket, we 
> should be able to get optimal performance.
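
For a sense of scale: a fixed 128 KB window over a 100 ms round trip caps TCP throughput at roughly 128 KB / 0.1 s ≈ 1.28 MB/s (about 10 Mbit/s), regardless of link capacity. A minimal sketch of the proposal follows (host and port are placeholders):

{code}
import java.net.InetSocketAddress;
import java.net.Socket;

public class AutoTunedSocketSketch {
  public static void main(String[] args) throws Exception {
    Socket s = new Socket();
    // Deliberately do NOT call s.setSendBufferSize(128 * 1024) or
    // s.setReceiveBufferSize(...): leaving them unset lets the Linux kernel
    // auto-tune the TCP window toward the path's bandwidth-delay product.
    s.connect(new InetSocketAddress("datanode.example.com", 9866));
    System.out.println("negotiated recv buffer: " + s.getReceiveBufferSize());
    s.close();
  }
}
{code}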



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11234) distcp performance is suboptimal for high bandwidth/high latency setups

2016-12-12 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-11234:
-
Assignee: Suresh Bahuguna

> distcp performance is suboptimal for high bandwidth/high latency setups
> ---
>
> Key: HDFS-11234
> URL: https://issues.apache.org/jira/browse/HDFS-11234
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.7.1
>Reporter: Suresh Bahuguna
>Assignee: Suresh Bahuguna
>
> Because distcp uses a TCP socket with the buffer size set to 128K, throughput 
> is quite poor on setups with very high bandwidth but also very high latency. 
> This is because TCP stops sending more data until it receives ACKs. By not 
> setting the socket size and letting the Linux kernel manage the socket, we 
> should be able to get optimal performance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11164) Mover should avoid unnecessary retries if the block is pinned

2016-12-12 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15742764#comment-15742764
 ] 

Rakesh R commented on HDFS-11164:
-

Thanks a lot [~surendrasingh] for the test feedback.

IMHO, this information does not need to be kept at the NN, and the first 
failure is OK considering the additional cost of maintaining the block-pinned 
info at the NN server, which would also have to be synced up during DN restarts 
via block reports etc. Also, I've gone through the HDFS-6133 comments and [saw 
some interesting discussion about the design of block 
pinning|https://issues.apache.org/jira/browse/HDFS-6133?focusedCommentId=13984113&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13984113].
 Do you agree to go ahead with the approach proposed in the patch?

> Mover should avoid unnecessary retries if the block is pinned
> -
>
> Key: HDFS-11164
> URL: https://issues.apache.org/jira/browse/HDFS-11164
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-11164-00.patch, HDFS-11164-01.patch, 
> HDFS-11164-02.patch, HDFS-11164-03.patch
>
>
> When the mover tries to move a pinned block to another datanode, it 
> internally hits the following IOException and marks the block movement as a 
> {{failure}}. Since the Mover has the {{dfs.mover.retry.max.attempts}} config, 
> it will keep retrying this block until it reaches {{retryMaxAttempts}}. If the 
> block movement failures are only due to block pinning, the retries are 
> unnecessary. The idea of this jira is to avoid retry attempts for pinned 
> blocks, as they will never be able to move to a different node. 
> {code}
> 2016-11-22 10:56:10,537 WARN 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher: Failed to move 
> blk_1073741825_1001 with size=52 from 127.0.0.1:19501:DISK to 
> 127.0.0.1:19758:ARCHIVE through 127.0.0.1:19501
> java.io.IOException: Got error, status=ERROR, status message opReplaceBlock 
> BP-1772076264-10.252.146.200-1479792322960:blk_1073741825_1001 received 
> exception java.io.IOException: Got error, status=ERROR, status message Not 
> able to copy block 1073741825 to /127.0.0.1:19826 because it's pinned , copy 
> block BP-1772076264-10.252.146.200-1479792322960:blk_1073741825_1001 from 
> /127.0.0.1:19501, reportedBlock move is failed
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:118)
>   at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.receiveResponse(Dispatcher.java:417)
>   at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.dispatch(Dispatcher.java:358)
>   at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.access$5(Dispatcher.java:322)
>   at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$1.run(Dispatcher.java:1075)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11164) Mover should avoid unnecessary retries if the block is pinned

2016-12-12 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15742754#comment-15742754
 ] 

Rakesh R commented on HDFS-11164:
-

Thank you [~umamaheswararao] for the reviews.

> Mover should avoid unnecessary retries if the block is pinned
> -
>
> Key: HDFS-11164
> URL: https://issues.apache.org/jira/browse/HDFS-11164
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-11164-00.patch, HDFS-11164-01.patch, 
> HDFS-11164-02.patch, HDFS-11164-03.patch
>
>
> When the mover tries to move a pinned block to another datanode, it 
> internally hits the following IOException and marks the block movement as a 
> {{failure}}. Since the Mover has the {{dfs.mover.retry.max.attempts}} config, 
> it will keep retrying this block until it reaches {{retryMaxAttempts}}. If the 
> block movement failures are only due to block pinning, the retries are 
> unnecessary. The idea of this jira is to avoid retry attempts for pinned 
> blocks, as they will never be able to move to a different node. 
> {code}
> 2016-11-22 10:56:10,537 WARN 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher: Failed to move 
> blk_1073741825_1001 with size=52 from 127.0.0.1:19501:DISK to 
> 127.0.0.1:19758:ARCHIVE through 127.0.0.1:19501
> java.io.IOException: Got error, status=ERROR, status message opReplaceBlock 
> BP-1772076264-10.252.146.200-1479792322960:blk_1073741825_1001 received 
> exception java.io.IOException: Got error, status=ERROR, status message Not 
> able to copy block 1073741825 to /127.0.0.1:19826 because it's pinned , copy 
> block BP-1772076264-10.252.146.200-1479792322960:blk_1073741825_1001 from 
> /127.0.0.1:19501, reportedBlock move is failed
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:118)
>   at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.receiveResponse(Dispatcher.java:417)
>   at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.dispatch(Dispatcher.java:358)
>   at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.access$5(Dispatcher.java:322)
>   at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$1.run(Dispatcher.java:1075)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10958) Add instrumentation hooks around Datanode disk IO

2016-12-12 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15742723#comment-15742723
 ] 

Arpit Agarwal commented on HDFS-10958:
--

Thanks [~xyao], TestDFSClientRetries is unrelated. I ran it multiple times 
locally with my patch and it passed.

> Add instrumentation hooks around Datanode disk IO
> -
>
> Key: HDFS-10958
> URL: https://issues.apache.org/jira/browse/HDFS-10958
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Xiaoyu Yao
>Assignee: Arpit Agarwal
> Attachments: HDFS-10958.01.patch, HDFS-10958.02.patch, 
> HDFS-10958.03.patch, HDFS-10958.04.patch
>
>
> This ticket is opened to add instrumentation hooks around Datanode disk IO 
> based on refactor work from HDFS-10930.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10684) WebHDFS DataNode calls fail without parameter createparent

2016-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15742715#comment-15742715
 ] 

Hadoop QA commented on HDFS-10684:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
55s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m  5s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}123m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.TestSecureEncryptionZoneWithKMS |
|   | hadoop.hdfs.TestTrashWithSecureEncryptionZones |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-10684 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842820/HDFS-10684.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5068d1679936 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f66f618 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17839/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17839/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client 
hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17839/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HDFS-11188) Change min supported DN and NN versions back to 2.x

2016-12-12 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15742655#comment-15742655
 ] 

Andrew Wang commented on HDFS-11188:


Hi [~yzhangal], could I get a re-review?

> Change min supported DN and NN versions back to 2.x
> ---
>
> Key: HDFS-11188
> URL: https://issues.apache.org/jira/browse/HDFS-11188
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rolling upgrades
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Critical
> Attachments: HDFS-11188.001.patch
>
>
> This is the inverse of HDFS-10398 and HADOOP-13142. Currently, trunk requires 
> a software DN and NN version of 3.0.0-alpha1. This means we cannot perform a 
> rolling upgrade from 2.x to 3.x.
> The first step towards supporting rolling upgrade is changing these back to a 
> 2.x version. For reference, branch-2 has these versions set to "2.1.0-beta".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11160) VolumeScanner reports write-in-progress replicas as corrupt incorrectly

2016-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15742599#comment-15742599
 ] 

Hadoop QA commented on HDFS-11160:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 187 unchanged - 1 fixed = 189 total (was 188) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 58s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 87m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.TestSecureEncryptionZoneWithKMS |
|   | hadoop.hdfs.TestTrashWithSecureEncryptionZones |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11160 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842819/HDFS-11160.006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3c4530319c92 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f66f618 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17840/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17840/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17840/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17840/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> VolumeScanner reports write-in-progress replicas as corrupt incorrectly

[jira] [Commented] (HDFS-11123) [SPS] Make storage policy satisfier daemon work on/off dynamically

2016-12-12 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15742556#comment-15742556
 ] 

Rakesh R commented on HDFS-11123:
-

It seems Jenkins did not pick up the latest patch. I've triggered Jenkins 
manually to get the report.

> [SPS] Make storage policy satisfier daemon work on/off dynamically
> --
>
> Key: HDFS-11123
> URL: https://issues.apache.org/jira/browse/HDFS-11123
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-10285-HDFS-11123.00.patch, 
> HDFS-11123-HDFS-10285-00.patch
>
>
> The idea of this task is to make the SPS daemon thread start/stop dynamically 
> in the Namenode process without needing to restart the complete Namenode.
> This will help when an admin wants to switch off SPS and run the Mover tool 
> externally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11123) [SPS] Make storage policy satisfier daemon work on/off dynamically

2016-12-12 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15742548#comment-15742548
 ] 

Rakesh R commented on HDFS-11123:
-

Thanks [~umamaheswararao] for the patch. Just two comments:
# Can we avoid adding {{blksMovementResults}} to the monitor thread if SPS is 
not running?
{code}
FSNamesystem#handleHeartbeat
  if (blockManager.getStoragePolicySatisfier() != null) {
blockManager.getStoragePolicySatisfier()
.handleBlocksStorageMovementResults(blksMovementResults);
  }
{code}
# It would be good to add debug logs here.
{code}
+if (sps == null || sps.isRunning()) {
+  return;
+}
{code}
{code}
+if (sps == null) {
+  return;
+}
{code}
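
For illustration, the first guard with a debug log might look like the sketch 
below (the log wording and surrounding names are assumptions, not part of the 
patch):
{code}
// Sketch only: explain why we bail out early instead of returning silently.
if (sps == null || sps.isRunning()) {
  if (LOG.isDebugEnabled()) {
    LOG.debug("Skipping block storage movement handling: satisfier is "
        + (sps == null ? "disabled" : "already running"));
  }
  return;
}
{code}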

> [SPS] Make storage policy satisfier daemon work on/off dynamically
> --
>
> Key: HDFS-11123
> URL: https://issues.apache.org/jira/browse/HDFS-11123
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-10285-HDFS-11123.00.patch, 
> HDFS-11123-HDFS-10285-00.patch
>
>
> The idea of this task is to make the SPS daemon thread start/stop dynamically 
> in the Namenode process without needing to restart the complete Namenode.
> This will help when an admin wants to switch off SPS and run the Mover tool 
> externally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11123) [SPS] Make storage policy satisfier daemon work on/off dynamically

2016-12-12 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15742543#comment-15742543
 ] 

Rakesh R commented on HDFS-11123:
-

Yes, good point.

> [SPS] Make storage policy satisfier daemon work on/off dynamically
> --
>
> Key: HDFS-11123
> URL: https://issues.apache.org/jira/browse/HDFS-11123
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-10285-HDFS-11123.00.patch, 
> HDFS-11123-HDFS-10285-00.patch
>
>
> The idea of this task is to make the SPS daemon thread start/stop dynamically 
> in the Namenode process without needing to restart the complete Namenode.
> This will help when an admin wants to switch off SPS and run the Mover tool 
> externally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10958) Add instrumentation hooks around Datanode disk IO

2016-12-12 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15742533#comment-15742533
 ] 

Xiaoyu Yao commented on HDFS-10958:
---

Thanks [~arpitagarwal] for updating the patch. I'll review the latest patch 
today. 

The following three unit test failures are tracked by HADOOP-13980. 
{code}
hadoop.security.token.delegation.web.TestWebDelegationToken
hadoop.hdfs.TestSecureEncryptionZoneWithKMS
hadoop.hdfs.TestTrashWithSecureEncryptionZones
{code}

Can you confirm whether the following unit test failure is related to this 
patch?
{code}
hadoop.hdfs.TestDFSClientRetries
{code}

> Add instrumentation hooks around Datanode disk IO
> -
>
> Key: HDFS-10958
> URL: https://issues.apache.org/jira/browse/HDFS-10958
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Xiaoyu Yao
>Assignee: Arpit Agarwal
> Attachments: HDFS-10958.01.patch, HDFS-10958.02.patch, 
> HDFS-10958.03.patch, HDFS-10958.04.patch
>
>
> This ticket is opened to add instrumentation hooks around Datanode disk IO 
> based on refactor work from HDFS-10930.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11193) [SPS]: Erasure coded files should be considered for satisfying storage policy

2016-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15742457#comment-15742457
 ] 

Hadoop QA commented on HDFS-11193:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 2s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 260 unchanged - 2 fixed = 260 total (was 262) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 54m 38s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestFileChecksum |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.TestMaintenanceState |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-11193 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842814/HDFS-11193-HDFS-10285-02.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux a4c3667cfa70 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10285 / 94e3583 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17838/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17838/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17838/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [SPS]: Erasure coded files should be considered for satisfying storage policy

[jira] [Updated] (HDFS-10684) WebHDFS DataNode calls fail without parameter createparent

2016-12-12 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-10684:
--
Attachment: HDFS-10684.004.patch

Patch 004:
* Move added tests into a new method {{testDatanodeCreateMissingParameter}} to 
avoid a checkstyle error

> WebHDFS DataNode calls fail without parameter createparent
> --
>
> Key: HDFS-10684
> URL: https://issues.apache.org/jira/browse/HDFS-10684
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.7.1
>Reporter: Samuel Low
>Assignee: John Zhuge
>Priority: Blocker
>  Labels: compatibility, webhdfs
> Attachments: HDFS-10684.001-branch-2.patch, 
> HDFS-10684.002-branch-2.patch, HDFS-10684.003.patch, HDFS-10684.004.patch
>
>
> Optional boolean parameters that are not provided in the URL cause the 
> WebHDFS create file command to fail.
> curl -i -X PUT 
> "http://hadoop-primarynamenode:50070/webhdfs/v1/tmp/test1234?op=CREATE&overwrite=false";
> Response:
> HTTP/1.1 307 TEMPORARY_REDIRECT
> Cache-Control: no-cache
> Expires: Fri, 15 Jul 2016 04:10:13 GMT
> Date: Fri, 15 Jul 2016 04:10:13 GMT
> Pragma: no-cache
> Expires: Fri, 15 Jul 2016 04:10:13 GMT
> Date: Fri, 15 Jul 2016 04:10:13 GMT
> Pragma: no-cache
> Content-Type: application/octet-stream
> Location: 
> http://hadoop-datanode1:50075/webhdfs/v1/tmp/test1234?op=CREATE&namenoderpcaddress=hadoop-primarynamenode:8020&overwrite=false
> Content-Length: 0
> Server: Jetty(6.1.26)
> Following the redirect:
> curl -i -X PUT -T MYFILE 
> "http://hadoop-datanode1:50075/webhdfs/v1/tmp/test1234?op=CREATE&namenoderpcaddress=hadoop-primarynamenode:8020&overwrite=false";
> Response:
> HTTP/1.1 100 Continue
> HTTP/1.1 400 Bad Request
> Content-Type: application/json; charset=utf-8
> Content-Length: 162
> Connection: close
> 
> {"RemoteException":{"exception":"IllegalArgumentException","javaClassName":"java.lang.IllegalArgumentException","message":"Failed
>  to parse \"null\" to Boolean."}}
> The problem can be circumvented by providing both "createparent" and 
> "overwrite" parameters.
> However, this is not possible when I have no control over the WebHDFS calls, 
> e.g. Ambari and Hue have errors due to this.
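
For illustration, a defensive way to parse such an optional boolean query 
parameter might look like the following sketch (the method name and behavior 
are assumptions, not the actual WebHDFS code):
{code}
// Sketch only: treat a missing or literal "null" value as the documented
// default instead of failing the whole request.
static boolean parseOptionalBoolean(String raw, boolean defaultValue) {
  if (raw == null || raw.isEmpty() || "null".equals(raw)) {
    return defaultValue;
  }
  if (!"true".equalsIgnoreCase(raw) && !"false".equalsIgnoreCase(raw)) {
    throw new IllegalArgumentException(
        "Failed to parse \"" + raw + "\" to Boolean.");
  }
  return Boolean.parseBoolean(raw);
}
{code}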



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11160) VolumeScanner reports write-in-progress replicas as corrupt incorrectly

2016-12-12 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-11160:
---
Attachment: HDFS-11160.006.patch

v006 patch: throw an IOException instead of returning null if the checksum 
cannot be read from the meta file.
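
A minimal sketch of that behavior change (simplified names, not the actual 
patch):
{code}
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

// Sketch only: surface a failed checksum read as an IOException so callers
// never see a null checksum.
static byte[] readLastChecksum(File metaFile, int checksumSize)
    throws IOException {
  try (RandomAccessFile raf = new RandomAccessFile(metaFile, "r")) {
    if (raf.length() < checksumSize) {
      throw new IOException(
          "Meta file too short to hold a checksum: " + metaFile);
    }
    byte[] buf = new byte[checksumSize];
    raf.seek(raf.length() - checksumSize);
    raf.readFully(buf);
    return buf;
  }
}
{code}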

> VolumeScanner reports write-in-progress replicas as corrupt incorrectly
> ---
>
> Key: HDFS-11160
> URL: https://issues.apache.org/jira/browse/HDFS-11160
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
> Environment: CDH5.7.4
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-11160.001.patch, HDFS-11160.002.patch, 
> HDFS-11160.003.patch, HDFS-11160.004.patch, HDFS-11160.005.patch, 
> HDFS-11160.006.patch, HDFS-11160.branch-2.patch, HDFS-11160.reproduce.patch
>
>
> Due to a race condition initially reported in HDFS-6804, VolumeScanner may 
> erroneously detect good replicas as corrupt. This is serious because in some 
> cases it results in data loss if all replicas are declared corrupt. This bug 
> is especially prominent when there are a lot of append requests via 
> HttpFs/WebHDFS.
> We are investigating an incident that caused a very high block corruption 
> rate in a relatively small cluster. Initially, we thought HDFS-11056 was to 
> blame. However, after applying HDFS-11056, we are still seeing VolumeScanner 
> report corrupt replicas.
> It turns out that if a replica is being appended to while VolumeScanner is 
> scanning it, VolumeScanner may use the new checksum to compare against old 
> data, causing a checksum mismatch.
> I have a unit test to reproduce the error; will attach later. A quick and 
> simple fix is to hold the FsDatasetImpl lock and read the checksum from disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11237) NameNode reports incorrect file size

2016-12-12 Thread Erik Bergenholtz (JIRA)
Erik Bergenholtz created HDFS-11237:
---

 Summary: NameNode reports incorrect file size
 Key: HDFS-11237
 URL: https://issues.apache.org/jira/browse/HDFS-11237
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.1
Reporter: Erik Bergenholtz


The HDFS file /data/app-logs/log is continuously being written to by a YARN 
process.

However, checking the file size through
hadoop fs -du /data/app-logs/log shows an incorrect file size after a few 
minutes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11226) Storagepolicy command is not working with "-fs" option

2016-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15742295#comment-15742295
 ] 

Hadoop QA commented on HDFS-11226:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 33 unchanged - 0 fixed = 34 total (was 33) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}111m 11s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}131m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.checker.TestThrottledAsyncChecker |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.TestSecureEncryptionZoneWithKMS |
|   | hadoop.hdfs.TestTrashWithSecureEncryptionZones |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11226 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842802/HDFS-11226-003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8bc52d7c533a 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 4c38f11 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17837/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17837/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17837/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17837/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HDFS-11232) System.err should be System.out

2016-12-12 Thread Ewan Higgs (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15742289#comment-15742289
 ] 

Ewan Higgs commented on HDFS-11232:
---

{quote}Hi. Can anyone help me with the checkstyle problem? I don't know what it 
means. I tested the patch in my laptop. The same error. I thought it might be 
something wrong with my laptop..However...{quote}

Hadoop indentation uses 2 spaces per level, and {{switch}} and {{case}} 
statements should be at the same level. The whole block you edited has 
incorrect indentation since the case statements are indented; your changed 
line ends up indented two more spaces beyond the case statement.

e.g. take a look at {{checkHaStateChange}} in the same file.

To fix this, either ignore the checkstyle error, since your net change in 
style issues is 0, or fix the whole switch statement. I can't say which would 
be better. I'm a fan of constant gardening, but some people might have pending 
patches that could conflict with a largish whitespace change (surely, patch 
tools can manage this...).
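
For reference, a minimal sketch of the expected layout (the enum values and 
method names are made up):
{code}
// Hadoop style: 2-space indent; "case" sits at the same level as "switch".
switch (action) {
case ENTER:
  enterSafeMode();
  break;
case LEAVE:
  leaveSafeMode();
  break;
default:
  throw new IllegalArgumentException("Unexpected action: " + action);
}
{code}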

> System.err should be System.out
> ---
>
> Key: HDFS-11232
> URL: https://issues.apache.org/jira/browse/HDFS-11232
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Ethan Li
>Priority: Trivial
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11232.001.patch
>
>
> In 
> /Users/Ethan/Worksplace/IntelliJWorkspace/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java,
>  System.err.println("Generating new cluster id:"); is used. I think it should 
> be System.out.println(...), since this is not an error message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11193) [SPS]: Erasure coded files should be considered for satisfying storage policy

2016-12-12 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-11193:

Attachment: HDFS-11193-HDFS-10285-02.patch

> [SPS]: Erasure coded files should be considered for satisfying storage policy
> -
>
> Key: HDFS-11193
> URL: https://issues.apache.org/jira/browse/HDFS-11193
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-11193-HDFS-10285-00.patch, 
> HDFS-11193-HDFS-10285-01.patch, HDFS-11193-HDFS-10285-02.patch
>
>
> Erasure coded striped files support the storage policies {{HOT, COLD, ALLSSD}}. 
> An {{HdfsAdmin#satisfyStoragePolicy}} API call on a directory should consider 
> all immediate files under that directory and check whether the files really 
> match the namespace storage policy. All the mismatched striped blocks should 
> be chosen for block movement.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11123) [SPS] Make storage policy satisfier daemon work on/off dynamically

2016-12-12 Thread Wei Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15742018#comment-15742018
 ] 

Wei Zhou commented on HDFS-11123:
-

Hi [~umamaheswararao], thanks for the contribution! After looking at your 
patch, I think that 
[HDFS-11186|https://issues.apache.org/jira/browse/HDFS-11186] should be done 
based on this JIRA, since the Active NN and Standby NN can transition to each 
other dynamically, so it's better to have this as the base. What's your 
opinion? 

> [SPS] Make storage policy satisfier daemon work on/off dynamically
> --
>
> Key: HDFS-11123
> URL: https://issues.apache.org/jira/browse/HDFS-11123
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-10285-HDFS-11123.00.patch, 
> HDFS-11123-HDFS-10285-00.patch
>
>
> The idea of this task is to make the SPS daemon thread start/stop dynamically 
> in the Namenode process without needing to restart the complete Namenode.
> This will help when an admin wants to switch off SPS and run the Mover tool 
> externally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11226) Storagepolicy command is not working with "-fs" option

2016-12-12 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-11226:

Attachment: HDFS-11226-003.patch

Uploaded the patch to fix the same issue in {{cacheadmin}} and 
{{cryptoadmin}}. Since it's related, I handled it as part of this JIRA; maybe 
I can update the JIRA summary.

> Storagepolicy command is not working with "-fs" option
> --
>
> Key: HDFS-11226
> URL: https://issues.apache.org/jira/browse/HDFS-11226
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Archana T
>Assignee: Brahma Reddy Battula
>Priority: Minor
> Attachments: HDFS-11126.patch, HDFS-11226-002.patch, 
> HDFS-11226-003.patch
>
>
> When the StoragePolicy command is used with the -fs option, the following 
> error is thrown --
>  {color: red} hdfs storagepolicies -fs hdfs://hacluster -listPolicies
> Can't understand command '-fs' {color}
> Usage: bin/hdfs storagepolicies 
> 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11195) When appending files by webhdfs rest api fails, it returns 200

2016-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15741794#comment-15741794
 ] 

Hadoop QA commented on HDFS-11195:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m  0s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}109m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestTrashWithSecureEncryptionZones |
|   | hadoop.hdfs.TestSecureEncryptionZoneWithKMS |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.server.datanode.checker.TestThrottledAsyncChecker |
|   | hadoop.hdfs.TestErasureCodeBenchmarkThroughput |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11195 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842777/HDFS-11195.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 728b7e8b1cf1 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 4c38f11 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17835/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17835/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17835/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> When appending files by webhdfs rest api fails, it returns 200
> ---

[jira] [Commented] (HDFS-11226) Storagepolicy command is not working with "-fs" option

2016-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15741796#comment-15741796
 ] 

Hadoop QA commented on HDFS-11226:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 28s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 81m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestTrashWithSecureEncryptionZones |
|   | hadoop.hdfs.TestSecureEncryptionZoneWithKMS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11226 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842783/HDFS-11226-002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux dbe88ba50de1 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 4c38f11 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17836/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17836/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17836/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Storagepolicy command is not working with "-fs" option
> --
>
> Key: HDFS-11226
> URL: https://issues.apache.org/jira/browse/HDFS-11226
>

[jira] [Commented] (HDFS-8411) Add bytes count metrics to datanode for ECWorker

2016-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15741789#comment-15741789
 ] 

Hadoop QA commented on HDFS-8411:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 74 unchanged - 0 fixed = 76 total (was 74) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 41s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestTrashWithSecureEncryptionZones |
|   | hadoop.hdfs.TestSecureEncryptionZoneWithKMS |
|   | hadoop.hdfs.TestEncryptionZones |
|   | hadoop.hdfs.TestFileAppend3 |
|   | hadoop.hdfs.server.datanode.TestLargeBlockReport |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-8411 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842778/HDFS-8411-012.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux be57f809764e 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 4c38f11 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17834/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17834/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17834/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17834/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Comment Edited] (HDFS-11072) Add ability to unset and change directory EC policy

2016-12-12 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15741637#comment-15741637
 ] 

SammiChen edited comment on HDFS-11072 at 12/12/16 11:18 AM:
-

Hi Andrew, thanks for sharing your thoughts. Regarding redoing the policy on 
a directory tree: even if we give the user knowledge of whether a policy is 
inherited or not, the user still needs to go through the tree and undo the 
policies one by one, because a subdirectory can override its parent 
directory's policy with its own, unless we had a feature like "replace all 
children with this directory's policy", which is not feasible in a 
distributed environment. For distcp, how about adding an option to explicitly 
preserve the inherited policy (erasure coding policy or storage policy)? Just 
a thought; I'm not sure whether this would introduce massive complexity into 
distcp's implementation. 

I'm glad you also like the idea of introducing a new API. So, for erasure 
coding policy, there will be 4 APIs. 
1. setErasureCodingPolicy  
    set the EC policy on a directory
2. removeErasureCodingPolicy
    remove the policy (EC or replication) on a directory; after removal, the 
directory goes back to inheriting from its parent directory (the word 
"remove" is used more often than "unset" in DistributedFileSystem API names)
3. setDefaultReplicationPolicy
    set replication on a directory; this is only useful when the user wants 
the directory to stop inheriting its parent's EC policy
4. getErasureCodingPolicy
    return the policy set by setErasureCodingPolicy

But even if we introduce a new API to handle the replication case, it's still 
somewhat complicated; the complexity comes from the "replication" policy. To 
my limited knowledge, EC is suggested for cold data and replication for hot 
data. Setting replication on a subdirectory under a parent EC directory is 
useful when cold data becomes hot again, right? But I don't know how common 
that scenario is, or whether it is worth the complexity of handling it. So 
another idea is to introduce the "replication" policy later, once we know it 
is very useful to end users. 

Anyway, I'm OK with the 4-API solution. I just want to make sure we are on 
the same page before I start to refine the patch. A sketch of the proposed 
API follows. 
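
For concreteness, here is the proposal gathered into one sketch; the 
signatures are only my reading of the discussion above, not committed API, 
and setDefaultReplicationPolicy in particular does not exist today:
{code}
import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;

// Sketch only: the four proposed directory-level operations.
public interface ErasureCodingAdminSketch {
  void setErasureCodingPolicy(Path dir, String ecPolicyName) throws IOException;
  // After removal the directory falls back to inheriting from its parent.
  void removeErasureCodingPolicy(Path dir) throws IOException;
  // Pin plain replication so the directory stops inheriting an EC policy.
  void setDefaultReplicationPolicy(Path dir) throws IOException;
  ErasureCodingPolicy getErasureCodingPolicy(Path dir) throws IOException;
}
{code}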



same page before I start to refine the patch. 


> Add ability to unset and change directory EC policy
> ---
>
> Key: HDFS-11072
> URL: https://issues.apache.org/jira/browse/HDFS-11072
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11072-v1.patch, HDFS-11072-v2.patch, 
> HDFS-11072-v3.patch, HDFS-11072-v4.patch
>
>
> Since the directory-level EC policy simply applies to files at create time, 
> it makes sense to make it more similar to storage policies and allow changing 
> and unsetting the policy.

[jira] [Updated] (HDFS-11226) Storagepolicy command is not working with "-fs" option

2016-12-12 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-11226:

Attachment: HDFS-11226-002.patch

Updated the patch to fix the help message.

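For background on how {{-fs}} is normally supported: generic options are 
parsed by {{GenericOptionsParser}} when a command is launched through 
{{ToolRunner}}, before the tool sees its own arguments. A minimal sketch of 
that pattern (illustrative only, not the actual fix in this patch):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Sketch: a Tool launched via ToolRunner gets "-fs hdfs://hacluster"
// parsed into its Configuration as fs.defaultFS, so run() only sees
// the remaining command-specific arguments such as -listPolicies.
public class AdminToolSketch extends Configured implements Tool {
  @Override
  public int run(String[] args) throws Exception {
    System.out.println("fs.defaultFS = " + getConf().get("fs.defaultFS"));
    return 0;
  }

  public static void main(String[] args) throws Exception {
    System.exit(
        ToolRunner.run(new Configuration(), new AdminToolSketch(), args));
  }
}
{code}
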
> Storagepolicy command is not working with "-fs" option
> --
>
> Key: HDFS-11226
> URL: https://issues.apache.org/jira/browse/HDFS-11226
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Archana T
>Assignee: Brahma Reddy Battula
>Priority: Minor
> Attachments: HDFS-11126.patch, HDFS-11226-002.patch
>
>
> When the storagepolicies command is used with the -fs option, the 
> following error is thrown:
>  {color: red} hdfs storagepolicies -fs hdfs://hacluster -listPolicies
> Can't understand command '-fs' {color}
> Usage: bin/hdfs storagepolicies 
> 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11072) Add ability to unset and change directory EC policy

2016-12-12 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15741637#comment-15741637
 ] 

SammiChen edited comment on HDFS-11072 at 12/12/16 11:09 AM:
-

Hi Andrew, thanks for sharing your thoughts. Regarding redoing a policy on a 
directory tree: even if we tell users whether a policy is inherited or not, 
they still need to walk the tree and undo the policies one by one, because a 
sub-directory can override its parent directory's policy with its own. The 
alternative would be a feature like "replace all children with this 
directory's policy", which is not feasible in a distributed environment. For 
distcp, how about adding an option to explicitly preserve inherited policies 
(erasure coding policy or storage policy)? Just a thought; I'm not sure 
whether this would introduce massive complexity into distcp's implementation.

I'm glad you also like the idea of introducing a new API. So, for erasure 
coding policy, there will be four APIs:
1. setErasureCodingPolicy
set the EC policy on a directory
2. removeErasureCodingPolicy
remove the policy (EC or replication) on a directory; after removal, the 
directory goes back to inheriting from its parent directory (the word 
"remove" is used more often than "unset" in DistributedFileSystem API names)
3. setDefaultReplicationPolicy
set replication on a directory. This is only useful when the user wants the 
directory to stop inheriting its parent's EC policy.
4. getErasureCodingPolicy
return the policy set by setErasureCodingPolicy

But even if we introduce a new API to handle the replication case, it is 
still somewhat complicated. The complexity comes from the "replication" 
policy. To my limited knowledge, EC is suggested for cold data and 
replication for hot data. Setting replication on a sub-directory under an EC 
parent directory is useful when cold data turns hot again, right? But I don't 
know how often that scenario occurs, or whether it is worth the complexity of 
handling it.

Anyway, I'm OK with the four-API solution. Just want to make sure we are on 
the same page before I start to refine the patch.



was (Author: sammi):
Hi Andrew, thanks for sharing your thoughts. Regarding redoing a policy on a 
directory tree: even if we tell users whether a policy is inherited or not, 
they still need to walk the tree and undo the policies one by one, because a 
sub-directory can override its parent directory's policy with its own. The 
alternative would be a feature like "replace all children with this 
directory's policy", which is not feasible in a distributed environment. For 
distcp, how about adding an option to explicitly preserve inherited policies 
(erasure coding policy or storage policy)? Just a thought; I'm not sure 
whether this would introduce massive complexity into distcp's implementation.

I'm glad you also like the idea of introducing a new API. So, for erasure 
coding policy, there will be four APIs:
1. setErasureCodingPolicy
set the EC policy on a directory
2. removeErasureCodingPolicy
remove the policy (EC or replication) on a directory; after removal, the 
directory goes back to inheriting from its parent directory (the word 
"remove" is used more often in DistributedFileSystem API names)
3. setDefaultReplicationPolicy
set replication on a directory. This is only useful when the user wants the 
directory to stop inheriting its parent's EC policy.
4. getErasureCodingPolicy
return the policy set by setErasureCodingPolicy

But even if we introduce a new API to handle the replication case, it is 
still somewhat complicated. The complexity comes from the "replication" 
policy. To my limited knowledge, EC is suggested for cold data and 
replication for hot data. Setting replication on a sub-directory under an EC 
parent directory is useful when cold data turns hot again, right? But I don't 
know how often that scenario occurs, or whether it is worth the complexity of 
handling it.

Anyway, I'm OK with the four-API solution. Just want to make sure we are on 
the same page before I start to refine the patch.


> Add ability to unset and change directory EC policy
> ---
>
> Key: HDFS-11072
> URL: https://issues.apache.org/jira/browse/HDFS-11072
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11072-v1.patch, HDFS-11072-v2.patch, 
> HDFS-11072-v3.patch, HDFS-11072-v4.patch
>
>
> Since the directory-level EC policy simply applies to files at create time, 
> it makes sense to make it more similar to storage policies and allow changing 
> and unsetting the policy.



-

[jira] [Comment Edited] (HDFS-11072) Add ability to unset and change directory EC policy

2016-12-12 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15741637#comment-15741637
 ] 

SammiChen edited comment on HDFS-11072 at 12/12/16 11:08 AM:
-

Hi Andrew, thanks for sharing your thoughts. Regarding redoing a policy on a 
directory tree: even if we tell users whether a policy is inherited or not, 
they still need to walk the tree and undo the policies one by one, because a 
sub-directory can override its parent directory's policy with its own. The 
alternative would be a feature like "replace all children with this 
directory's policy", which is not feasible in a distributed environment. For 
distcp, how about adding an option to explicitly preserve inherited policies 
(erasure coding policy or storage policy)? Just a thought; I'm not sure 
whether this would introduce massive complexity into distcp's implementation.

I'm glad you also like the idea of introducing a new API. So, for erasure 
coding policy, there will be four APIs:
1. setErasureCodingPolicy
set the EC policy on a directory
2. removeErasureCodingPolicy
remove the policy (EC or replication) on a directory; after removal, the 
directory goes back to inheriting from its parent directory (the word 
"remove" is used more often in DistributedFileSystem API names)
3. setDefaultReplicationPolicy
set replication on a directory. This is only useful when the user wants the 
directory to stop inheriting its parent's EC policy.
4. getErasureCodingPolicy
return the policy set by setErasureCodingPolicy

But even if we introduce a new API to handle the replication case, it is 
still somewhat complicated. The complexity comes from the "replication" 
policy. To my limited knowledge, EC is suggested for cold data and 
replication for hot data. Setting replication on a sub-directory under an EC 
parent directory is useful when cold data turns hot again, right? But I don't 
know how often that scenario occurs, or whether it is worth the complexity of 
handling it.

Anyway, I'm OK with the four-API solution. Just want to make sure we are on 
the same page before I start to refine the patch.



was (Author: sammi):
Hi Andrew, thanks for sharing your thoughts. Regarding redoing a policy on a 
directory tree: even if we tell users whether a policy is inherited or not, 
they still need to walk the tree and undo the policies one by one, because a 
sub-directory can override its parent directory's policy with its own. The 
alternative would be a feature like "replace all children with this 
directory's policy", which is not feasible in a distributed environment. For 
distcp, how about adding an option to explicitly preserve inherited policies 
(erasure coding policy or storage policy)? Just a thought; I'm not sure 
whether this would introduce massive complexity into distcp's implementation.

I'm glad you also like the idea of introducing a new API. So, for erasure 
coding policy, there will be four APIs:
1. setErasureCodingPolicy: set the EC policy on a directory
2. removeErasureCodingPolicy: remove the policy (EC or replication) on a 
directory; after removal, the directory goes back to inheriting from its 
parent directory (the word "remove" is used more often in 
DistributedFileSystem API names)
3. setDefaultReplicationPolicy: set replication on a directory. This is only 
useful when the user wants the directory to stop inheriting its parent's EC 
policy.
4. getErasureCodingPolicy: return the policy set by setErasureCodingPolicy

But even if we introduce a new API to handle the replication case, it is 
still somewhat complicated. The complexity comes from the "replication" 
policy. To my limited knowledge, EC is suggested for cold data and 
replication for hot data. Setting replication on a sub-directory under an EC 
parent directory is useful when cold data turns hot again, right? But I don't 
know how often that scenario occurs, or whether it is worth the complexity of 
handling it.

Anyway, I'm OK with the four-API solution. Just want to make sure we are on 
the same page before I start to refine the patch.


> Add ability to unset and change directory EC policy
> ---
>
> Key: HDFS-11072
> URL: https://issues.apache.org/jira/browse/HDFS-11072
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11072-v1.patch, HDFS-11072-v2.patch, 
> HDFS-11072-v3.patch, HDFS-11072-v4.patch
>
>
> Since the directory-level EC policy simply applies to files at create time, 
> it makes sense to make it more similar to storage policies and allow changing 
> and unsetting the policy.




[jira] [Commented] (HDFS-11072) Add ability to unset and change directory EC policy

2016-12-12 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15741637#comment-15741637
 ] 

SammiChen commented on HDFS-11072:
--

Hi Andrew, thanks for sharing your thoughts. Regarding redoing a policy on a 
directory tree: even if we tell users whether a policy is inherited or not, 
they still need to walk the tree and undo the policies one by one, because a 
sub-directory can override its parent directory's policy with its own. The 
alternative would be a feature like "replace all children with this 
directory's policy", which is not feasible in a distributed environment. For 
distcp, how about adding an option to explicitly preserve inherited policies 
(erasure coding policy or storage policy)? Just a thought; I'm not sure 
whether this would introduce massive complexity into distcp's implementation.

I'm glad you also like the idea of introducing a new API. So, for erasure 
coding policy, there will be four APIs:
1. setErasureCodingPolicy: set the EC policy on a directory
2. removeErasureCodingPolicy: remove the policy (EC or replication) on a 
directory; after removal, the directory goes back to inheriting from its 
parent directory (the word "remove" is used more often in 
DistributedFileSystem API names)
3. setDefaultReplicationPolicy: set replication on a directory. This is only 
useful when the user wants the directory to stop inheriting its parent's EC 
policy.
4. getErasureCodingPolicy: return the policy set by setErasureCodingPolicy

But even if we introduce a new API to handle the replication case, it is 
still somewhat complicated. The complexity comes from the "replication" 
policy. To my limited knowledge, EC is suggested for cold data and 
replication for hot data. Setting replication on a sub-directory under an EC 
parent directory is useful when cold data turns hot again, right? But I don't 
know how often that scenario occurs, or whether it is worth the complexity of 
handling it.

Anyway, I'm OK with the four-API solution. Just want to make sure we are on 
the same page before I start to refine the patch.


> Add ability to unset and change directory EC policy
> ---
>
> Key: HDFS-11072
> URL: https://issues.apache.org/jira/browse/HDFS-11072
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11072-v1.patch, HDFS-11072-v2.patch, 
> HDFS-11072-v3.patch, HDFS-11072-v4.patch
>
>
> Since the directory-level EC policy simply applies to files at create time, 
> it makes sense to make it more similar to storage policies and allow changing 
> and unsetting the policy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8411) Add bytes count metrics to datanode for ECWorker

2016-12-12 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HDFS-8411:

Attachment: HDFS-8411-012.patch

Improved the test case.

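For context, DataNode counters of this kind are typically exposed through 
the metrics2 {{@Metric}} annotation. A rough sketch of byte counters for EC 
reconstruction; the class and counter names are assumptions, not the names 
used in the attached patch:

{code}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

// Sketch only: counter names are illustrative, not those in HDFS-8411.
// The annotated fields are instantiated when this source is registered
// with a MetricsSystem, e.g. DefaultMetricsSystem.instance().register(...).
@Metrics(name = "ECWorkerMetrics", about = "EC reconstruction byte counts",
    context = "dfs")
public class EcWorkerMetricsSketch {
  @Metric("Bytes read locally or remotely for EC reconstruction")
  private MutableCounterLong ecReconstructionBytesRead;

  @Metric("Bytes written locally or remotely for EC reconstruction")
  private MutableCounterLong ecReconstructionBytesWritten;

  void incrBytesRead(long delta) {
    ecReconstructionBytesRead.incr(delta);
  }

  void incrBytesWritten(long delta) {
    ecReconstructionBytesWritten.incr(delta);
  }
}
{code}
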
> Add bytes count metrics to datanode for ECWorker
> 
>
> Key: HDFS-8411
> URL: https://issues.apache.org/jira/browse/HDFS-8411
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: SammiChen
> Attachments: HDFS-8411-001.patch, HDFS-8411-002.patch, 
> HDFS-8411-003.patch, HDFS-8411-004.patch, HDFS-8411-005.patch, 
> HDFS-8411-006.patch, HDFS-8411-007.patch, HDFS-8411-008.patch, 
> HDFS-8411-009.patch, HDFS-8411-011.patch, HDFS-8411-012.patch, 
> HDFS-8411.010.patch
>
>
> This is a sub-task of HDFS-7674. It calculates the amount of data that is 
> read locally or remotely to perform decoding work, and also the amount of 
> data that is written to local or remote datanodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11195) When appending files by webhdfs rest api fails, it returns 200

2016-12-12 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-11195:
--
Attachment: HDFS-11195.002.patch

Uploaded the v2 patch for this issue.

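For anyone reproducing this without curl: WebHDFS append is a documented 
two-step handshake, where a POST to the NameNode returns a 307 redirect with 
a DataNode {{Location}}, and the data is then POSTed there. A rough probe of 
the reported behavior; host, port, path, and user below are placeholders:

{code}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Probe sketch: per this report, the final POST can return 200 even
// when the DataNode-side append actually failed.
public class WebHdfsAppendProbe {
  public static void main(String[] args) throws Exception {
    URL nn = new URL(
        "http://namenode:50070/webhdfs/v1/tmp/file?op=APPEND&user.name=hdfs");
    HttpURLConnection step1 = (HttpURLConnection) nn.openConnection();
    step1.setRequestMethod("POST");
    step1.setInstanceFollowRedirects(false);  // expect 307 to a DataNode
    String dnLocation = step1.getHeaderField("Location");
    step1.disconnect();

    HttpURLConnection step2 =
        (HttpURLConnection) new URL(dnLocation).openConnection();
    step2.setRequestMethod("POST");
    step2.setDoOutput(true);
    try (OutputStream out = step2.getOutputStream()) {
      out.write("more data\n".getBytes("UTF-8"));
    }
    System.out.println("HTTP status: " + step2.getResponseCode());
  }
}
{code}
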
> When appending files by webhdfs rest api fails, it returns 200
> --
>
> Key: HDFS-11195
> URL: https://issues.apache.org/jira/browse/HDFS-11195
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: HDFS-11195.001.patch, HDFS-11195.002.patch
>
>
> Suppose that there is a Hadoop cluster that contains only one datanode, and 
> dfs.replication=3. Run:
> {code}
> curl -i -X POST -T  
> "http://:/webhdfs/v1/?op=APPEND"
> {code}
> it returns 200, even though the append operation fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11236) Erasure Coding can't support appendToFile

2016-12-12 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15741502#comment-15741502
 ] 

Yuanbo Liu commented on HDFS-11236:
---

[~gehaijiang] Thanks for filing this JIRA.
To my knowledge, it is not practical for EC to support the append operation, 
since the blocks of an EC file are divided into cells that are distributed 
across different nodes, so an in-place append would mean recomputing the 
parity cells of the last stripe.
I'd prefer marking this JIRA as "Won't Fix" (if I'm wrong, please let me know).

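As a back-of-the-envelope illustration of why in-place append is awkward for 
a striped file, assuming the RS-6-3 layout with 64 KB cells shown in the 
{{-getPolicy}} output quoted below:

{code}
// Illustration only: maps the current EOF of a striped file to the cell
// and data unit where appended bytes would land (RS-6-3, 64 KB cells),
// since cells are laid out round-robin across the data units.
public class StripeMath {
  static final long CELL_SIZE = 64 * 1024;
  static final int DATA_UNITS = 6;

  public static void main(String[] args) {
    long eof = 1_000_000;                          // pretend current length
    long cellIndex = eof / CELL_SIZE;              // global cell index
    int dataUnit = (int) (cellIndex % DATA_UNITS); // block-group member
    System.out.printf(
        "Appended bytes start in cell %d on data unit %d; all 3 parity%n"
            + "cells of that stripe must be recomputed in place.%n",
        cellIndex, dataUnit);
  }
}
{code}
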
> Erasure Coding can't support appendToFile
> --
>
> Key: HDFS-11236
> URL: https://issues.apache.org/jira/browse/HDFS-11236
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: gehaijiang
>
> hadoop 3.0.0-alpha1
> $  hdfs erasurecode -getPolicy /ectest/workers
> ErasureCodingPolicy=[Name=RS-DEFAULT-6-3-64k, 
> Schema=[ECSchema=[Codec=rs-default, numDataUnits=6, numParityUnits=3]], 
> CellSize=65536 ]
> $  hadoop fs  -appendToFile  hadoop/etc/hadoop/httpfs-env.sh  /ectest/workers
> appendToFile: Cannot append to files with striped block /ectest/workers



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11236) Erasure Coding can't support appendToFile

2016-12-12 Thread gehaijiang (JIRA)
gehaijiang created HDFS-11236:
-

 Summary: Erasure Coding can't support appendToFile
 Key: HDFS-11236
 URL: https://issues.apache.org/jira/browse/HDFS-11236
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: gehaijiang


hadoop 3.0.0-alpha1

$  hdfs erasurecode -getPolicy /ectest/workers
ErasureCodingPolicy=[Name=RS-DEFAULT-6-3-64k, 
Schema=[ECSchema=[Codec=rs-default, numDataUnits=6, numParityUnits=3]], 
CellSize=65536 ]

$  hadoop fs  -appendToFile  hadoop/etc/hadoop/httpfs-env.sh  /ectest/workers
appendToFile: Cannot append to files with striped block /ectest/workers




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8411) Add bytes count metrics to datanode for ECWorker

2016-12-12 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15741304#comment-15741304
 ] 

SammiChen commented on HDFS-8411:
-

Thanks, Kai, for reviewing the patch! I have uploaded a new patch to address 
2, 3, and 4. For 1, it seems {{bytesRead}} doesn't need a cleanup.



> Add bytes count metrics to datanode for ECWorker
> 
>
> Key: HDFS-8411
> URL: https://issues.apache.org/jira/browse/HDFS-8411
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: SammiChen
> Attachments: HDFS-8411-001.patch, HDFS-8411-002.patch, 
> HDFS-8411-003.patch, HDFS-8411-004.patch, HDFS-8411-005.patch, 
> HDFS-8411-006.patch, HDFS-8411-007.patch, HDFS-8411-008.patch, 
> HDFS-8411-009.patch, HDFS-8411-011.patch, HDFS-8411.010.patch
>
>
> This is a sub-task of HDFS-7674. It calculates the amount of data that is 
> read locally or remotely to perform decoding work, and also the amount of 
> data that is written to local or remote datanodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8411) Add bytes count metrics to datanode for ECWorker

2016-12-12 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HDFS-8411:

Attachment: HDFS-8411-011.patch

> Add bytes count metrics to datanode for ECWorker
> 
>
> Key: HDFS-8411
> URL: https://issues.apache.org/jira/browse/HDFS-8411
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: SammiChen
> Attachments: HDFS-8411-001.patch, HDFS-8411-002.patch, 
> HDFS-8411-003.patch, HDFS-8411-004.patch, HDFS-8411-005.patch, 
> HDFS-8411-006.patch, HDFS-8411-007.patch, HDFS-8411-008.patch, 
> HDFS-8411-009.patch, HDFS-8411-011.patch, HDFS-8411.010.patch
>
>
> This is a sub-task of HDFS-7674. It calculates the amount of data that is 
> read locally or remotely to perform decoding work, and also the amount of 
> data that is written to local or remote datanodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org