[jira] [Updated] (HDFS-10630) Federation State Store FS Implementation

2017-04-06 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HDFS-10630:
---
Attachment: HDFS-10630-HDFS-10467-004.patch

Fixing a compilation issue.

> Federation State Store FS Implementation
> 
>
> Key: HDFS-10630
> URL: https://issues.apache.org/jira/browse/HDFS-10630
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Inigo Goiri
>Assignee: Jason Kace
> Attachments: HDFS-10630.001.patch, HDFS-10630.002.patch, 
> HDFS-10630-HDFS-10467-003.patch, HDFS-10630-HDFS-10467-004.patch
>
>
> Interface to store the federation shared state across Routers.
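For context, here is a minimal sketch of what an FS-backed state store write could look like; the class and method names are hypothetical and the actual HDFS-10467 code may differ. Each record is persisted as a file under a directory shared by all Routers:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hedged sketch, not the HDFS-10467 implementation: persist each state
// store record as a file under a directory all Routers can reach.
public class FileStateStoreSketch {
  private final FileSystem fs;
  private final Path root;

  public FileStateStoreSketch(Configuration conf, Path root) throws Exception {
    this.fs = root.getFileSystem(conf);
    this.root = root;
  }

  // Overwrite the record for (recordClass, key) with new bytes.
  public void putRecord(String recordClass, String key, byte[] data)
      throws Exception {
    Path file = new Path(new Path(root, recordClass), key);
    try (FSDataOutputStream out = fs.create(file, true /* overwrite */)) {
      out.write(data);
    }
  }
}
{code}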






[jira] [Updated] (HDFS-10630) Federation State Store FS Implementation

2017-04-06 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HDFS-10630:
---
Attachment: (was: HDFS-10630-HDFS-10467-004.patch)

> Federation State Store FS Implementation
> 
>
> Key: HDFS-10630
> URL: https://issues.apache.org/jira/browse/HDFS-10630
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Inigo Goiri
>Assignee: Jason Kace
> Attachments: HDFS-10630.001.patch, HDFS-10630.002.patch, 
> HDFS-10630-HDFS-10467-003.patch, HDFS-10630-HDFS-10467-004.patch
>
>
> Interface to store the federation shared state across Routers.






[jira] [Updated] (HDFS-11502) dn.js sets datanode UI to window.location.hostname; it should use a jmx bean property to set up the hostname

2017-04-06 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-11502:
--
Attachment: HDFS-11502.004.patch

Attaching v4 patch for this JIRA; it fixes some minor issues.

> dn.js sets datanode UI to window.location.hostname; it should use a jmx bean 
> property to set up the hostname
> 
>
> Key: HDFS-11502
> URL: https://issues.apache.org/jira/browse/HDFS-11502
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.2, 2.7.3
> Environment: all
>Reporter: Jeffrey E  Rodriguez
>Assignee: Jeffrey E  Rodriguez
> Attachments: HDFS-11502.001.patch, HDFS-11502.002.patch, 
> HDFS-11502.003.patch, HDFS-11502.004.patch
>
>
> The DataNode UI calls "dn.js", which loads properties for the DataNode. "dn.js" 
> sets "data.dn.HostName" to "window.location.hostname"; it should instead use a 
> DataNode property from the JMX beans or another appropriate property. The issue 
> is that if we access the DataNode UI through a proxy, we show the proxy 
> hostname instead of the actual DataNode hostname.
> I am proposing to use the "Hadoop:service=DataNode,name=JvmMetrics" 
> tag.Hostname field to do that.
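As a hedged illustration (not the actual dn.js change), this is how that bean's Hostname tag could be read on the Java side; Hadoop metrics2 exposes record tags as JMX attributes prefixed with "tag.", though the exact attribute name here is an assumption:

{code}
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Hedged sketch: read the DataNode's advertised hostname from the
// JvmMetrics MBean instead of trusting window.location.hostname.
// Only works inside the DataNode JVM where the bean is registered.
public class DnHostnameLookup {
  public static void main(String[] args) throws Exception {
    MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
    ObjectName bean = new ObjectName("Hadoop:service=DataNode,name=JvmMetrics");
    // metrics2 exposes record tags as attributes prefixed with "tag."
    String hostname = (String) mbs.getAttribute(bean, "tag.Hostname");
    System.out.println("DataNode hostname: " + hostname);
  }
}
{code}

dn.js itself would fetch the same bean over the DataNode's /jmx HTTP endpoint rather than through an in-process MBeanServer.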






[jira] [Commented] (HDFS-10630) Federation State Store FS Implementation

2017-04-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15960213#comment-15960213
 ] 

Hadoop QA commented on HDFS-10630:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 8s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} HDFS-10467 passed {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 28s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 37s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 6 new + 402 unchanged - 0 fixed = 408 total (was 402) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
16s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 44s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-10630 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12862425/HDFS-10630-HDFS-10467-004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux aee908804448 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10467 / 0e4661f |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19005/artifact/patchprocess/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19005/artifact/patchprocess/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19005/artifact/patchprocess/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19005/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19005/artifact/patchprocess/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| findbugs | 

[jira] [Commented] (HDFS-10999) Introduce separate stats for Replicated and Erasure Coded Blocks apart from the current Aggregated stats

2017-04-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15960203#comment-15960203
 ] 

Hadoop QA commented on HDFS-10999:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 14 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 42s{color} 
| {color:red} hadoop-hdfs-project generated 37 new + 56 unchanged - 0 fixed = 
93 total (was 56) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 52s{color} | {color:orange} hadoop-hdfs-project: The patch generated 3 new + 
1022 unchanged - 14 fixed = 1025 total (was 1036) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
16s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 23s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
19s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}106m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.cli.TestHDFSCLI |
|   | hadoop.hdfs.server.namenode.TestMetaSave |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-10999 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12862416/HDFS-10999.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 2b2df42e9ef9 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e7167e4 |
| 

[jira] [Updated] (HDFS-10882) Federation State Store Interface API

2017-04-06 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HDFS-10882:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-10467
   Status: Resolved  (was: Patch Available)

Thanks [~jakace] for working on this and [~chris.douglas] and [~subru] for the 
review.

> Federation State Store Interface API
> 
>
> Key: HDFS-10882
> URL: https://issues.apache.org/jira/browse/HDFS-10882
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Jason Kace
>Assignee: Jason Kace
> Fix For: HDFS-10467
>
> Attachments: HDFS-10882-HDFS-10467-001.patch, 
> HDFS-10882-HDFS-10467-002.patch, HDFS-10882-HDFS-10467-003.patch, 
> HDFS-10882-HDFS-10467-004.patch, HDFS-10882-HDFS-10467-005.patch, 
> HDFS-10882-HDFS-10467-006.patch
>
>
> The minimal classes and interfaces required to create state store internal 
> data APIs using protobuf serialization.  This is a pre-requisite for higher 
> level APIs such as the registration API and the mount table API.






[jira] [Updated] (HDFS-10630) Federation State Store FS Implementation

2017-04-06 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HDFS-10630:
---
Attachment: HDFS-10630-HDFS-10467-004.patch

Updated to the latest HDFS-10467.

> Federation State Store FS Implementation
> 
>
> Key: HDFS-10630
> URL: https://issues.apache.org/jira/browse/HDFS-10630
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Inigo Goiri
>Assignee: Jason Kace
> Attachments: HDFS-10630.001.patch, HDFS-10630.002.patch, 
> HDFS-10630-HDFS-10467-003.patch, HDFS-10630-HDFS-10467-004.patch
>
>
> Interface to store the federation shared state across Routers.






[jira] [Updated] (HDFS-11569) Ozone: Implement listKey function for KeyManager

2017-04-06 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-11569:
---
Attachment: HDFS-11569-HDFS-7240.007.patch

> Ozone: Implement listKey function for KeyManager
> 
>
> Key: HDFS-11569
> URL: https://issues.apache.org/jira/browse/HDFS-11569
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-11569-HDFS-7240.001.patch, 
> HDFS-11569-HDFS-7240.002.patch, HDFS-11569-HDFS-7240.003.patch, 
> HDFS-11569-HDFS-7240.004.patch, HDFS-11569-HDFS-7240.005.patch, 
> HDFS-11569-HDFS-7240.006.patch, HDFS-11569-HDFS-7240.007.patch
>
>
> List keys by prefix from a container. This will need to support pagination 
> for the purpose of small object support. So the listKey function returns 
> something like a ListKeyResult; the client can iterate over the object to get 
> paginated results.
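A rough sketch of that iteration pattern, with illustrative names and an in-memory stand-in for the container's key store (not the actual KeyManager API):

{code}
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.TreeSet;

// Hedged sketch of the ListKeyResult idea: the caller iterates keys
// matching a prefix while the result object pulls one page at a time.
public class ListKeyResultSketch implements Iterable<String> {
  private final TreeSet<String> store;  // stand-in for the key manager
  private final String prefix;
  private final int pageSize;

  ListKeyResultSketch(TreeSet<String> store, String prefix, int pageSize) {
    this.store = store;
    this.prefix = prefix;
    this.pageSize = pageSize;
  }

  // One paged call: keys strictly after startKey that match the prefix.
  private List<String> fetchPage(String startKey) {
    List<String> page = new ArrayList<>();
    for (String k : store.tailSet(startKey, false)) {
      if (!k.startsWith(prefix) || page.size() == pageSize) {
        break;  // matching keys are contiguous in sorted order
      }
      page.add(k);
    }
    return page;
  }

  @Override
  public Iterator<String> iterator() {
    return new Iterator<String>() {
      private List<String> page = fetchPage(prefix);
      private int pos = 0;

      @Override
      public boolean hasNext() {
        if (pos < page.size()) {
          return true;
        }
        if (page.isEmpty()) {
          return false;
        }
        page = fetchPage(page.get(page.size() - 1));  // pull next page
        pos = 0;
        return !page.isEmpty();
      }

      @Override
      public String next() {
        return page.get(pos++);
      }
    };
  }

  public static void main(String[] args) {
    TreeSet<String> keys = new TreeSet<>();
    for (int i = 0; i < 10; i++) {
      keys.add("vol/bucket/key-" + i);
    }
    keys.add("other/key");  // filtered out by the prefix
    for (String k : new ListKeyResultSketch(keys, "vol/bucket/", 3)) {
      System.out.println(k);
    }
  }
}
{code}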






[jira] [Commented] (HDFS-10882) Federation State Store Interface API

2017-04-06 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15960178#comment-15960178
 ] 

Inigo Goiri commented on HDFS-10882:


The failed unit test seems unrelated.
Follow-up JIRAs will include unit tests for this part too.
If nobody objects, I'll commit 006 in a couple of hours.

> Federation State Store Interface API
> 
>
> Key: HDFS-10882
> URL: https://issues.apache.org/jira/browse/HDFS-10882
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Jason Kace
>Assignee: Jason Kace
> Attachments: HDFS-10882-HDFS-10467-001.patch, 
> HDFS-10882-HDFS-10467-002.patch, HDFS-10882-HDFS-10467-003.patch, 
> HDFS-10882-HDFS-10467-004.patch, HDFS-10882-HDFS-10467-005.patch, 
> HDFS-10882-HDFS-10467-006.patch
>
>
> The minimal classes and interfaces required to create state store internal 
> data APIs using protobuf serialization.  This is a pre-requisite for higher 
> level APIs such as the registration API and the mount table API.






[jira] [Assigned] (HDFS-11580) Ozone: Support asynchronous client API for SCM and containers

2017-04-06 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin reassigned HDFS-11580:


Assignee: Yiqun Lin

> Ozone: Support asynchronous client API for SCM and containers
> 
>
> Key: HDFS-11580
> URL: https://issues.apache.org/jira/browse/HDFS-11580
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Yiqun Lin
>
> This is an umbrella JIRA to support a set of APIs in asynchronous form.
> The container (datanode) API currently supports a call, {{sendCommand}}; we 
> need to build a proper programming interface around it and support an async 
> variant.
> There is also a set of SCM APIs that clients can call; it would be nice to 
> support an async interface for those too.
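For illustration only, one common way to layer an async interface over a blocking {{sendCommand}} is to return a CompletableFuture; the names below are hypothetical, not the actual XceiverClient API:

{code}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hedged sketch only: sendCommandAsync is a hypothetical name for the
// async variant proposed here; the real client API may differ.
public class AsyncCommandClient {
  private final ExecutorService executor = Executors.newFixedThreadPool(4);

  // Stand-in for the existing blocking call.
  private String sendCommand(String request) {
    return "response-to-" + request;
  }

  // Async variant: run the blocking call off-thread and hand back a
  // future so callers attach callbacks instead of waiting.
  public CompletableFuture<String> sendCommandAsync(String request) {
    return CompletableFuture.supplyAsync(() -> sendCommand(request), executor);
  }

  public void shutdown() {
    executor.shutdown();
  }

  public static void main(String[] args) {
    AsyncCommandClient client = new AsyncCommandClient();
    client.sendCommandAsync("createContainer")
        .thenAccept(r -> System.out.println("got " + r))
        .join();
    client.shutdown();
  }
}
{code}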






[jira] [Commented] (HDFS-11580) Ozone: Support asynchronous client API for SCM and containers

2017-04-06 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15960174#comment-15960174
 ] 

Yiqun Lin commented on HDFS-11580:
--

Hi [~anu], I'd like to work on this and have taken a quick look. I still have a 
few things to confirm.

* If we use an async interface to send the request command, does that mean we 
don't return the response to the client, and that we will not do the 
{{validateContainerResponse}} operation? As far as I can see, the current logic 
does check the response.
* The SCM API also invokes {{XceiverClient#sendCommand}}, so does that mean I 
should only refactor sendCommand? Is there anything else I should do for the 
SCM API?

Please let me know if I have misunderstood anything. Thanks!

> Ozone: Support asynchronous client API for SCM and containers
> 
>
> Key: HDFS-11580
> URL: https://issues.apache.org/jira/browse/HDFS-11580
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>
> This is an umbrella JIRA to support a set of APIs in asynchronous form.
> The container (datanode) API currently supports a call, {{sendCommand}}; we 
> need to build a proper programming interface around it and support an async 
> variant.
> There is also a set of SCM APIs that clients can call; it would be nice to 
> support an async interface for those too.






[jira] [Commented] (HDFS-10882) Federation State Store Interface API

2017-04-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15960168#comment-15960168
 ] 

Hadoop QA commented on HDFS-10882:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
16s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m  5s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}101m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-10882 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12862407/HDFS-10882-HDFS-10467-006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 2ddaa555ef4d 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10467 / ac3bd27 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19002/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19002/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19002/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Federation State Store Interface API
> 

[jira] [Commented] (HDFS-11622) TraceId hardcoded to 0 in DataStreamer, correlation between multiple spans is lost

2017-04-06 Thread Karan Mehta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15960131#comment-15960131
 ] 

Karan Mehta commented on HDFS-11622:


bq. As you pointed out, the current dataStreamer span in branch-2.7 does not 
seem to have a situation that sets multiple parents. It looks like an extension 
for future use?
It does look like an extension, since branch-2.7 depends on HTrace 3.1, which 
doesn't have that API. I had a quick look at the code: when creating a new 
{{MilliSpan}}, the constructor can take only one parent as input. Spans can be 
assigned an array of parents only if they are built explicitly via the 
{{Builder}} class inside {{MilliSpan}}, and at this point the usages of that 
builder also provide only a single parentId. To my knowledge, for this branch 
we can go ahead by saving the traceId in the {{DFSPacket}}, if that seems 
acceptable. Let me know your thoughts.

If you want, I can submit a patch for this one.
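A hedged sketch of that proposal (field and method names are illustrative, not the real DFSPacket API): carry the parent's traceId in the packet next to the parent span ids, so DataStreamer can replace the hardcoded 0:

{code}
import java.util.Arrays;

// Hedged sketch, not the actual DFSPacket code: remember the parent's
// traceId alongside the parent span ids.
class DFSPacketSketch {
  private long[] traceParents = new long[0];
  private long parentTraceId;  // proposed new long field

  void addTraceParent(long spanId, long traceId) {
    traceParents = Arrays.copyOf(traceParents, traceParents.length + 1);
    traceParents[traceParents.length - 1] = spanId;
    parentTraceId = traceId;  // which trace this packet belongs to
  }

  long[] getTraceParents() {
    return traceParents;
  }

  long getParentTraceId() {
    return parentTraceId;
  }
}
{code}

DataStreamer could then build its span with {{new TraceInfo(one.getParentTraceId(), parents[0])}} instead of {{new TraceInfo(0, parents[0])}}.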

> TraceId hardcoded to 0 in DataStreamer, correlation between multiple spans is 
> lost
> --
>
> Key: HDFS-11622
> URL: https://issues.apache.org/jira/browse/HDFS-11622
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tracing
>Reporter: Karan Mehta
>
> In the {{run()}} method of the {{DataStreamer}} class, the following code is 
> written; {{parents\[0\]}} refers to the {{spanId}} of the parent span.
> {code}
>   one = dataQueue.getFirst(); // regular data packet
>   long parents[] = one.getTraceParents();
>   if (parents.length > 0) {
>  scope = Trace.startSpan("dataStreamer", new TraceInfo(0, 
> parents[0]));
> // TODO: use setParents API once it's available from HTrace 
> 3.2
> // scope = Trace.startSpan("dataStreamer", Sampler.ALWAYS);
> // scope.getSpan().setParents(parents);
>   }
> {code}
> The {{scope}} starts a new trace span with a traceId hardcoded to 0. Ideally 
> the traceId should be captured when 
> {{currentPacket.addTraceParent(Trace.currentSpan())}} is invoked. This JIRA 
> proposes an additional long field inside the {{DFSPacket}} class to hold the 
> parent {{traceId}}.






[jira] [Commented] (HDFS-11623) Move system erasure coding policies into hadoop-hdfs-client

2017-04-06 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15960120#comment-15960120
 ] 

Kai Zheng commented on HDFS-11623:
--

Thanks for the clarification. The work looks good to me overall. I suggest 
[~jojochuang] also take a look, since it's related to HDFS-11565.

> Move system erasure coding policies into hadoop-hdfs-client
> ---
>
> Key: HDFS-11623
> URL: https://issues.apache.org/jira/browse/HDFS-11623
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha2
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HDFS-11623.001.patch, HDFS-11623.002.patch, 
> HDFS-11623.003.patch, HDFS-11623.004.patch
>
>
> This is a precursor to HDFS-11565. We need to move the set of system defined 
> EC policies out of the NameNode's ECPolicyManager into the hdfs-client module 
> so it can be referenced by the client.






[jira] [Commented] (HDFS-11623) Move system erasure coding policies into hadoop-hdfs-client

2017-04-06 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15960113#comment-15960113
 ] 

Andrew Wang commented on HDFS-11623:


Aha, now I understand your concern better. I did an IDE refactor to move these 
methods to SystemErasureCodingPolicies. You're right that with user-defined 
policies, some of these changed call sites will switch back to an 
ECPolicyManager getter that can return both system and user-defined policies.

As an aside, I checked all the {{getPolicy}} usages earlier when implementing 
enabling/disabling of EC system policies. We should probably do that again once 
user-defined policies are ready.

> Move system erasure coding policies into hadoop-hdfs-client
> ---
>
> Key: HDFS-11623
> URL: https://issues.apache.org/jira/browse/HDFS-11623
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha2
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HDFS-11623.001.patch, HDFS-11623.002.patch, 
> HDFS-11623.003.patch, HDFS-11623.004.patch
>
>
> This is a precursor to HDFS-11565. We need to move the set of system defined 
> EC policies out of the NameNode's ECPolicyManager into the hdfs-client module 
> so it can be referenced by the client.






[jira] [Updated] (HDFS-10999) Introduce separate stats for Replicated and Erasure Coded Blocks apart from the current Aggregated stats

2017-04-06 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-10999:
--
Status: Patch Available  (was: In Progress)

> Introduce separate stats for Replicated and Erasure Coded Blocks apart from 
> the current Aggregated stats
> 
>
> Key: HDFS-10999
> URL: https://issues.apache.org/jira/browse/HDFS-10999
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
>  Labels: hdfs-ec-3.0-nice-to-have, supportability
> Attachments: HDFS-10999.01.patch
>
>
> Per HDFS-9857, it seems in the Hadoop 3 world, people prefer the more generic 
> term "low redundancy" to the old-fashioned "under replicated". But this term 
> is still being used in messages in several places, such as web ui, dfsadmin 
> and fsck. We should probably change them to avoid confusion.
> File this jira to discuss it.






[jira] [Updated] (HDFS-10999) Introduce separate stats for Replicated and Erasure Coded Blocks apart from the current Aggregated stats

2017-04-06 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-10999:
--
Attachment: HDFS-10999.01.patch

Based on previous discussions, attaching v01 patch to address the following.
* {{ClientProtocol}} extended to support {{ReplicatedBlocksStats 
getReplicatedBlocksStats()}} and {{ECBlockGroupsStats getECBlockGroupsStats()}} 
apart from the current {{long[] getStats()}}.
* Introduced new MBeans {{ECBlockGroupsStatsMBean}} and 
{{ReplicatedBlocksStatsMBean}} for consumers like {{DFSAdmin}} and the WebUI.
* {{FSNamesystemMBean}} will continue to carry aggregated stats combining both 
replicated and EC block stats. Since these are now aggregated stats, the old 
methods are deprecated in favor of properly named ones.
* {{FSNamesystem}} now implements {{ECBlockGroupsStatsMBean}} and 
{{ReplicatedBlocksStatsMBean}} apart from the already implemented 
{{FSNamesystemMBean}}.
* {{BlockManager}} changes expose the stats specific to replicated and EC 
blocks.
* {{LowRedundancyBlocks}}, {{CorruptReplicasMap}} and {{InvalidateBlocks}} 
updated to track replicated and EC blocks separately using LongAccumulators. 
The existing aggregate size() methods are not altered, for backward 
compatibility.
* {{PBHelperClient}}, {{ClientNamenodeProtocolTranslatorPB}} and 
{{ClientNamenodeProtocolServerSideTranslatorPB}} are updated to plumb in the 
new ClientProtocol services.
* {{ClientNamenodeProtocol.proto}} updated to define the protobuf messages for 
the new ClientProtocol services.
* {{TestNameNodeMetrics}} and {{TestUnderReplicatedBlocks}} are updated to 
verify the new stats. Several other tests are updated to verify the needed 
block counts.
* PS: {{DfsAdmin -report}} and the WebUI are not yet updated to use the new 
infrastructure. Once we finalize this infrastructure, I can take up the 
consumers separately in a new JIRA.

[~andrew.wang], [~tasanuma0829], [~jojochuang], can you please take a look at 
the attached patch?
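For orientation, a schematic of the protocol split described above; only {{getStats()}}, {{getReplicatedBlocksStats()}} and {{getECBlockGroupsStats()}} are named in the summary, while the per-view getters inside each stats type are assumptions:

{code}
// Hedged schematic, not the patch itself: the getters inside each
// view are assumptions; only the three top-level methods are named
// in the patch summary.
interface ReplicatedBlocksStats {
  long getLowRedundancyBlocks();
  long getCorruptBlocks();
  long getMissingBlocks();
}

interface ECBlockGroupsStats {
  long getLowRedundancyBlockGroups();
  long getCorruptBlockGroups();
  long getMissingBlockGroups();
}

interface ClientProtocolSketch {
  long[] getStats();                                 // existing aggregate view
  ReplicatedBlocksStats getReplicatedBlocksStats();  // replicated blocks only
  ECBlockGroupsStats getECBlockGroupsStats();        // EC block groups only
}
{code}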

> Introduce separate stats for Replicated and Erasure Coded Blocks apart from 
> the current Aggregated stats
> 
>
> Key: HDFS-10999
> URL: https://issues.apache.org/jira/browse/HDFS-10999
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
>  Labels: hdfs-ec-3.0-nice-to-have, supportability
> Attachments: HDFS-10999.01.patch
>
>
> Per HDFS-9857, it seems in the Hadoop 3 world, people prefer the more generic 
> term "low redundancy" to the old-fashioned "under replicated". But this term 
> is still being used in messages in several places, such as web ui, dfsadmin 
> and fsck. We should probably change them to avoid confusion.
> File this jira to discuss it.






[jira] [Updated] (HDFS-10999) Introduce separate stats for Replicated and Erasure Coded Blocks apart from the current Aggregated stats

2017-04-06 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-10999:
--
Summary: Introduce separate stats for Replicated and Erasure Coded Blocks 
apart from the current Aggregated stats  (was: Use more generic "low 
redundancy" blocks instead of "under replicated" blocks)

> Introduce separate stats for Replicated and Erasure Coded Blocks apart from 
> the current Aggregated stats
> 
>
> Key: HDFS-10999
> URL: https://issues.apache.org/jira/browse/HDFS-10999
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
>  Labels: hdfs-ec-3.0-nice-to-have, supportability
>
> Per HDFS-9857, it seems in the Hadoop 3 world, people prefer the more generic 
> term "low redundancy" to the old-fashioned "under replicated". But this term 
> is still being used in messages in several places, such as web ui, dfsadmin 
> and fsck. We should probably change them to avoid confusion.
> File this jira to discuss it.






[jira] [Commented] (HDFS-11565) Use compact identifiers for built-in ECPolicies in HdfsFileStatus

2017-04-06 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15960087#comment-15960087
 ] 

Andrew Wang commented on HDFS-11565:


Thanks for taking a look, Wei-Chiu! As I said in my previous comment, I'd 
prefer to dedupe pluggable EC policies once we make more progress on that work. 
I can file a follow-on JIRA.

> Use compact identifiers for built-in ECPolicies in HdfsFileStatus
> -
>
> Key: HDFS-11565
> URL: https://issues.apache.org/jira/browse/HDFS-11565
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha2
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11565.001.patch
>
>
> Discussed briefly on HDFS-7337 with Kai Zheng. Quoting our convo:
> {quote}
> From looking at the protos, one other question I had is about the overhead of 
> these protos when using the hardcoded policies. There are a bunch of strings 
> and ints, which can be kind of heavy since they're added to each 
> HdfsFileStatus. Should we make the built-in ones identified by purely an ID, 
> with these fully specified protos used for the pluggable policies?
> {quote}
> {quote}
> Sounds like this could be considered separately because, for either built-in 
> policies or plugged-in policies, the full meta info is maintained either in 
> the code or persisted in the fsimage, so identifying them purely by an ID 
> should work fine. If agreed, we could refactor the code you mentioned above 
> separately.
> {quote}






[jira] [Commented] (HDFS-11622) TraceId hardcoded to 0 in DataStreamer, correlation between multiple spans is lost

2017-04-06 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15960078#comment-15960078
 ] 

Masatake Iwasaki commented on HDFS-11622:
-

bq. The DFSPacket initializes the parents field when it is dumping the data in 
dataQueue with the line packet.addTraceParent(Tracer.getCurrentSpanId()), thus 
getting the current trace from the ThreadLocal. At this point, I feel that we 
can also get the value of the trace ID and add the info inside the DFSPacket. 
Any thoughts on this one?

As you pointed out, the current {{dataStreamer}} span in branch-2.7 does not 
seem to have a situation that sets multiple parents. It looks like an extension 
for future use?


> TraceId hardcoded to 0 in DataStreamer, correlation between multiple spans is 
> lost
> --
>
> Key: HDFS-11622
> URL: https://issues.apache.org/jira/browse/HDFS-11622
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tracing
>Reporter: Karan Mehta
>
> In the {{run()}} method of the {{DataStreamer}} class, the following code is 
> written; {{parents\[0\]}} refers to the {{spanId}} of the parent span.
> {code}
>   one = dataQueue.getFirst(); // regular data packet
>   long parents[] = one.getTraceParents();
>   if (parents.length > 0) {
>  scope = Trace.startSpan("dataStreamer", new TraceInfo(0, 
> parents[0]));
> // TODO: use setParents API once it's available from HTrace 
> 3.2
> // scope = Trace.startSpan("dataStreamer", Sampler.ALWAYS);
> // scope.getSpan().setParents(parents);
>   }
> {code}
> The {{scope}} starts a new trace span with a traceId hardcoded to 0. Ideally 
> the traceId should be captured when 
> {{currentPacket.addTraceParent(Trace.currentSpan())}} is invoked. This JIRA 
> proposes an additional long field inside the {{DFSPacket}} class to hold the 
> parent {{traceId}}.






[jira] [Commented] (HDFS-11623) Move system erasure coding policies into hadoop-hdfs-client

2017-04-06 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15960077#comment-15960077
 ] 

Kai Zheng commented on HDFS-11623:
--

Hi Andrew,

Thanks for your update! I understand your point.

For changes like the one below, do we need to change them back when 
user-defined policies are supported? It's OK with me to have changes like this; 
we can later bring back some of the removed methods, like {{getPolicyByID}} on 
ErasureCodingPolicyManager, when working on pluggable policies.
{code}
@@ -302,7 +303,7 @@ private static ErasureCodingPolicy 
getErasureCodingPolicyForPath(
 if (inode.isFile()) {
   byte id = inode.asFile().getErasureCodingPolicyID();
   return id < 0 ? null :
-  ErasureCodingPolicyManager.getPolicyByID(id);
+  SystemErasureCodingPolicies.getByID(id);
 }
{code}

> Move system erasure coding policies into hadoop-hdfs-client
> ---
>
> Key: HDFS-11623
> URL: https://issues.apache.org/jira/browse/HDFS-11623
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha2
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HDFS-11623.001.patch, HDFS-11623.002.patch, 
> HDFS-11623.003.patch, HDFS-11623.004.patch
>
>
> This is a precursor to HDFS-11565. We need to move the set of system defined 
> EC policies out of the NameNode's ECPolicyManager into the hdfs-client module 
> so it can be referenced by the client.






[jira] [Commented] (HDFS-11622) TraceId hardcoded to 0 in DataStreamer, correlation between multiple spans is lost

2017-04-06 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15960074#comment-15960074
 ] 

Masatake Iwasaki commented on HDFS-11622:
-

bq. I am unclear about the use of trace ID at this point if all of them can be 
easily traced via their parent span IDs. Even in cases where the trace doesn't 
form any DAG and is a linearly growing span, the information can still be 
tracked via the parent span ID.

That's right. In HTrace 4 there is no separate trace ID; part of the span ID is 
the equivalent.
https://github.com/apache/incubator-htrace/blob/4.2/htrace-core4/src/main/java/org/apache/htrace/core/SpanId.java#L30-L31

I think the trace ID is useful for efficiently pulling the relevant spans out 
of the whole span space before traversing parent-child relationships. If you 
use HBase or similar to store spans, you can co-locate related spans by using 
the trace ID as the leading bits of the rowkey.
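A small illustration of that rowkey layout, assuming 64-bit trace and span ids (purely illustrative):

{code}
import java.nio.ByteBuffer;

// Hedged illustration: trace id first, so all spans of one trace
// sort together in HBase.
public class SpanRowKey {
  public static byte[] rowKey(long traceId, long spanId) {
    return ByteBuffer.allocate(16).putLong(traceId).putLong(spanId).array();
  }

  public static void main(String[] args) {
    byte[] key = rowKey(0xCAFEL, 0xBEEFL);
    System.out.println("rowkey length: " + key.length);  // 16 bytes
  }
}
{code}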

> TraceId hardcoded to 0 in DataStreamer, correlation between multiple spans is 
> lost
> --
>
> Key: HDFS-11622
> URL: https://issues.apache.org/jira/browse/HDFS-11622
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tracing
>Reporter: Karan Mehta
>
> In the {{run()}} method of the {{DataStreamer}} class, the following code is 
> written; {{parents\[0\]}} refers to the {{spanId}} of the parent span.
> {code}
>   one = dataQueue.getFirst(); // regular data packet
>   long parents[] = one.getTraceParents();
>   if (parents.length > 0) {
>  scope = Trace.startSpan("dataStreamer", new TraceInfo(0, 
> parents[0]));
> // TODO: use setParents API once it's available from HTrace 
> 3.2
> // scope = Trace.startSpan("dataStreamer", Sampler.ALWAYS);
> // scope.getSpan().setParents(parents);
>   }
> {code}
> The {{scope}} starts a new trace span with a traceId hardcoded to 0. Ideally 
> the traceId should be captured when 
> {{currentPacket.addTraceParent(Trace.currentSpan())}} is invoked. This JIRA 
> proposes an additional long field inside the {{DFSPacket}} class to hold the 
> parent {{traceId}}.






[jira] [Commented] (HDFS-11623) Move system erasure coding policies into hadoop-hdfs-client

2017-04-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15960061#comment-15960061
 ] 

Hadoop QA commented on HDFS-11623:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 19 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
31s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-hdfs-httpfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-hdfs-project: The patch generated 2 new + 
405 unchanged - 1 fixed = 407 total (was 406) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
37s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 4 new + 9 
unchanged - 0 fixed = 13 total (was 9) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
8s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 24s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
14s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11623 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12862394/HDFS-11623.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6f2f29341612 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0eacd4c |
| Default Java | 

[jira] [Updated] (HDFS-10882) Federation State Store Interface API

2017-04-06 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HDFS-10882:
---
Attachment: HDFS-10882-HDFS-10467-006.patch

Moving from {{RecordPBImpl}} to {{PBRecord}}.
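As a hedged guess at the shape of such an interface (the actual HDFS-10467 definition may differ), a record could expose the protobuf message that backs it, with serialization falling out of the message itself:

{code}
import com.google.protobuf.Message;

// Hedged sketch only; not the actual PBRecord definition.
interface PBRecordSketch<T extends Message> {
  T getProto();            // protobuf message backing this record

  void setProto(T proto);

  default byte[] serialize() {
    return getProto().toByteArray();
  }
}
{code}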

> Federation State Store Interface API
> 
>
> Key: HDFS-10882
> URL: https://issues.apache.org/jira/browse/HDFS-10882
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Jason Kace
>Assignee: Jason Kace
> Attachments: HDFS-10882-HDFS-10467-001.patch, 
> HDFS-10882-HDFS-10467-002.patch, HDFS-10882-HDFS-10467-003.patch, 
> HDFS-10882-HDFS-10467-004.patch, HDFS-10882-HDFS-10467-005.patch, 
> HDFS-10882-HDFS-10467-006.patch
>
>
> The minimal classes and interfaces required to create state store internal 
> data APIs using protobuf serialization.  This is a pre-requisite for higher 
> level APIs such as the registration API and the mount table API.






[jira] [Commented] (HDFS-10882) Federation State Store Interface API

2017-04-06 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15960021#comment-15960021
 ] 

Subru Krishnan commented on HDFS-10882:
---

Thanks [~elgoiri] for addressing my comments. I have only one minor comment: 
can you rename {{RecordPBImpl}}, since it's an interface?

> Federation State Store Interface API
> 
>
> Key: HDFS-10882
> URL: https://issues.apache.org/jira/browse/HDFS-10882
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Jason Kace
>Assignee: Jason Kace
> Attachments: HDFS-10882-HDFS-10467-001.patch, 
> HDFS-10882-HDFS-10467-002.patch, HDFS-10882-HDFS-10467-003.patch, 
> HDFS-10882-HDFS-10467-004.patch, HDFS-10882-HDFS-10467-005.patch
>
>
> The minimal classes and interfaces required to create state store internal 
> data APIs using protobuf serialization.  This is a pre-requisite for higher 
> level APIs such as the registration API and the mount table API.






[jira] [Comment Edited] (HDFS-11565) Use compact identifiers for built-in ECPolicies in HdfsFileStatus

2017-04-06 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15959995#comment-15959995
 ] 

Wei-Chiu Chuang edited comment on HDFS-11565 at 4/7/17 12:06 AM:
-

[~andrew.wang], thanks for working on it. The patch itself looks reasonable. 
Let's review it after HDFS-11623 is checked in.

One issue I saw is:
{code}
+if (policy == null) {
+  return new ErasureCodingPolicy(proto.getName(),
+  convertECSchema(proto.getSchema()),
+  proto.getCellSize(), id);
+}
{code}
This means a new ErasureCodingPolicy object is created each time this is 
called. Shouldn't it be cached, like SYSTEM_POLICIES_BY_NAME?


was (Author: jojochuang):
[~andrew.wang], thanks for working on it. The patch itself looks reasonable. 
Let's review it after HDFS-11623 is checked in.

One issue I saw is:
{code}
+if (policy == null) {
+  return new ErasureCodingPolicy(proto.getName(),
+  convertECSchema(proto.getSchema()),
+  proto.getCellSize(), id);
+}
{code}
This means a new ErasureCodingPolicy object is created each time this is 
called. Shouldn't it be cached, like SYSTEM_POLICIES_BY_ID?

> Use compact identifiers for built-in ECPolicies in HdfsFileStatus
> -
>
> Key: HDFS-11565
> URL: https://issues.apache.org/jira/browse/HDFS-11565
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha2
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11565.001.patch
>
>
> Discussed briefly on HDFS-7337 with Kai Zheng. Quoting our convo:
> {quote}
> From looking at the protos, one other question I had is about the overhead of 
> these protos when using the hardcoded policies. There are a bunch of strings 
> and ints, which can be kind of heavy since they're added to each 
> HdfsFileStatus. Should we make the built-in ones identified by purely an ID, 
> with these fully specified protos used for the pluggable policies?
> {quote}
> {quote}
> Sounds like this could be considered separately because, for either built-in 
> policies or plugged-in policies, the full meta info is maintained either in 
> the code or persisted in the fsimage, so identifying them purely by an ID 
> should work fine. If agreed, we could refactor the code you mentioned above 
> separately.
> {quote}






[jira] [Commented] (HDFS-11608) HDFS write crashed with block size greater than 2 GB

2017-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15960010#comment-15960010
 ] 

Hudson commented on HDFS-11608:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11543 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11543/])
HDFS-11608. HDFS write crashed with block size greater than 2 GB. (xyao: rev 
0eacd4c13be9bad0fbed9421a6539c64bbda4df1)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PacketReceiver.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java


> HDFS write crashed with block size greater than 2 GB
> 
>
> Key: HDFS-11608
> URL: https://issues.apache.org/jira/browse/HDFS-11608
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Critical
> Fix For: 2.9.0, 3.0.0-alpha3, 2.8.1
>
> Attachments: HDFS-11608.000.patch, HDFS-11608.001.patch, 
> HDFS-11608.002.patch, HDFS-11608.003.patch
>
>
> We've seen HDFS write crashes in the case of huge block size. For example, 
> writing a 3 GB file using block size > 2 GB (e.g., 3 GB), HDFS client throws 
> out of memory exception. DataNode gives out IOException. After changing heap 
> size limit,  DFSOutputStream ResponseProcessor exception is seen followed by 
> Broken pipe and pipeline recovery.
> Given below:
> DN exception,
> {noformat}
> 2017-03-30 16:34:33,828 ERROR datanode.DataNode (DataXceiver.java:run(278)) - 
> c6401.ambari.apache.org:50010:DataXceiver error processing WRITE_BLOCK 
> operation  src: /192.168.64.101:47167 dst: /192.168.64.101:50010
> java.io.IOException: Incorrect value for packet payload size: 2147483128
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:159)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
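
The reported payload size (2147483128) sits just below Integer.MAX_VALUE, 
which points at 32-bit arithmetic on the remaining block space. A minimal 
sketch of the failure mode (an illustration only; the actual fix is in the 
DFSOutputStream and PacketReceiver changes listed above):
{code}
public class BlockSizeOverflowSketch {
  public static void main(String[] args) {
    long blockSize = 3L * 1024 * 1024 * 1024;  // 3 GB block
    long bytesCurBlock = 0;                    // offset within the block

    // Narrowing the remaining space to int overflows for blocks > 2 GB:
    int broken = (int) (blockSize - bytesCurBlock);
    System.out.println(broken);                // -1073741824

    // Keeping the computation in long and clamping first stays correct:
    int safe = (int) Math.min(blockSize - bytesCurBlock, 64 * 1024);
    System.out.println(safe);                  // 65536, e.g. a packet size
  }
}
{code}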



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11630) TestThrottledAsyncCheckerTimeout fails intermittently in Jenkins builds

2017-04-06 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15959997#comment-15959997
 ] 

Hanisha Koneru commented on HDFS-11630:
---

[~arpitagarwal], thanks for the review.

bq. Also instead of a single call, you can probably use a loop like you have in 
the other two test cases.

The FutureCallBack used in this test is a mock FutureCallBack object, so we 
would not have an exit condition for the loop. The other option is to not use 
a mock FutureCallBack.

bq. Another suggestion is to make the timeouts less aggressive. We can set 
them to e.g. 120 seconds instead of 1 or 2 seconds to reduce spurious timeouts 
when testing on overloaded VMs.

Would 120 seconds not be too long to test for timeouts? Since we make the 
dummy checkable wait for the timeout, could we not keep shorter timeouts (like 
2 seconds), so that spurious timeouts would not have a negative impact on the 
tests?

What do you think?
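
For reference, the wait could also be bounded without a fixed-count loop by 
polling the mock itself, along these lines (a sketch assuming Mockito and 
{{GenericTestUtils}}; the callback variable name is illustrative):
{code}
// Poll until the mock has seen exactly one onFailure, rather than
// asserting once after a fixed delay.
GenericTestUtils.waitFor(() -> {
  try {
    Mockito.verify(mockCallback, Mockito.times(1))
        .onFailure(Mockito.any(Throwable.class));
    return true;    // the expected single callback has arrived
  } catch (AssertionError e) {
    return false;   // not yet; keep polling
  }
}, 100, 10000);     // check every 100 ms, give up after 10 s
{code}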

> TestThrottledAsyncCheckerTimeout fails intermittently in Jenkins builds
> ---
>
> Key: HDFS-11630
> URL: https://issues.apache.org/jira/browse/HDFS-11630
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-11630.001.patch
>
>
> TestThrottledAsyncCheckerTimeout#testDiskCheckTimeoutInvokesOneCallbackOnly 
> fails intermittently in Jenkins builds. 
> We need to wait for the disk checker timeout to invoke 
> FutureCallBack#onFailure.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11565) Use compact identifiers for built-in ECPolicies in HdfsFileStatus

2017-04-06 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15959995#comment-15959995
 ] 

Wei-Chiu Chuang commented on HDFS-11565:


[~andrew.wang] thanks for working on it. The patch itself looks reasonable. 
Let's review it after HDFS-11623 is checked in.

One issue I saw is 
{code}
+if (policy == null) {
+  return new ErasureCodingPolicy(proto.getName(),
+  convertECSchema(proto.getSchema()),
+  proto.getCellSize(), id);
+}
{code}
This means a new ErasureCodingPolicy object is created each time it is called. 
Shouldn't it be cached like SYSTEM_POLICIES_BY_ID?
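
For illustration, the caching could look roughly like this (the map and the 
{{computeIfAbsent}} wiring are a sketch, not the actual patch):
{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch only: reuse one ErasureCodingPolicy instance per id instead of
// allocating a new object on every conversion.
private static final Map<Byte, ErasureCodingPolicy> CONVERTED_POLICIES =
    new ConcurrentHashMap<>();

static ErasureCodingPolicy convertCached(ErasureCodingPolicyProto proto,
    byte id) {
  return CONVERTED_POLICIES.computeIfAbsent(id,
      k -> new ErasureCodingPolicy(proto.getName(),
          convertECSchema(proto.getSchema()),
          proto.getCellSize(), k));
}
{code}
One caveat worth discussing: keying purely on the id assumes the pluggable 
policy definition behind that id never changes during the client's lifetime.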

> Use compact identifiers for built-in ECPolicies in HdfsFileStatus
> -
>
> Key: HDFS-11565
> URL: https://issues.apache.org/jira/browse/HDFS-11565
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha2
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11565.001.patch
>
>
> Discussed briefly on HDFS-7337 with Kai Zheng. Quoting our convo:
> {quote}
> From looking at the protos, one other question I had is about the overhead of 
> these protos when using the hardcoded policies. There are a bunch of strings 
> and ints, which can be kind of heavy since they're added to each 
> HdfsFileStatus. Should we make the built-in ones identified purely by an ID, 
> with these fully specified protos used for the pluggable policies?
> {quote}
> {quote}
> Sounds like this could be considered separately because, for either built-in 
> or plugged-in policies, the full meta info is maintained either by the code 
> or persisted in the fsimage, so identifying them purely by an ID should work 
> fine. If you agree, we could refactor the code you mentioned above 
> separately.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11630) TestThrottledAsyncCheckerTimeout fails intermittently in Jenkins builds

2017-04-06 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15959972#comment-15959972
 ] 

Arpit Agarwal commented on HDFS-11630:
--

Also instead of a single call, you can probably use a loop like you have in the 
other two test cases.

Another suggestion is to make the timeouts less aggressive. We can set them to 
e.g. 120 seconds instead of 1 or 2 seconds to reduce spurious timeouts when 
testing on overloaded VMs.

> TestThrottledAsyncCheckerTimeout fails intermittently in Jenkins builds
> ---
>
> Key: HDFS-11630
> URL: https://issues.apache.org/jira/browse/HDFS-11630
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-11630.001.patch
>
>
> TestThrottledAsyncCheckerTimeout#testDiskCheckTimeoutInvokesOneCallbackOnly 
> fails intermittently in Jenkins builds. 
> We need to wait for the disk checker timeout to invoke 
> FutureCallBack#onFailure.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11630) TestThrottledAsyncCheckerTimeout fails intermittently in Jenkins builds

2017-04-06 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15959969#comment-15959969
 ] 

Arpit Agarwal commented on HDFS-11630:
--

Thanks for fixing this [~hanishakoneru].

Should we also update line 164 in 
{{testDiskCheckTimeoutInvokesOneCallbackOnly}}?

> TestThrottledAsyncCheckerTimeout fails intermittently in Jenkins builds
> ---
>
> Key: HDFS-11630
> URL: https://issues.apache.org/jira/browse/HDFS-11630
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-11630.001.patch
>
>
> TestThrottledAsyncCheckerTimeout#testDiskCheckTimeoutInvokesOneCallbackOnly 
> fails intermittently in Jenkins builds. 
> We need to wait for the disk checker timeout to invoke 
> FutureCallBack#onFailure.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11608) HDFS write crashed with block size greater than 2 GB

2017-04-06 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15959956#comment-15959956
 ] 

Xiaoyu Yao edited comment on HDFS-11608 at 4/6/17 11:35 PM:


Thanks [~xiaobingo] for the contribution and all for the reviews and 
discussions. I committed the patch to trunk, branch-2, and branch-2.8.

[~xiaobingo], can you help prepare a patch for branch-2.7, which has the same 
issue?


was (Author: xyao):
Thanks [~xiaobingo] for the contribution and all for the reviews and 
discussions. I committed the patch to trunk and branch-2.

Also, I suggest backporting this to branch-2.7 and branch-2.8, which have the 
same issue.

> HDFS write crashed with block size greater than 2 GB
> 
>
> Key: HDFS-11608
> URL: https://issues.apache.org/jira/browse/HDFS-11608
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Critical
> Fix For: 2.9.0, 3.0.0-alpha3, 2.8.1
>
> Attachments: HDFS-11608.000.patch, HDFS-11608.001.patch, 
> HDFS-11608.002.patch, HDFS-11608.003.patch
>
>
> We've seen HDFS write crashes in the case of huge block size. For example, 
> writing a 3 GB file using block size > 2 GB (e.g., 3 GB), HDFS client throws 
> out of memory exception. DataNode gives out IOException. After changing heap 
> size limit,  DFSOutputStream ResponseProcessor exception is seen followed by 
> Broken pipe and pipeline recovery.
> Given below:
> DN exception,
> {noformat}
> 2017-03-30 16:34:33,828 ERROR datanode.DataNode (DataXceiver.java:run(278)) - 
> c6401.ambari.apache.org:50010:DataXceiver error processing WRITE_BLOCK 
> operation  src: /192.168.64.101:47167 dst: /192.168.64.101:50010
> java.io.IOException: Incorrect value for packet payload size: 2147483128
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:159)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11608) HDFS write crashed with block size greater than 2 GB

2017-04-06 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11608:
--
Fix Version/s: 2.8.1

> HDFS write crashed with block size greater than 2 GB
> 
>
> Key: HDFS-11608
> URL: https://issues.apache.org/jira/browse/HDFS-11608
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Critical
> Fix For: 2.9.0, 3.0.0-alpha3, 2.8.1
>
> Attachments: HDFS-11608.000.patch, HDFS-11608.001.patch, 
> HDFS-11608.002.patch, HDFS-11608.003.patch
>
>
> We've seen HDFS write crashes in the case of huge block size. For example, 
> writing a 3 GB file using block size > 2 GB (e.g., 3 GB), HDFS client throws 
> out of memory exception. DataNode gives out IOException. After changing heap 
> size limit,  DFSOutputStream ResponseProcessor exception is seen followed by 
> Broken pipe and pipeline recovery.
> Given below:
> DN exception,
> {noformat}
> 2017-03-30 16:34:33,828 ERROR datanode.DataNode (DataXceiver.java:run(278)) - 
> c6401.ambari.apache.org:50010:DataXceiver error processing WRITE_BLOCK 
> operation  src: /192.168.64.101:47167 dst: /192.168.64.101:50010
> java.io.IOException: Incorrect value for packet payload size: 2147483128
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:159)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11608) HDFS write crashed with block size greater than 2 GB

2017-04-06 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11608:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha3
   2.9.0
   Status: Resolved  (was: Patch Available)

Thanks [~xiaobingo] for the contribution and all for the reviews and 
discussions. I committed the patch to trunk and branch-2.

Also, I suggest backporting this to branch-2.7 and branch-2.8, which have the 
same issue.

> HDFS write crashed with block size greater than 2 GB
> 
>
> Key: HDFS-11608
> URL: https://issues.apache.org/jira/browse/HDFS-11608
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Critical
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HDFS-11608.000.patch, HDFS-11608.001.patch, 
> HDFS-11608.002.patch, HDFS-11608.003.patch
>
>
> We've seen HDFS write crashes in the case of huge block size. For example, 
> writing a 3 GB file using block size > 2 GB (e.g., 3 GB), HDFS client throws 
> out of memory exception. DataNode gives out IOException. After changing heap 
> size limit,  DFSOutputStream ResponseProcessor exception is seen followed by 
> Broken pipe and pipeline recovery.
> Given below:
> DN exception,
> {noformat}
> 2017-03-30 16:34:33,828 ERROR datanode.DataNode (DataXceiver.java:run(278)) - 
> c6401.ambari.apache.org:50010:DataXceiver error processing WRITE_BLOCK 
> operation  src: /192.168.64.101:47167 dst: /192.168.64.101:50010
> java.io.IOException: Incorrect value for packet payload size: 2147483128
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:159)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-11608) HDFS write crashed with block size greater than 2 GB

2017-04-06 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reassigned HDFS-11608:


Assignee: Xiaobing Zhou  (was: Xiaoyu Yao)

> HDFS write crashed with block size greater than 2 GB
> 
>
> Key: HDFS-11608
> URL: https://issues.apache.org/jira/browse/HDFS-11608
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Critical
> Attachments: HDFS-11608.000.patch, HDFS-11608.001.patch, 
> HDFS-11608.002.patch, HDFS-11608.003.patch
>
>
> We've seen HDFS write crashes in the case of huge block size. For example, 
> writing a 3 GB file using block size > 2 GB (e.g., 3 GB), HDFS client throws 
> out of memory exception. DataNode gives out IOException. After changing heap 
> size limit,  DFSOutputStream ResponseProcessor exception is seen followed by 
> Broken pipe and pipeline recovery.
> Given below:
> DN exception,
> {noformat}
> 2017-03-30 16:34:33,828 ERROR datanode.DataNode (DataXceiver.java:run(278)) - 
> c6401.ambari.apache.org:50010:DataXceiver error processing WRITE_BLOCK 
> operation  src: /192.168.64.101:47167 dst: /192.168.64.101:50010
> java.io.IOException: Incorrect value for packet payload size: 2147483128
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:159)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11623) Move system erasure coding policies into hadoop-hdfs-client

2017-04-06 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11623:
---
Attachment: HDFS-11623.004.patch

Forgot a closing paren...

> Move system erasure coding policies into hadoop-hdfs-client
> ---
>
> Key: HDFS-11623
> URL: https://issues.apache.org/jira/browse/HDFS-11623
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha2
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HDFS-11623.001.patch, HDFS-11623.002.patch, 
> HDFS-11623.003.patch, HDFS-11623.004.patch
>
>
> This is a precursor to HDFS-11565. We need to move the set of system defined 
> EC policies out of the NameNode's ECPolicyManager into the hdfs-client module 
> so it can be referenced by the client.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11608) HDFS write crashed with block size greater than 2 GB

2017-04-06 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11608:
--
Summary: HDFS write crashed with block size greater than 2 GB  (was: HDFS 
write crashed in the case of huge block size)

> HDFS write crashed with block size greater than 2 GB
> 
>
> Key: HDFS-11608
> URL: https://issues.apache.org/jira/browse/HDFS-11608
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaoyu Yao
>Priority: Critical
> Attachments: HDFS-11608.000.patch, HDFS-11608.001.patch, 
> HDFS-11608.002.patch, HDFS-11608.003.patch
>
>
> We've seen HDFS write crashes in the case of huge block size. For example, 
> writing a 3 GB file using block size > 2 GB (e.g., 3 GB), HDFS client throws 
> out of memory exception. DataNode gives out IOException. After changing heap 
> size limit,  DFSOutputStream ResponseProcessor exception is seen followed by 
> Broken pipe and pipeline recovery.
> Given below:
> DN exception,
> {noformat}
> 2017-03-30 16:34:33,828 ERROR datanode.DataNode (DataXceiver.java:run(278)) - 
> c6401.ambari.apache.org:50010:DataXceiver error processing WRITE_BLOCK 
> operation  src: /192.168.64.101:47167 dst: /192.168.64.101:50010
> java.io.IOException: Incorrect value for packet payload size: 2147483128
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:159)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11608) HDFS write crashed in the case of huge block size

2017-04-06 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11608:
--
Description: 
We've seen HDFS write crashes in the case of huge block size. For example, 
writing a 3 GB file using block size > 2 GB (e.g., 3 GB), HDFS client throws 
out of memory exception. DataNode gives out IOException. After changing heap 
size limit,  DFSOutputStream ResponseProcessor exception is seen followed by 
Broken pipe and pipeline recovery.

Given below:
DN exception,
{noformat}
2017-03-30 16:34:33,828 ERROR datanode.DataNode (DataXceiver.java:run(278)) - 
c6401.ambari.apache.org:50010:DataXceiver error processing WRITE_BLOCK 
operation  src: /192.168.64.101:47167 dst: /192.168.64.101:50010
java.io.IOException: Incorrect value for packet payload size: 2147483128
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:159)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502)
at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
at java.lang.Thread.run(Thread.java:745)
{noformat}

  was:
We've seen HDFS write crashes in the case of huge block size. For example, 
writing a 3G file using 3G block size, HDFS client throws out of memory 
exception. DataNode gives out IOException. After changing heap size limit,  
DFSOutputStream ResponseProcessor exception is seen followed by Broken pipe and 
pipeline recovery.

Given below:
DN exception,
{noformat}
2017-03-30 16:34:33,828 ERROR datanode.DataNode (DataXceiver.java:run(278)) - 
c6401.ambari.apache.org:50010:DataXceiver error processing WRITE_BLOCK 
operation  src: /192.168.64.101:47167 dst: /192.168.64.101:50010
java.io.IOException: Incorrect value for packet payload size: 2147483128
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:159)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502)
at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
at java.lang.Thread.run(Thread.java:745)
{noformat}


> HDFS write crashed in the case of huge block size
> -
>
> Key: HDFS-11608
> URL: https://issues.apache.org/jira/browse/HDFS-11608
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Critical
> Attachments: HDFS-11608.000.patch, HDFS-11608.001.patch, 
> HDFS-11608.002.patch, HDFS-11608.003.patch
>
>
> We've seen HDFS write crashes in the case of huge block size. For example, 
> writing a 3 GB file using block size > 2 GB (e.g., 3 GB), HDFS client throws 
> out of memory exception. DataNode gives out IOException. After changing heap 
> size limit,  DFSOutputStream ResponseProcessor exception is seen followed by 
> Broken pipe and pipeline recovery.
> Given below:
> DN exception,
> {noformat}
> 2017-03-30 16:34:33,828 ERROR datanode.DataNode (DataXceiver.java:run(278)) - 
> c6401.ambari.apache.org:50010:DataXceiver error processing WRITE_BLOCK 
> operation  src: /192.168.64.101:47167 dst: /192.168.64.101:50010
> java.io.IOException: Incorrect value for packet payload size: 2147483128
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:159)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898)
> at 
> 

[jira] [Assigned] (HDFS-11608) HDFS write crashed in the case of huge block size

2017-04-06 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HDFS-11608:
-

Assignee: Xiaoyu Yao  (was: Xiaobing Zhou)

> HDFS write crashed in the case of huge block size
> -
>
> Key: HDFS-11608
> URL: https://issues.apache.org/jira/browse/HDFS-11608
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaoyu Yao
>Priority: Critical
> Attachments: HDFS-11608.000.patch, HDFS-11608.001.patch, 
> HDFS-11608.002.patch, HDFS-11608.003.patch
>
>
> We've seen HDFS write crashes in the case of huge block size. For example, 
> writing a 3 GB file using block size > 2 GB (e.g., 3 GB), HDFS client throws 
> out of memory exception. DataNode gives out IOException. After changing heap 
> size limit,  DFSOutputStream ResponseProcessor exception is seen followed by 
> Broken pipe and pipeline recovery.
> Given below:
> DN exception,
> {noformat}
> 2017-03-30 16:34:33,828 ERROR datanode.DataNode (DataXceiver.java:run(278)) - 
> c6401.ambari.apache.org:50010:DataXceiver error processing WRITE_BLOCK 
> operation  src: /192.168.64.101:47167 dst: /192.168.64.101:50010
> java.io.IOException: Incorrect value for packet payload size: 2147483128
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:159)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11608) HDFS write crashed in the case of huge block size

2017-04-06 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15959929#comment-15959929
 ] 

Xiaoyu Yao commented on HDFS-11608:
---

Thanks [~xiaobingo] for the update. +1 for the v003 patch. I will commit it 
shortly. 

The Jenkins failure seems unrelated to this change and does not reproduce on 
my local machine. 
I opened HDFS-11632 to track the flaky unit test issue. 


> HDFS write crashed in the case of huge block size
> -
>
> Key: HDFS-11608
> URL: https://issues.apache.org/jira/browse/HDFS-11608
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Critical
> Attachments: HDFS-11608.000.patch, HDFS-11608.001.patch, 
> HDFS-11608.002.patch, HDFS-11608.003.patch
>
>
> We've seen HDFS write crashes in the case of huge block size. For example, 
> writing a 3G file using 3G block size, HDFS client throws out of memory 
> exception. DataNode gives out IOException. After changing heap size limit,  
> DFSOutputStream ResponseProcessor exception is seen followed by Broken pipe 
> and pipeline recovery.
> Given below:
> DN exception,
> {noformat}
> 2017-03-30 16:34:33,828 ERROR datanode.DataNode (DataXceiver.java:run(278)) - 
> c6401.ambari.apache.org:50010:DataXceiver error processing WRITE_BLOCK 
> operation  src: /192.168.64.101:47167 dst: /192.168.64.101:50010
> java.io.IOException: Incorrect value for packet payload size: 2147483128
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:159)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11632) TestCacheDirectives.testWaitForCachedReplicas failed intermittently

2017-04-06 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11632:
--
Summary: TestCacheDirectives.testWaitForCachedReplicas failed 
intermittently   (was: .TestCacheDirectives.testWaitForCachedReplicas failed 
intermittently )

> TestCacheDirectives.testWaitForCachedReplicas failed intermittently 
> 
>
> Key: HDFS-11632
> URL: https://issues.apache.org/jira/browse/HDFS-11632
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Xiaoyu Yao
>
> This was found in a recent Jenkins 
> [run|https://builds.apache.org/job/PreCommit-HDFS-Build/18997/testReport/org.apache.hadoop.hdfs.server.namenode/TestCacheDirectives/testWaitForCachedReplicas/]
> {code}
> Error Message
> expected:<16384> but was:<20480>
> Stacktrace
> java.lang.AssertionError: expected:<16384> but was:<20480>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestCacheDirectives.testWaitForCachedReplicas(TestCacheDirectives.java:965)
> Standard Output
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11632) .TestCacheDirectives.testWaitForCachedReplicas failed intermittently

2017-04-06 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-11632:
-

 Summary: .TestCacheDirectives.testWaitForCachedReplicas failed 
intermittently 
 Key: HDFS-11632
 URL: https://issues.apache.org/jira/browse/HDFS-11632
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Xiaoyu Yao


This was found in a recent Jenkins 
[run|https://builds.apache.org/job/PreCommit-HDFS-Build/18997/testReport/org.apache.hadoop.hdfs.server.namenode/TestCacheDirectives/testWaitForCachedReplicas/]

{code}
Error Message

expected:<16384> but was:<20480>
Stacktrace

java.lang.AssertionError: expected:<16384> but was:<20480>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.hdfs.server.namenode.TestCacheDirectives.testWaitForCachedReplicas(TestCacheDirectives.java:965)
Standard Output
{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11623) Move system erasure coding policies into hadoop-hdfs-client

2017-04-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15959925#comment-15959925
 ] 

Hadoop QA commented on HDFS-11623:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 19 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
56s{color} | {color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 56s{color} 
| {color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
6 unchanged - 400 fixed = 7 total (was 406) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
14s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
12s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 33s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
29s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11623 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12862378/HDFS-11623.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4599aa49d192 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a49fac5 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19000/artifact/patchprocess/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19000/artifact/patchprocess/patch-compile-hadoop-hdfs-project.txt
 |
| javac | 

[jira] [Commented] (HDFS-11608) HDFS write crashed in the case of huge block size

2017-04-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15959892#comment-15959892
 ] 

Hadoop QA commented on HDFS-11608:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
22s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 86m  9s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}125m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestCacheDirectives |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11608 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12862356/HDFS-11608.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7ecefc15c399 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 
16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1a9439e |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18997/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18997/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client 
hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18997/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   

[jira] [Work started] (HDFS-11631) Block Storage : allow cblock server to be started from hdfs command

2017-04-06 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-11631 started by Chen Liang.
-
> Block Storage : allow cblock server to be started from hdfs command
> ---
>
> Key: HDFS-11631
> URL: https://issues.apache.org/jira/browse/HDFS-11631
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11631-HDFS-7240.001.patch
>
>
> This JIRA adds a CBlock main() method and an entry to the hdfs script, so 
> that the cblock server can be started by the hdfs script and run as a 
> daemon process.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11631) Block Storage : allow cblock server to be started from hdfs command

2017-04-06 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11631:
--
Attachment: HDFS-11631-HDFS-7240.001.patch

> Block Storage : allow cblock server to be started from hdfs command
> ---
>
> Key: HDFS-11631
> URL: https://issues.apache.org/jira/browse/HDFS-11631
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11631-HDFS-7240.001.patch
>
>
> This JIRA adds a CBlock main() method and an entry to the hdfs script, so 
> that the cblock server can be started by the hdfs script and run as a 
> daemon process.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11631) Block Storage : allow cblock server to be started from hdfs command

2017-04-06 Thread Chen Liang (JIRA)
Chen Liang created HDFS-11631:
-

 Summary: Block Storage : allow cblock server to be started from 
hdfs command
 Key: HDFS-11631
 URL: https://issues.apache.org/jira/browse/HDFS-11631
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Chen Liang
Assignee: Chen Liang


This JIRA adds a CBlock main() method and an entry to the hdfs script, so that 
the cblock server can be started by the hdfs script and run as a daemon 
process.
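
A rough sketch of what a Hadoop-style daemon entry point tends to look like 
(class names and lifecycle methods below are illustrative, not the actual 
patch):
{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ExitUtil;
import org.apache.hadoop.util.StringUtils;

public class CBlockServerMain {                        // hypothetical class
  private static final Log LOG = LogFactory.getLog(CBlockServerMain.class);

  public static void main(String[] args) {
    try {
      // Standard Hadoop startup banner plus a shutdown-message hook.
      StringUtils.startupShutdownMessage(CBlockServerMain.class, args, LOG);
      Configuration conf = new Configuration();
      CBlockManager server = new CBlockManager(conf);  // hypothetical lifecycle
      server.start();
      server.join();                                   // block as a daemon
    } catch (Throwable t) {
      LOG.error("Failed to start cblock server", t);
      ExitUtil.terminate(1, t);
    }
  }
}
{code}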



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11362) StorageDirectory should initialize a non-null default StorageDirType

2017-04-06 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15959879#comment-15959879
 ] 

Hanisha Koneru commented on HDFS-11362:
---

Thank you [~xyao] for committing the patch.

> StorageDirectory should initialize a non-null default StorageDirType
> 
>
> Key: HDFS-11362
> URL: https://issues.apache.org/jira/browse/HDFS-11362
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HDFS-11362.000.patch, HDFS-11362.001.patch
>
>
> _Storage#shouldReturnNextDir_ method checks if the next Storage directory is 
> of the same type as dirType.
> {noformat}
> private boolean shouldReturnNextDir() {
>   StorageDirectory sd = getStorageDir(nextIndex);
>   return (dirType == null || sd.getStorageDirType().isOfType(dirType)) &&
>   (includeShared || !sd.isShared());
> }
> {noformat}
> There is a possibility that sd.getStorageDirType() returns null (default 
> dirType is null). Hence, before checking for type match, we should make sure 
> that the value returned by sd.getStorageDirType() is not null.
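
A minimal sketch of the null-safe check the description suggests 
(illustrative; the committed change in Storage.java may differ in detail):
{code}
private boolean shouldReturnNextDir() {
  StorageDirectory sd = getStorageDir(nextIndex);
  StorageDirType sdType = sd.getStorageDirType();
  // Only dereference the directory's type when it is actually set.
  return (dirType == null || (sdType != null && sdType.isOfType(dirType)))
      && (includeShared || !sd.isShared());
}
{code}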



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11558) BPServiceActor thread name is too long

2017-04-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15959872#comment-15959872
 ] 

Hadoop QA commented on HDFS-11558:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 15s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 92m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11558 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12862362/HDFS-11558.006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1d12ea59d9a1 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1a9439e |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18998/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18998/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18998/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> BPServiceActor thread name is too long
> --
>
> Key: HDFS-11558
> URL: https://issues.apache.org/jira/browse/HDFS-11558
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
>Priority: Minor
> 

[jira] [Commented] (HDFS-11362) StorageDirectory should initialize a non-null default StorageDirType

2017-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15959864#comment-15959864
 ] 

Hudson commented on HDFS-11362:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11542 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11542/])
HDFS-11362. StorageDirectory should initialize a non-null default (xyao: rev 
a49fac5302128a6f5d971f5818d0fd874c3932e3)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java


> StorageDirectory should initialize a non-null default StorageDirType
> 
>
> Key: HDFS-11362
> URL: https://issues.apache.org/jira/browse/HDFS-11362
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HDFS-11362.000.patch, HDFS-11362.001.patch
>
>
> _Storage#shouldReturnNextDir_ method checks if the next Storage directory is 
> of the same type as dirType.
> {noformat}
> private boolean shouldReturnNextDir() {
>   StorageDirectory sd = getStorageDir(nextIndex);
>   return (dirType == null || sd.getStorageDirType().isOfType(dirType)) &&
>   (includeShared || !sd.isShared());
> }
> {noformat}
> There is a possibility that sd.getStorageDirType() returns null (default 
> dirType is null). Hence, before checking for type match, we should make sure 
> that the value returned by sd.getStorageDirType() is not null.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11623) Move system erasure coding policies into hadoop-hdfs-client

2017-04-06 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11623:
---
Attachment: HDFS-11623.003.patch

New patch attached, renaming the class to {{SystemErasureCodingPolicies}}.

The responsibilities you describe seem like they fit well within the existing 
ErasureCodingPolicyManager class since the set of policies is managed by the 
NN. Stripping out the built-in policies like this patch proposes helps 
modularize the code, and also clarifies the getters.
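
For illustration, a minimal sketch of the holder shape this implies; the class 
body, policy set, and helper methods below are assumptions, not the actual 
patch contents:

{code}
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;

/** Minimal sketch: a static, immutable holder for the built-in EC policies. */
public final class SystemErasureCodingPolicies {

  private static final List<ErasureCodingPolicy> SYS_POLICIES =
      Collections.unmodifiableList(Arrays.asList(
          // built-in policies, e.g. RS-6-3, RS-3-2, RS-LEGACY-6-3, XOR-2-1
      ));

  private SystemErasureCodingPolicies() {}

  /** All built-in policies, referenced from both client and NameNode. */
  public static List<ErasureCodingPolicy> getPolicies() {
    return SYS_POLICIES;
  }

  /** Look up a built-in policy by its ID, or null if unknown. */
  public static ErasureCodingPolicy getByID(byte id) {
    for (ErasureCodingPolicy policy : SYS_POLICIES) {
      if (policy.getId() == id) {
        return policy;
      }
    }
    return null;
  }
}
{code}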

> Move system erasure coding policies into hadoop-hdfs-client
> ---
>
> Key: HDFS-11623
> URL: https://issues.apache.org/jira/browse/HDFS-11623
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha2
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HDFS-11623.001.patch, HDFS-11623.002.patch, 
> HDFS-11623.003.patch
>
>
> This is a precursor to HDFS-11565. We need to move the set of system defined 
> EC policies out of the NameNode's ECPolicyManager into the hdfs-client module 
> so it can be referenced by the client.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10882) Federation State Store Interface API

2017-04-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15959843#comment-15959843
 ] 

Hadoop QA commented on HDFS-10882:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 5s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 16s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-10882 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12862354/HDFS-10882-HDFS-10467-005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 038c27c4d3be 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10467 / ac3bd27 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18996/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18996/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18996/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Federation State Store Interface API
> 

[jira] [Updated] (HDFS-11362) StorageDirectory should initialize a non-null default StorageDirType

2017-04-06 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11362:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha3
   2.9.0
   Status: Resolved  (was: Patch Available)

Thanks [~hanishakoneru] for the contribution and all for the reviews. I've 
committed the patch to trunk and branch-2.

> StorageDirectory should initialize a non-null default StorageDirType
> 
>
> Key: HDFS-11362
> URL: https://issues.apache.org/jira/browse/HDFS-11362
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HDFS-11362.000.patch, HDFS-11362.001.patch
>
>
> _Storage#shouldReturnNextDir_ method checks if the next Storage directory is 
> of the same type as dirType.
> {noformat}
> private boolean shouldReturnNextDir() {
>   StorageDirectory sd = getStorageDir(nextIndex);
>   return (dirType == null || sd.getStorageDirType().isOfType(dirType)) &&
>   (includeShared || !sd.isShared());
> }
> {noformat}
> There is a possibility that sd.getStorageDirType() returns null (default 
> dirType is null). Hence, before checking for type match, we should make sure 
> that the value returned by sd.getStorageDirType() is not null.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11362) StorageDirectory should initialize a non-null default StorageDirType

2017-04-06 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11362:
--
Summary: StorageDirectory should initialize a non-null default 
StorageDirType  (was: Storage#shouldReturnNextDir should check for null dirType)

> StorageDirectory should initialize a non-null default StorageDirType
> 
>
> Key: HDFS-11362
> URL: https://issues.apache.org/jira/browse/HDFS-11362
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Attachments: HDFS-11362.000.patch, HDFS-11362.001.patch
>
>
> _Storage#shouldReturnNextDir_ method checks if the next Storage directory is 
> of the same type as dirType.
> {noformat}
> private boolean shouldReturnNextDir() {
>   StorageDirectory sd = getStorageDir(nextIndex);
>   return (dirType == null || sd.getStorageDirType().isOfType(dirType)) &&
>   (includeShared || !sd.isShared());
> }
> {noformat}
> There is a possibility that sd.getStorageDirType() returns null (default 
> dirType is null). Hence, before checking for type match, we should make sure 
> that the value returned by sd.getStorageDirType() is not null.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11362) Storage#shouldReturnNextDir should check for null dirType

2017-04-06 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15959749#comment-15959749
 ] 

Xiaoyu Yao commented on HDFS-11362:
---

Thanks [~hanishakoneru] for the update. +1 for the v001 patch. I will commit it 
shortly.

We should refactor StorageDirectory with a Builder pattern to avoid so many 
different constructors. This can be handled in a separate JIRA.
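
For what it's worth, a rough sketch of that Builder idea; the names, the 
default, and the constructor it delegates to are illustrative, not from an 
actual patch:

{code}
import java.io.File;

import org.apache.hadoop.hdfs.server.common.Storage;
import org.apache.hadoop.hdfs.server.namenode.NNStorage;

/** Hypothetical Builder: one construction path, always a non-null dirType. */
class StorageDirectoryBuilder {
  private File root;
  private Storage.StorageDirType dirType = NNStorage.NameNodeDirType.UNDEFINED;
  private boolean shared = false;

  StorageDirectoryBuilder setRoot(File root) {
    this.root = root;
    return this;
  }

  StorageDirectoryBuilder setDirType(Storage.StorageDirType type) {
    if (type != null) {  // never let a null type slip back in
      this.dirType = type;
    }
    return this;
  }

  StorageDirectoryBuilder setShared(boolean shared) {
    this.shared = shared;
    return this;
  }

  Storage.StorageDirectory build() {
    return new Storage.StorageDirectory(root, dirType, shared);
  }
}
{code}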

> Storage#shouldReturnNextDir should check for null dirType
> -
>
> Key: HDFS-11362
> URL: https://issues.apache.org/jira/browse/HDFS-11362
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Attachments: HDFS-11362.000.patch, HDFS-11362.001.patch
>
>
> _Storage#shouldReturnNextDir_ method checks if the next Storage directory is 
> of the same type as dirType.
> {noformat}
> private boolean shouldReturnNextDir() {
>   StorageDirectory sd = getStorageDir(nextIndex);
>   return (dirType == null || sd.getStorageDirType().isOfType(dirType)) &&
>   (includeShared || !sd.isShared());
> }
> {noformat}
> There is a possibility that sd.getStorageDirType() returns null (default 
> dirType is null). Hence, before checking for type match, we should make sure 
> that the value returned by sd.getStorageDirType() is not null.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11608) HDFS write crashed in the case of huge block size

2017-04-06 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15959739#comment-15959739
 ] 

Arpit Agarwal commented on HDFS-11608:
--

+1 for the v3 patch pending Jenkins. Thanks for adding the unit test Xiaobing!

> HDFS write crashed in the case of huge block size
> -
>
> Key: HDFS-11608
> URL: https://issues.apache.org/jira/browse/HDFS-11608
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Critical
> Attachments: HDFS-11608.000.patch, HDFS-11608.001.patch, 
> HDFS-11608.002.patch, HDFS-11608.003.patch
>
>
> We've seen HDFS write crashes in the case of huge block sizes. For example, 
> when writing a 3G file using a 3G block size, the HDFS client throws an 
> out-of-memory exception and the DataNode gives an IOException. After raising 
> the heap size limit, a DFSOutputStream ResponseProcessor exception is seen, 
> followed by a broken pipe and pipeline recovery.
> Given below is the DN exception:
> {noformat}
> 2017-03-30 16:34:33,828 ERROR datanode.DataNode (DataXceiver.java:run(278)) - 
> c6401.ambari.apache.org:50010:DataXceiver error processing WRITE_BLOCK 
> operation  src: /192.168.64.101:47167 dst: /192.168.64.101:50010
> java.io.IOException: Incorrect value for packet payload size: 2147483128
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:159)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11630) TestThrottledAsyncCheckerTimeout fails intermittently in Jenkins builds

2017-04-06 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-11630:
--
Attachment: HDFS-11630.001.patch

The test was waiting for exactly the same amount of time as the 
DiskCheckTimeout value before checking for the callback, which made it 
possible for the callback to fire only after the check.
To avoid this, we need to wait longer than the timeout before checking for the 
callback.
Ran hundreds of iterations locally and they all passed.
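
For reference, the waiting pattern could look like the sketch below; the 
counter and timeout names are assumed from the test, not copied verbatim:

{code}
// Poll for the failure callback instead of sleeping exactly the timeout;
// allow well past DISK_CHECK_TIMEOUT so a slow callback still lands in time.
GenericTestUtils.waitFor(
    () -> numCallbackInvocationsFailure.get() == 1,
    100,                       // re-check every 100 ms
    10 * DISK_CHECK_TIMEOUT);  // upper bound far beyond the disk check timeout
{code}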

> TestThrottledAsyncCheckerTimeout fails intermittently in Jenkins builds
> ---
>
> Key: HDFS-11630
> URL: https://issues.apache.org/jira/browse/HDFS-11630
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-11630.001.patch
>
>
> TestThrottledAsyncCheckerTimeout#testDiskCheckTimeoutInvokesOneCallbackOnly 
> fails intermittently in Jenkins builds. 
> We need to wait for the disk checker timeout to invoke the 
> FutureCallBack#onFailure callback.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10630) Federation State Store FS Implementation

2017-04-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15959726#comment-15959726
 ] 

Hadoop QA commented on HDFS-10630:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 16s{color} 
| {color:red} HDFS-10630 does not apply to HDFS-10467. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-10630 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829486/HDFS-10630-HDFS-10467-003.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18999/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Federation State Store FS Implementation
> 
>
> Key: HDFS-10630
> URL: https://issues.apache.org/jira/browse/HDFS-10630
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Inigo Goiri
>Assignee: Jason Kace
> Attachments: HDFS-10630.001.patch, HDFS-10630.002.patch, 
> HDFS-10630-HDFS-10467-003.patch
>
>
> Interface to store the federation shared state across Routers.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11630) TestThrottledAsyncCheckerTimeout fails intermittently in Jenkins builds

2017-04-06 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15959721#comment-15959721
 ] 

Hanisha Koneru commented on HDFS-11630:
---

Thanks [~arpitagarwal] for reporting this.

> TestThrottledAsyncCheckerTimeout fails intermittently in Jenkins builds
> ---
>
> Key: HDFS-11630
> URL: https://issues.apache.org/jira/browse/HDFS-11630
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>
> TestThrottledAsyncCheckerTimeout#testDiskCheckTimeoutInvokesOneCallbackOnly 
> fails intermittently in Jenkins builds. 
> We need to wait for the disk checker timeout to invoke the 
> FutureCallBack#onFailure callback.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11630) TestThrottledAsyncCheckerTimeout fails intermittently in Jenkins builds

2017-04-06 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDFS-11630:
-

 Summary: TestThrottledAsyncCheckerTimeout fails intermittently in 
Jenkins builds
 Key: HDFS-11630
 URL: https://issues.apache.org/jira/browse/HDFS-11630
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


TestThrottledAsyncCheckerTimeout#testDiskCheckTimeoutInvokesOneCallbackOnly 
fails intermittently in Jenkins builds. 
We need to wait for the disk checker timeout to invoke the 
FutureCallBack#onFailure callback.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10630) File-based Federation State Store

2017-04-06 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HDFS-10630:
---
Summary: File-based Federation State Store  (was: Federation State Store)

> File-based Federation State Store
> -
>
> Key: HDFS-10630
> URL: https://issues.apache.org/jira/browse/HDFS-10630
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Inigo Goiri
>Assignee: Jason Kace
> Attachments: HDFS-10630.001.patch, HDFS-10630.002.patch, 
> HDFS-10630-HDFS-10467-003.patch
>
>
> Interface to store the federation shared state across Routers.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11609) Some blocks can be permanently lost if nodes are decommissioned while dead

2017-04-06 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15959711#comment-15959711
 ] 

Junping Du commented on HDFS-11609:
---

Sounds like a blocker for 2.8.1. [~kihwal] and [~jojochuang], what do you guys 
think?

> Some blocks can be permanently lost if nodes are decommissioned while dead
> --
>
> Key: HDFS-11609
> URL: https://issues.apache.org/jira/browse/HDFS-11609
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Blocker
> Attachments: HDFS-11609.branch-2.patch, HDFS-11609.trunk.patch
>
>
> When all the nodes containing a replica of a block are decommissioned while 
> they are dead, they get decommissioned right away even if there are missing 
> blocks. This behavior was introduced by HDFS-7374.
> The problem starts when those decommissioned nodes are brought back online. 
> The namenode no longer shows missing blocks, which creates a false sense of 
> cluster health. When the decommissioned nodes are removed and reformatted, 
> the block data is permanently lost. The namenode will report missing blocks 
> after the heartbeat recheck interval (e.g. 10 minutes) from the moment the 
> last node is taken down.
> There are multiple issues in the code. As some cause different behaviors in 
> testing vs. production, it took a while to reproduce it in a unit test. I 
> will present an analysis and a proposal soon.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10630) Federation State Store FS Implementation

2017-04-06 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HDFS-10630:
---
Summary: Federation State Store FS Implementation  (was: File-based 
Federation State Store)

> Federation State Store FS Implementation
> 
>
> Key: HDFS-10630
> URL: https://issues.apache.org/jira/browse/HDFS-10630
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Inigo Goiri
>Assignee: Jason Kace
> Attachments: HDFS-10630.001.patch, HDFS-10630.002.patch, 
> HDFS-10630-HDFS-10467-003.patch
>
>
> Interface to store the federation shared state across Routers.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11609) Some blocks can be permanently lost if nodes are decommissioned while dead

2017-04-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-11609:
--
Priority: Blocker  (was: Critical)

> Some blocks can be permanently lost if nodes are decommissioned while dead
> --
>
> Key: HDFS-11609
> URL: https://issues.apache.org/jira/browse/HDFS-11609
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Blocker
> Attachments: HDFS-11609.branch-2.patch, HDFS-11609.trunk.patch
>
>
> When all the nodes containing a replica of a block are decommissioned while 
> they are dead, they get decommissioned right away even if there are missing 
> blocks. This behavior was introduced by HDFS-7374.
> The problem starts when those decommissioned nodes are brought back online. 
> The namenode no longer shows missing blocks, which creates a false sense of 
> cluster health. When the decommissioned nodes are removed and reformatted, 
> the block data is permanently lost. The namenode will report missing blocks 
> after the heartbeat recheck interval (e.g. 10 minutes) from the moment the 
> last node is taken down.
> There are multiple issues in the code. As some cause different behaviors in 
> testing vs. production, it took a while to reproduce it in a unit test. I 
> will present an analysis and a proposal soon.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11558) BPServiceActor thread name is too long

2017-04-06 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-11558:
-
Attachment: HDFS-11558.006.patch

> BPServiceActor thread name is too long
> --
>
> Key: HDFS-11558
> URL: https://issues.apache.org/jira/browse/HDFS-11558
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
>Priority: Minor
> Attachments: HDFS-11558.000.patch, HDFS-11558.001.patch, 
> HDFS-11558.002.patch, HDFS-11558.003.patch, HDFS-11558.004.patch, 
> HDFS-11558.005.patch, HDFS-11558.006.patch
>
>
> Currently, the thread name looks like
> {code}
> 2017-03-20 18:32:22,022 [DataNode: 
> [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0,
>  
> [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]]
>   heartbeating to localhost/127.0.0.1:51772] INFO  ...
> {code}
> which contains the full path for each storage dir.  It is unnecessarily long.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11558) BPServiceActor thread name is too long

2017-04-06 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15959699#comment-15959699
 ] 

Xiaobing Zhou commented on HDFS-11558:
--

Cleared, thanks. Posted v6.

> BPServiceActor thread name is too long
> --
>
> Key: HDFS-11558
> URL: https://issues.apache.org/jira/browse/HDFS-11558
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
>Priority: Minor
> Attachments: HDFS-11558.000.patch, HDFS-11558.001.patch, 
> HDFS-11558.002.patch, HDFS-11558.003.patch, HDFS-11558.004.patch, 
> HDFS-11558.005.patch, HDFS-11558.006.patch
>
>
> Currently, the thread name looks like
> {code}
> 2017-03-20 18:32:22,022 [DataNode: 
> [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0,
>  
> [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]]
>   heartbeating to localhost/127.0.0.1:51772] INFO  ...
> {code}
> which contains the full path for each storage dir.  It is unnecessarily long.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11608) HDFS write crashed in the case of huge block size

2017-04-06 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15959663#comment-15959663
 ] 

Xiaobing Zhou commented on HDFS-11608:
--

Posted v3 with a fix that sets the base dir for the newly created cluster, to 
avoid conflicts over the shared root dir. This resolved the failure. Thanks 
[~vagarychen] for the check.

{code}
dfsConf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR,
  baseDir.getAbsolutePath());
{code}

> HDFS write crashed in the case of huge block size
> -
>
> Key: HDFS-11608
> URL: https://issues.apache.org/jira/browse/HDFS-11608
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Critical
> Attachments: HDFS-11608.000.patch, HDFS-11608.001.patch, 
> HDFS-11608.002.patch, HDFS-11608.003.patch
>
>
> We've seen HDFS write crashes in the case of huge block sizes. For example, 
> when writing a 3G file using a 3G block size, the HDFS client throws an 
> out-of-memory exception and the DataNode gives an IOException. After raising 
> the heap size limit, a DFSOutputStream ResponseProcessor exception is seen, 
> followed by a broken pipe and pipeline recovery.
> Given below is the DN exception:
> {noformat}
> 2017-03-30 16:34:33,828 ERROR datanode.DataNode (DataXceiver.java:run(278)) - 
> c6401.ambari.apache.org:50010:DataXceiver error processing WRITE_BLOCK 
> operation  src: /192.168.64.101:47167 dst: /192.168.64.101:50010
> java.io.IOException: Incorrect value for packet payload size: 2147483128
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:159)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11608) HDFS write crashed in the case of huge block size

2017-04-06 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-11608:
-
Attachment: HDFS-11608.003.patch

> HDFS write crashed in the case of huge block size
> -
>
> Key: HDFS-11608
> URL: https://issues.apache.org/jira/browse/HDFS-11608
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Critical
> Attachments: HDFS-11608.000.patch, HDFS-11608.001.patch, 
> HDFS-11608.002.patch, HDFS-11608.003.patch
>
>
> We've seen HDFS write crashes in the case of huge block sizes. For example, 
> when writing a 3G file using a 3G block size, the HDFS client throws an 
> out-of-memory exception and the DataNode gives an IOException. After raising 
> the heap size limit, a DFSOutputStream ResponseProcessor exception is seen, 
> followed by a broken pipe and pipeline recovery.
> Given below is the DN exception:
> {noformat}
> 2017-03-30 16:34:33,828 ERROR datanode.DataNode (DataXceiver.java:run(278)) - 
> c6401.ambari.apache.org:50010:DataXceiver error processing WRITE_BLOCK 
> operation  src: /192.168.64.101:47167 dst: /192.168.64.101:50010
> java.io.IOException: Incorrect value for packet payload size: 2147483128
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:159)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10882) Federation State Store Interface API

2017-04-06 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HDFS-10882:
---
Attachment: HDFS-10882-HDFS-10467-005.patch

Tackling comments from [~chris.douglas] and [~subru]:
* Cleaned up {{StateStoreSerializer}}
* Changed {{StateStoreSerializerPBImpl}} to use {{ReflectionUtils}}
* Moved the notion of a required record into the class of the record being 
stored
* Removed the version from {{RecordStore}}
* Removed redundant operations from {{RecordStore}} and exposed them through 
{{getDriver()}} directly (see the sketch below)
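
To make the last point concrete, a rough sketch of the slimmed-down 
{{RecordStore}}; the shape is inferred from this discussion (with 
{{BaseRecord}} standing in for the record base class), not the actual patch:

{code}
/** Assumed sketch: RecordStore keeps only the driver handle. */
public abstract class RecordStore<R extends BaseRecord> {

  private final StateStoreDriver driver;

  protected RecordStore(StateStoreDriver driver) {
    this.driver = driver;
  }

  /** CRUD goes through the driver directly instead of duplicate wrappers. */
  public StateStoreDriver getDriver() {
    return driver;
  }
}
{code}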

> Federation State Store Interface API
> 
>
> Key: HDFS-10882
> URL: https://issues.apache.org/jira/browse/HDFS-10882
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Jason Kace
>Assignee: Jason Kace
> Attachments: HDFS-10882-HDFS-10467-001.patch, 
> HDFS-10882-HDFS-10467-002.patch, HDFS-10882-HDFS-10467-003.patch, 
> HDFS-10882-HDFS-10467-004.patch, HDFS-10882-HDFS-10467-005.patch
>
>
> The minimal classes and interfaces required to create state store internal 
> data APIs using protobuf serialization.  This is a pre-requisite for higher 
> level APIs such as the registration API and the mount table API.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11362) Storage#shouldReturnNextDir should check for null dirType

2017-04-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15959649#comment-15959649
 ] 

Hadoop QA commented on HDFS-11362:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 55s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 88m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11362 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12862340/HDFS-11362.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9d9d709a1540 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1a9439e |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18995/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18995/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18995/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Storage#shouldReturnNextDir should check for null dirType
> -
>
> Key: HDFS-11362
> URL: https://issues.apache.org/jira/browse/HDFS-11362
> Project: Hadoop HDFS
>  

[jira] [Commented] (HDFS-10882) Federation State Store Interface API

2017-04-06 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15959504#comment-15959504
 ] 

Subru Krishnan commented on HDFS-10882:
---

[~elgoiri]/[~chris.douglas], apologies for jumping in late. I looked at it and 
have a couple of comments on {{RecordStore}}:
  * _version_ attribute should be removed based on our consensus in HDFS-10881.
  * I feel the operations are redundant here as they are already part of the 
*StateStoreDriver*.

> Federation State Store Interface API
> 
>
> Key: HDFS-10882
> URL: https://issues.apache.org/jira/browse/HDFS-10882
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Jason Kace
>Assignee: Jason Kace
> Attachments: HDFS-10882-HDFS-10467-001.patch, 
> HDFS-10882-HDFS-10467-002.patch, HDFS-10882-HDFS-10467-003.patch, 
> HDFS-10882-HDFS-10467-004.patch
>
>
> The minimal classes and interfaces required to create state store internal 
> data APIs using protobuf serialization.  This is a pre-requisite for higher 
> level APIs such as the registration API and the mount table API.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11618) Block Storage: Add Support for Direct I/O

2017-04-06 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15959499#comment-15959499
 ] 

Chen Liang commented on HDFS-11618:
---

Thanks [~msingh] for working on this! +1 for the v002 patch. I ran the failed 
unit tests; the related CBlock tests all passed, so the failures should be 
unrelated.

> Block Storage: Add Support for Direct I/O
> -
>
> Key: HDFS-11618
> URL: https://issues.apache.org/jira/browse/HDFS-11618
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11618-HDFS-7240.001.patch, 
> HDFS-11618-HDFS-7240.002.patch
>
>
> Currently Block Storage write the data to a leveldb Cache and then flushes 
> the data to the containers. This behavior should be configurable and support 
> should be added to write the data directly to the containers.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11362) Storage#shouldReturnNextDir should check for null dirType

2017-04-06 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-11362:
--
Attachment: HDFS-11362.001.patch

Thank you [~xiaobingo], [~vagarychen] and [~xyao] for the reviews.
I have updated patch v01 to change the default dirType to UNDEFINED.

> Storage#shouldReturnNextDir should check for null dirType
> -
>
> Key: HDFS-11362
> URL: https://issues.apache.org/jira/browse/HDFS-11362
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Attachments: HDFS-11362.000.patch, HDFS-11362.001.patch
>
>
> _Storage#shouldReturnNextDir_ method checks if the next Storage directory is 
> of the same type as dirType.
> {noformat}
> private boolean shouldReturnNextDir() {
>   StorageDirectory sd = getStorageDir(nextIndex);
>   return (dirType == null || sd.getStorageDirType().isOfType(dirType)) &&
>   (includeShared || !sd.isShared());
> }
> {noformat}
> There is a possibility that sd.getStorageDirType() returns null (default 
> dirType is null). Hence, before checking for type match, we should make sure 
> that the value returned by sd.getStorageDirType() is not null.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11334) [SPS]: NN switch and rescheduling movements can lead to have more than one coordinator for same file blocks

2017-04-06 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15959441#comment-15959441
 ] 

Rakesh R commented on HDFS-11334:
-

[~umamaheswararao] Many tests are failing due to the SPS stop timeout. 
Probably we could resolve the HDFS-11338 jira first.

> [SPS]: NN switch and rescheduling movements can lead to have more than one 
> coordinator for same file blocks
> ---
>
> Key: HDFS-11334
> URL: https://issues.apache.org/jira/browse/HDFS-11334
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Uma Maheswara Rao G
>Assignee: Rakesh R
> Fix For: HDFS-10285
>
> Attachments: HDFS-11334-HDFS-10285-00.patch, 
> HDFS-11334-HDFS-10285-01.patch, HDFS-11334-HDFS-10285-02.patch
>
>
> I am summarizing here the scenarios that Rakesh and I discussed offline:
> Here we need to handle a couple of cases:
> # NN switch - it will freshly start scheduling for all files.
>At this time, old co-ordinators may continue movement work and send 
> results back. This could confuse the NN SPS as to which result is the right 
> one.
>   *NEED TO HANDLE*
> # DN disconnected past heartbeat expiry - if a DN is disconnected for a long 
> time (more than the heartbeat expiry), the NN will remove the node. After 
> the SPS Monitor times out, it may retry the files which were scheduled to 
> that DN by finding a new co-ordinator. But if the DN reconnects after the NN 
> reschedules, it may lead to different results from different co-ordinators.
> *NEED TO HANDLE*
> # NN restart - should be the same as point 1.
> # DN disconnect - when a DN disconnects and reconnects immediately (before 
> the heartbeat expiry), there should not be any issues.
> *NEED NOT HANDLE*, but can think of more scenarios if anything is missing
> # DN restart - if a DN restarts, it cannot send any results as it will lose 
> everything. After the NN SPS Monitor timeout, it will retry.
> *NEED NOT HANDLE*, but can think of more scenarios if anything is missing



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11338) [SPS]: Fix timeout issue in unit tests caused by longer NN down time

2017-04-06 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15959436#comment-15959436
 ] 

Rakesh R commented on HDFS-11338:
-

[~umamaheswararao], [~zhouwei], many tests are failing due to timeouts. This is 
due to the extra join timeout added to gracefully shut down the SPS threads.

{code}
BlockStorageMovementAttemptedItems.java
  timerThread.interrupt();
  try {
timerThread.join(3000);
  } catch (InterruptedException ie) {
  }
{code}

The snippet below is already mentioned in the jira description.
{code}
StoragePolicySatisfier.java
storagePolicySatisfierThread.interrupt();
try {
  storagePolicySatisfierThread.join(3000);
} catch (InterruptedException ie) {
}
{code}

Presently, we enable the SPS feature by default, so all these test cases are 
affected and spend extra time while stopping the server. IMHO, we could 
disable SPS for these test cases rather than increase the overall testing 
time; a minimal sketch of that change follows.
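
A minimal sketch of that test-side change, assuming the branch exposes an SPS 
enable flag (the key name below is an assumption; verify the exact constant 
before use):

{code}
// Turn SPS off for tests that do not exercise it, so NN shutdown skips the
// extra 3s thread joins. Key name assumed; check DFSConfigKeys on the branch.
Configuration conf = new HdfsConfiguration();
conf.setBoolean("dfs.storage.policy.satisfier.activate", false);
MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build();
{code}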


> [SPS]: Fix timeout issue in unit tests caused by longer NN down time
> -
>
> Key: HDFS-11338
> URL: https://issues.apache.org/jira/browse/HDFS-11338
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Wei Zhou
>Assignee: Wei Zhou
> Attachments: HDFS-11338-HDFS-10285.00.patch, 
> HDFS-11338-HDFS-10285.01.patch
>
>
> As discussed in HDFS-11186, it takes longer to stop NN:
> {code}
> try {
>   storagePolicySatisfierThread.join(3000);
> } catch (InterruptedException ie) {
> }
> {code}
> So, it takes a longer time to finish some tests, and this leads to the timeout 
> failures.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11622) TraceId hardcoded to 0 in DataStreamer, correlation between multiple spans is lost

2017-04-06 Thread Karan Mehta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15959433#comment-15959433
 ] 

Karan Mehta commented on HDFS-11622:


I understood the following use case for such a requirement [non-RPC spans and 
mapping to multiple parents| 
https://github.com/opentracing/specification/issues/5].
{quote}
Another example is in HBase. HBase has a write-ahead log, where it does "group 
commit." In other words, if HBase gets requests A, B, and C, it does a single 
write-ahead log write for all of them. The WAL writes can be time-consuming 
since they involve writing to an HDFS stream, which could be slow for any 
number of reasons (network, error handling, GC, etc.).
{quote} 

Since requests A, B and C can be started independently, they will be assigned 
different trace IDs as well as span IDs. The WAL write will be a single 
operation for all of them: one span containing multiple parents, one pointing 
to each request. I am unclear about the use of the trace ID at this point if 
all of them can be easily traced via their parent span IDs. Even in cases 
where the trace doesn't form a DAG and is a linearly growing span, the 
information can still be tracked via the parent span ID.

Although we have multiple parents, the way it should work is that all of them 
relate to the same span ID. The commented-out code for future use in the 
description suggests that all the parents will be available when the 
{{dataStreamer}} span starts. The {{DFSPacket}} initializes the parents field 
when it is dumping the data into {{dataQueue}} with the line 
{{packet.addTraceParent(Tracer.getCurrentSpanId())}}, thus getting the current 
span from the {{ThreadLocal}}. At this point, I feel that we can also grab the 
trace ID and add that info inside the {{DFSPacket}}. Any thoughts on this one?
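
To sketch the proposal (the field and accessor names here are hypothetical, 
not from any patch):

{code}
// Inside DFSPacket (hypothetical): record the trace id next to the parents.
private long parentTraceId;  // proposed new field

public void setParentTraceId(long traceId) {
  this.parentTraceId = traceId;
}

public long getParentTraceId() {
  return parentTraceId;
}

// Then in DataStreamer#run(), replace the hardcoded 0 with the stored id:
scope = Trace.startSpan("dataStreamer",
    new TraceInfo(one.getParentTraceId(), parents[0]));
{code}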

> TraceId hardcoded to 0 in DataStreamer, correlation between multiple spans is 
> lost
> --
>
> Key: HDFS-11622
> URL: https://issues.apache.org/jira/browse/HDFS-11622
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tracing
>Reporter: Karan Mehta
>
> In the {{run()}} method of {{DataStreamer}} class, the following code is 
> written. {{parents\[0\]}} refer to the {{spanId}} of the parent span.
> {code}
>   one = dataQueue.getFirst(); // regular data packet
>   long parents[] = one.getTraceParents();
>   if (parents.length > 0) {
>  scope = Trace.startSpan("dataStreamer", new TraceInfo(0, 
> parents[0]));
> // TODO: use setParents API once it's available from HTrace 
> 3.2
> // scope = Trace.startSpan("dataStreamer", Sampler.ALWAYS);
> // scope.getSpan().setParents(parents);
>   }
> {code}
> The {{scope}} starts a new TraceSpan with a traceId hardcoded to 0. Ideally 
> it should be taken when {{currentPacket.addTraceParent(Trace.currentSpan())}} 
> is invoked. This JIRA is to propose an additional long field inside the 
> {{DFSPacket}} class which holds the parent {{traceId}}. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11558) BPServiceActor thread name is too long

2017-04-06 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15959017#comment-15959017
 ] 

Tsz Wo Nicholas Sze commented on HDFS-11558:


> After that, nameserviceId is never null. So that we can remove the null check 
> in formatThreadName.

Sorry that I may not have been clear -- I meant passing a non-null value (say 
"ns") in TestBPOfferService so that nameserviceId is never null.

> BPServiceActor thread name is too long
> --
>
> Key: HDFS-11558
> URL: https://issues.apache.org/jira/browse/HDFS-11558
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
>Priority: Minor
> Attachments: HDFS-11558.000.patch, HDFS-11558.001.patch, 
> HDFS-11558.002.patch, HDFS-11558.003.patch, HDFS-11558.004.patch, 
> HDFS-11558.005.patch
>
>
> Currently, the thread name looks like
> {code}
> 2017-03-20 18:32:22,022 [DataNode: 
> [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0,
>  
> [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]]
>   heartbeating to localhost/127.0.0.1:51772] INFO  ...
> {code}
> which contains the full path for each storage dir.  It is unnecessarily long.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7343) HDFS smart storage management

2017-04-06 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958923#comment-15958923
 ] 

Rakesh R commented on HDFS-7343:


Thanks [~zhouwei] for more details about the data points.
bq. Create a table to store the info and insert the table name into table 
access_count_table.
It looks like a lot of tables will be created to capture the time period 
details: sec_1...sec_n, min_1...min_n, hour_1...hour_n, day_1...day_n, 
month_1...month_12, etc. I hope these tables will be deleted after performing 
the aggregation functions. Again, it may exhaust the DB by growing the number 
of tables if the aggregation window is long, right? Just a plain thought to 
minimize the number of time-spec tables: how about capturing {{access_time}} 
as a column field and updating the {{access_time}} of the respective {{fid}}? 
I think, using the {{access_time}} attribute, we would be able to filter out 
a specific {{fid_access_count}} between a certain {{start_time}} and 
{{end_time}}.

Table {{seconds_level}} => composite key {{access_time}} and {{fid}} to 
uniquely identify each row in the table.
||access_time||fid||count||
|sec-2017-03-31-12-59-45|3|1|
|sec-2017-03-31-12-59-45|2|1|

Again, for a faster aggregation function we could probably maintain separate 
{{tables per unit of time}} like the ones below. After the aggregate function 
runs, we could delete the rows used for aggregation; a rough SQL sketch of 
this roll-up is shown after the list.

(1) seconds_level
(2) minutes_level
(3) hours_level
(4) days_level
(5) weeks_level
(6) months_level
(7) years_level
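
As a rough illustration of the roll-up plus cleanup step against the layout 
above (the SQL dialect, key format, and JDBC plumbing are assumptions):

{code}
// Roll per-second rows up into minutes_level, then prune what was consumed.
// Keys like "sec-2017-03-31-12-59-45"; SUBSTR(access_time, 5, 16) keeps the
// minute prefix "2017-03-31-12-59". Requires a java.sql.Connection conn.
try (Statement stmt = conn.createStatement()) {
  stmt.executeUpdate(
      "INSERT INTO minutes_level (access_time, fid, count) "
          + "SELECT 'min-' || SUBSTR(access_time, 5, 16), fid, SUM(count) "
          + "FROM seconds_level "
          + "GROUP BY SUBSTR(access_time, 5, 16), fid");
  // Simplification: a real job would delete only the aggregated window.
  stmt.executeUpdate("DELETE FROM seconds_level");
}
{code}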

> HDFS smart storage management
> -
>
> Key: HDFS-7343
> URL: https://issues.apache.org/jira/browse/HDFS-7343
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: Wei Zhou
> Attachments: access_count_tables.jpg, 
> HDFSSmartStorageManagement-General-20170315.pdf, 
> HDFS-Smart-Storage-Management.pdf, 
> HDFSSmartStorageManagement-Phase1-20170315.pdf, 
> HDFS-Smart-Storage-Management-update.pdf, move.jpg, tables_in_ssm.xlsx
>
>
> As discussed in HDFS-7285, it would be better to have a comprehensive and 
> flexible storage policy engine considering file attributes, metadata, data 
> temperature, storage type, EC codec, available hardware capabilities, 
> user/application preference and etc.
> Modified the title for re-purpose.
> We'd extend this effort some bit and aim to work on a comprehensive solution 
> to provide smart storage management service in order for convenient, 
> intelligent and effective utilizing of erasure coding or replicas, HDFS cache 
> facility, HSM offering, and all kinds of tools (balancer, mover, disk 
> balancer and so on) in a large cluster.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11569) Ozone: Implement listKey function for KeyManager

2017-04-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958763#comment-15958763
 ] 

Hadoop QA commented on HDFS-11569:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
59s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 34s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 9 new + 0 unchanged - 0 fixed = 9 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 23s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSUpgradeFromImage |
|   | hadoop.ozone.scm.node.TestContainerPlacement |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11569 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12862264/HDFS-11569-HDFS-7240.006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4410024ac07e 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 7ce1090 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18994/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18994/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18994/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18994/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.




[jira] [Commented] (HDFS-11569) Ozone: Implement listKey function for KeyManager

2017-04-06 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958664#comment-15958664
 ] 

Weiwei Yang commented on HDFS-11569:


The v6 patch is based on the 
[comment|https://issues.apache.org/jira/browse/HDFS-11569?focusedCommentId=15958233=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15958233]
 I gave. [~anu], feel free to comment; I just could not wait another day for 
your confirmation, but the discussion is still open :).

> Ozone: Implement listKey function for KeyManager
> 
>
> Key: HDFS-11569
> URL: https://issues.apache.org/jira/browse/HDFS-11569
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-11569-HDFS-7240.001.patch, 
> HDFS-11569-HDFS-7240.002.patch, HDFS-11569-HDFS-7240.003.patch, 
> HDFS-11569-HDFS-7240.004.patch, HDFS-11569-HDFS-7240.005.patch, 
> HDFS-11569-HDFS-7240.006.patch
>
>
> List keys by prefix from a container. This will need to support pagination 
> for the purpose of small object support, so the listKey function returns 
> something like ListKeyResult, and the client can iterate the object to get 
> paginated results.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11569) Ozone: Implement listKey function for KeyManager

2017-04-06 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-11569:
---
Attachment: HDFS-11569-HDFS-7240.006.patch

> Ozone: Implement listKey function for KeyManager
> 
>
> Key: HDFS-11569
> URL: https://issues.apache.org/jira/browse/HDFS-11569
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-11569-HDFS-7240.001.patch, 
> HDFS-11569-HDFS-7240.002.patch, HDFS-11569-HDFS-7240.003.patch, 
> HDFS-11569-HDFS-7240.004.patch, HDFS-11569-HDFS-7240.005.patch, 
> HDFS-11569-HDFS-7240.006.patch
>
>
> List keys by prefix from a container. This will need to support pagination 
> for the purpose of small object support, so the listKey function returns 
> something like ListKeyResult, and the client can iterate the object to get 
> paginated results.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11569) Ozone: Implement listKey function for KeyManager

2017-04-06 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958657#comment-15958657
 ] 

Weiwei Yang commented on HDFS-11569:


Further, I am thinking about how we handle {{maxNumOfKeys}}: what if the 
number of available keys exceeds this count? E.g. if a user requests all keys 
with the prefix "somekey" and, looking up KSM, there are 1500 keys with this 
prefix, do we return the client 1000 keys and set {{truncated}} to true in 
{{ListKeys}}? How does the client then retrieve the remaining 500 keys?

It looks like we still want pagination on the client side; let's say we make 
{{ListKeys}} enumerable, just like the s3 API 
[http://docs.aws.amazon.com/AmazonS3/latest/dev/ListingObjectKeysUsingJava.html]
 (a sketch follows below). That can be done in a separate jira when we work on 
the implementation of {{DistributedStorageHandler#listKeys}}. What do you 
think?
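
To sketch that enumerable idea (all names here, e.g. {{ListKeysPage}} and 
{{fetchPage}}, are hypothetical stand-ins, not the current Ozone API):

{code:java}
// Hedged sketch: an Iterable that transparently refetches the next page
// when the previous response was truncated. ListKeysPage and fetchPage
// are hypothetical stand-ins for the real Ozone types and RPC.
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;
import java.util.function.BiFunction;

class ListKeysPage {
  final List<String> keys;   // keys in this page
  final boolean truncated;   // server had more than maxNumOfKeys left
  ListKeysPage(List<String> keys, boolean truncated) {
    this.keys = keys;
    this.truncated = truncated;
  }
}

class KeyIterable implements Iterable<String> {
  private final String prefix;
  // (prefix, prevKey) -> next page; stands in for a KSM/handler call.
  private final BiFunction<String, String, ListKeysPage> fetchPage;

  KeyIterable(String prefix,
      BiFunction<String, String, ListKeysPage> fetchPage) {
    this.prefix = prefix;
    this.fetchPage = fetchPage;
  }

  @Override
  public Iterator<String> iterator() {
    return new Iterator<String>() {
      private ListKeysPage page = fetchPage.apply(prefix, null);
      private int pos = 0;

      @Override
      public boolean hasNext() {
        if (pos < page.keys.size()) {
          return true;
        }
        if (!page.truncated || page.keys.isEmpty()) {
          return false;               // server said there is no more
        }
        // Continue after the last key we saw, i.e. prevKey.
        String prevKey = page.keys.get(page.keys.size() - 1);
        page = fetchPage.apply(prefix, prevKey);
        pos = 0;
        return !page.keys.isEmpty();
      }

      @Override
      public String next() {
        if (!hasNext()) {
          throw new NoSuchElementException();
        }
        return page.keys.get(pos++);
      }
    };
  }
}
{code}

A plain for-each loop over such an iterable would then hide the page 
boundaries (and the {{truncated}} flag) from the caller entirely.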

> Ozone: Implement listKey function for KeyManager
> 
>
> Key: HDFS-11569
> URL: https://issues.apache.org/jira/browse/HDFS-11569
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-11569-HDFS-7240.001.patch, 
> HDFS-11569-HDFS-7240.002.patch, HDFS-11569-HDFS-7240.003.patch, 
> HDFS-11569-HDFS-7240.004.patch, HDFS-11569-HDFS-7240.005.patch
>
>
> List keys by prefix from a container. This will need to support pagination 
> for the purpose of small object support, so the listKey function returns 
> something like ListKeyResult, and the client can iterate the object to get 
> paginated results.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11569) Ozone: Implement listKey function for KeyManager

2017-04-06 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958233#comment-15958233
 ] 

Weiwei Yang edited comment on HDFS-11569 at 4/6/17 9:18 AM:


Hi [~anu]

Thanks for your comments; apparently I was missing the handler part, thanks 
for pointing this out. Like you mentioned, I don't think this jira will 
implement {{DistributedStorageHandler#listKeys}}, because the list key 
operation is supposed to route to KSM first and then get the container 
location from SCM. However, KSM is not ready yet, so let's use this jira to 
track the work at the container layer.

About the pagination, you are making a good point. It looks better to simply 
honor the arguments {{prefix}}, {{prevKey}} and {{maxNumOfKeys}}. The default 
value for the max number of keys is 1000, to avoid returning too many entries 
at a time. It works as follows:

# The client makes a listKey request to the ozone front end.
# The ozone handler, i.e. {{DistributedStorageHandler}}, handles the request; 
it gets the arguments from {{ListArgs}}.
# The ozone handler looks up from {{KSM}} the container locations where the 
range of keys resides.
# The ozone handler reads keys from those containers via the {{KeyManager}} 
interface.
# The ozone handler merges the results from multiple containers and returns 
them to the client.

This jira only addresses #4 (a rough sketch of the container-layer listing is 
below); the rest will need to be implemented when KSM is ready. Please let me 
know if this approach looks good to you or not. Thank you.
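
Purely as an illustration of step 4, a sketch of prefix/prevKey/maxNumOfKeys 
listing over a sorted key index; the {{TreeMap}} stands in for the container's 
key store, and {{listKeys}}/{{DEFAULT_MAX_KEYS}} here are illustrative names, 
not the actual {{KeyManager}} API:

{code:java}
// Illustrative only: a TreeMap stands in for the container's sorted key
// index; the real KeyManager would scan its metadata store instead.
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

public class ContainerKeyListing {
  static final int DEFAULT_MAX_KEYS = 1000;   // cap entries per response

  static List<String> listKeys(TreeMap<String, byte[]> index,
      String prefix, String prevKey, int maxNumOfKeys) {
    int max = maxNumOfKeys > 0
        ? Math.min(maxNumOfKeys, DEFAULT_MAX_KEYS) : DEFAULT_MAX_KEYS;
    // Start strictly after prevKey when continuing a previous page.
    String from = (prevKey != null) ? prevKey + '\0' : prefix;
    List<String> result = new ArrayList<>(max);
    for (String key : index.tailMap(from).keySet()) {
      if (!key.startsWith(prefix)) {
        break;            // sorted order: we are past the prefix range
      }
      result.add(key);
      if (result.size() >= max) {
        break;            // caller can use this to set "truncated"
      }
    }
    return result;
  }

  public static void main(String[] args) {
    TreeMap<String, byte[]> index = new TreeMap<>();
    for (int i = 0; i < 5; i++) {
      index.put("somekey-" + i, new byte[0]);
    }
    System.out.println(listKeys(index, "somekey", null, 3));
    System.out.println(listKeys(index, "somekey", "somekey-2", 3));
  }
}
{code}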


was (Author: cheersyang):
Hi [~anu]

Thanks for your comments; apparently I missed the web handler part. I will 
implement this. And I noticed the deleteKey function has been implemented in 
{{KeyManagerImpl}} but not yet in {{DistributedStorageHandler}}; we need to 
get that done too (maybe in another jira), right?

About the pagination, you are making a good point. It looks better to simply 
honor the arguments {{prefix}}, {{prevKey}} and {{maxKeys}}, send them to the 
container layer, and return the desired set of keys. That means we do not 
need pagination on the server side; instead we let the client side request a 
proper size of results. And we set {{maxKeys}} to a default value of 1000. 
Please let me know if this approach looks good to you or not. Thank you.

> Ozone: Implement listKey function for KeyManager
> 
>
> Key: HDFS-11569
> URL: https://issues.apache.org/jira/browse/HDFS-11569
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-11569-HDFS-7240.001.patch, 
> HDFS-11569-HDFS-7240.002.patch, HDFS-11569-HDFS-7240.003.patch, 
> HDFS-11569-HDFS-7240.004.patch, HDFS-11569-HDFS-7240.005.patch
>
>
> List keys by prefix from a container. This will need to support pagination 
> for the purpose of small object support, so the listKey function returns 
> something like ListKeyResult, and the client can iterate the object to get 
> paginated results.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11530) Use HDFS specific network topology to choose datanode in BlockPlacementPolicyDefault

2017-04-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958491#comment-15958491
 ] 

Hadoop QA commented on HDFS-11530:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 12s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 88m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.blockmanagement.TestAvailableSpaceBlockPlacementPolicy 
|
|   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithNodeGroup |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11530 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12862237/HDFS-11530.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f24b1932a8bf 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1a9439e |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18993/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18993/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18993/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HDFS-10848) Move hadoop-hdfs-native-client module into hadoop-hdfs-client

2017-04-06 Thread Huafeng Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958362#comment-15958362
 ] 

Huafeng Wang commented on HDFS-10848:
-

Hi Kai, it's hard to say. I think it will be a big move with a lot of impact. 
Generally, it should contain the protocols and utilities that are used by 
both the client and the server.

> Move hadoop-hdfs-native-client module into hadoop-hdfs-client
> -
>
> Key: HDFS-10848
> URL: https://issues.apache.org/jira/browse/HDFS-10848
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Akira Ajisaka
>Assignee: Huafeng Wang
> Attachments: HDFS-10848.001.patch
>
>
> When a patch changes the hadoop-hdfs-client module, Jenkins does not pick 
> up the tests in the native code. That way we overlooked a test failure when 
> committing a patch. (ex. HDFS-10844)
> [~aw] said in HDFS-10844,
> bq. Ideally, all of this native code would be hdfs-client. Then when a 
> change is made to that code, this code will also get tested.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11530) Use HDFS specific network topology to choose datanode in BlockPlacementPolicyDefault

2017-04-06 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958352#comment-15958352
 ] 

Yiqun Lin edited comment on HDFS-11530 at 4/6/17 6:13 AM:
--

Thanks for your analysis, [~vagarychen]!
I think there is no need to use {{DFSNetworkTopology}} in {{Dispatcher}}. As 
I look into the code, the {{NetworkTopology}} used in class {{Dispatcher}} is 
completely independent: it adds the nodes into the topology in the 
{{Dispatcher#init}} method. In addition, the method 
{{NetworkTopology#isNodeGroupAware()}} will also be invoked in {{Dispatcher}}.
Attached a new patch to address this and fix the checkstyle warning.


was (Author: linyiqun):
Thanks for your analysis, [~vagarychen]!
I think there is no need to use {{DFSNetworkTopology}} in {{Dispatcher}}. As I 
look into the code, the {{NetworkTopology}} that used in class {{Dispatcher}} 
is completely independent. It will add the nodes into topology in the 
{{.Dispatcher#init}} method. In addition, the method 
{{NetworkTopology#isNodeGroupAware()}} will also be invoked in {{Dispatcher}}.
Attach the new patch to address this and fix checkstyle warning.

> Use HDFS specific network topology to choose datanode in 
> BlockPlacementPolicyDefault
> 
>
> Key: HDFS-11530
> URL: https://issues.apache.org/jira/browse/HDFS-11530
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-11530.001.patch, HDFS-11530.002.patch, 
> HDFS-11530.003.patch, HDFS-11530.004.patch, HDFS-11530.005.patch
>
>
> The work for {{chooseRandomWithStorageType}} has been merged in HDFS-11482, 
> but this method is contained in the new topology class 
> {{DFSNetworkTopology}}, which is specific to HDFS. We should update this and 
> let {{BlockPlacementPolicyDefault}} use the new way, since the original way 
> is inefficient.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9806) Allow HDFS block replicas to be provided by an external storage system

2017-04-06 Thread Thomas Demoor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958353#comment-15958353
 ] 

Thomas Demoor commented on HDFS-9806:
-

We will post an updated design doc next week.

Quick status update: 
* General infrastructure, protocol changes and read path are almost done
* Write path and dynamic mounting are ongoing

> Allow HDFS block replicas to be provided by an external storage system
> --
>
> Key: HDFS-9806
> URL: https://issues.apache.org/jira/browse/HDFS-9806
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Chris Douglas
> Attachments: HDFS-9806-design.001.pdf
>
>
> In addition to heterogeneous media, many applications work with heterogeneous 
> storage systems. The guarantees and semantics provided by these systems are 
> often similar, but not identical to those of 
> [HDFS|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/index.html].
>  Any client accessing multiple storage systems is responsible for reasoning 
> about each system independently, and must propagate and renew credentials for 
> each store.
> Remote stores could be mounted under HDFS. Block locations could be mapped to 
> immutable file regions, opaque IDs, or other tokens that represent a 
> consistent view of the data. While correctness for arbitrary operations 
> requires careful coordination between stores, in practice we can provide 
> workable semantics with weaker guarantees.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11530) Use HDFS specific network topology to choose datanode in BlockPlacementPolicyDefault

2017-04-06 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11530:
-
Attachment: HDFS-11530.005.patch

Thanks for your analysis, [~vagarychen]!
I think there is no need to use {{DFSNetworkTopology}} in {{Dispatcher}}. As 
I look into the code, the {{NetworkTopology}} used in class {{Dispatcher}} is 
completely independent: it adds the nodes into the topology in the 
{{Dispatcher#init}} method. In addition, the method 
{{NetworkTopology#isNodeGroupAware()}} will also be invoked in {{Dispatcher}}.
Attached a new patch to address this and fix the checkstyle warning.

> Use HDFS specific network topology to choose datanode in 
> BlockPlacementPolicyDefault
> 
>
> Key: HDFS-11530
> URL: https://issues.apache.org/jira/browse/HDFS-11530
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0-alpha2
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-11530.001.patch, HDFS-11530.002.patch, 
> HDFS-11530.003.patch, HDFS-11530.004.patch, HDFS-11530.005.patch
>
>
> The work for {{chooseRandomWithStorageType}} has been merged in HDFS-11482, 
> but this method is contained in the new topology class 
> {{DFSNetworkTopology}}, which is specific to HDFS. We should update this and 
> let {{BlockPlacementPolicyDefault}} use the new way, since the original way 
> is inefficient.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org