[jira] [Commented] (HDFS-13386) RBF: Wrong date information in list file(-ls) result

2018-04-09 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431750#comment-16431750
 ] 

Íñigo Goiri commented on HDFS-13386:


Thanks [~dibyendu_hadoop] for the reply.
You are right, comparing the time with the NN report is the right thing.
Go ahead with the other changes.

> RBF: Wrong date information in list file(-ls) result
> 
>
> Key: HDFS-13386
> URL: https://issues.apache.org/jira/browse/HDFS-13386
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Minor
> Attachments: HDFS-13386-002.patch, HDFS-13386-003.patch, 
> HDFS-13386-004.patch, HDFS-13386.000.patch, HDFS-13386.001.patch, 
> image-2018-04-03-11-59-51-623.png
>
>
> # hdfs dfs -ls 
> !image-2018-04-03-11-59-51-623.png!
> This is happening because {{getMountPointDates}} is not implemented:
> {code:java}
> private Map<String, Long> getMountPointDates(String path) {
>   Map<String, Long> ret = new TreeMap<>();
>   // TODO add when we have a Mount Table
>   return ret;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13418) NetworkTopology should be configurable when enable DFSNetworkTopology

2018-04-09 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431728#comment-16431728
 ] 

Tao Jie commented on HDFS-13418:


[~linyiqun] Thank you for your comment. It feels a little tricky that 
{{net.topology.impl}} is configured but has no effect when 
{{dfs.use.dfs.network.topology}} is true.
 1. Can we just remove 
{{net.topology.impl=org.apache.hadoop.net.NetworkTopology}} from 
core-default.xml, since we have a default impl in code?
 2. Can we set {{net.topology.impl=org.apache.hadoop.net.DFSNetworkTopology}} 
in hdfs-default.xml? It would then override the value from core-default.xml 
when we use HDFS.
 3. If we want to add a new NetworkTopology impl that works with 
{{DFSNetworkTopology}}, we could define the new NetworkTopology to extend 
{{DFSNetworkTopology}}. But that does not work, since {{DFSNetworkTopology}} is 
hardcoded here once {{dfs.use.dfs.network.topology}} is true. Can we use 
reflection to instantiate the NetworkTopology impl when 
{{dfs.use.dfs.network.topology}} is set to true? A rough sketch is below.
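
As a rough illustration of option 3 (a sketch only; {{dfs.net.topology.impl}} 
is a hypothetical key, and the constructor requirements of 
{{DFSNetworkTopology}} are glossed over):
{code:java}
// Hypothetical sketch: resolve the topology class from configuration instead
// of hardcoding it; any configured class must extend DFSNetworkTopology.
Class<? extends DFSNetworkTopology> clazz = conf.getClass(
    "dfs.net.topology.impl",          // hypothetical key, for illustration
    DFSNetworkTopology.class,         // default when the key is unset
    DFSNetworkTopology.class);
networktopology = ReflectionUtils.newInstance(clazz, conf);
{code}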

>  NetworkTopology should be configurable when enable DFSNetworkTopology
> --
>
> Key: HDFS-13418
> URL: https://issues.apache.org/jira/browse/HDFS-13418
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.1
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Major
>
> In HDFS-11530 we introduced DFSNetworkTopology, and in HDFS-11998 we made 
> DFSNetworkTopology the default implementation.
> We still have {{net.topology.impl=org.apache.hadoop.net.NetworkTopology}} in 
> core-default.xml. Actually this property has no effect once 
> {{dfs.use.dfs.network.topology}} is true.
> In {{DatanodeManager}}, networkTopology is initialized as:
> {code}
> if (useDfsNetworkTopology) {
>   networktopology = DFSNetworkTopology.getInstance(conf);
> } else {
>   networktopology = NetworkTopology.getInstance(conf);
> }
> {code}
> I think we should still make the NetworkTopology configurable rather than 
> hard-code the implementation, since we may need another NetworkTopology impl.
> I am not sure if there are other considerations. Any thoughts? [~vagarychen] 
> [~linyiqun]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13329) Add/ Update disk space counters for trash (trash used, disk remaining etc.)

2018-04-09 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431725#comment-16431725
 ] 

Bharat Viswanadham commented on HDFS-13329:
---

Attached patch v04 to fix the compilation issue.

> Add/ Update disk space counters for trash (trash used, disk remaining etc.) 
> 
>
> Key: HDFS-13329
> URL: https://issues.apache.org/jira/browse/HDFS-13329
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13329-HDFS-12996.01.patch, 
> HDFS-13329-HDFS-12996.02.patch, HDFS-13329-HDFS-12996.03.patch, 
> HDFS-13329-HDFS-12996.04.patch
>
>
> Add 3 more counters required for the datanode replica trash:
>  # diskAvailable
>  # replicaTrashUsed
>  # replicaTrashRemaining
> For more info on these counters, refer to the design document uploaded in HDFS-12996.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13329) Add/ Update disk space counters for trash (trash used, disk remaining etc.)

2018-04-09 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-13329:
--
Attachment: HDFS-13329-HDFS-12996.04.patch

> Add/ Update disk space counters for trash (trash used, disk remaining etc.) 
> 
>
> Key: HDFS-13329
> URL: https://issues.apache.org/jira/browse/HDFS-13329
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13329-HDFS-12996.01.patch, 
> HDFS-13329-HDFS-12996.02.patch, HDFS-13329-HDFS-12996.03.patch, 
> HDFS-13329-HDFS-12996.04.patch
>
>
> Add 3 more counters required for the datanode replica trash:
>  # diskAvailable
>  # replicaTrashUsed
>  # replicaTrashRemaining
> For more info on these counters, refer to the design document uploaded in HDFS-12996.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13243) Get CorruptBlock because of calling close and sync in same time

2018-04-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431686#comment-16431686
 ] 

genericqa commented on HDFS-13243:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
53s{color} | {color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 53s{color} 
| {color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 11s{color} | {color:orange} hadoop-hdfs-project: The patch generated 114 new 
+ 853 unchanged - 2 fixed = 967 total (was 855) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
31s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
27s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
42s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 9 new + 1 
unchanged - 0 fixed = 10 total (was 1) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
25s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 31s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13243 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918302/HDFS-13243-v5.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7d90301283d0 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0006346 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | 

[jira] [Commented] (HDFS-13329) Add/ Update disk space counters for trash (trash used, disk remaining etc.)

2018-04-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431685#comment-16431685
 ] 

genericqa commented on HDFS-13329:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-12996 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  6m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
 5s{color} | {color:green} HDFS-12996 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 31m 
49s{color} | {color:green} HDFS-12996 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
52s{color} | {color:green} HDFS-12996 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
49s{color} | {color:green} HDFS-12996 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
39s{color} | {color:green} HDFS-12996 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
34s{color} | {color:green} HDFS-12996 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
11s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
11s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m 
16s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  1m 16s{color} | 
{color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 16s{color} 
| {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 11s{color} | {color:orange} root: The patch generated 6 new + 930 unchanged 
- 3 fixed = 936 total (was 933) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
14s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
13s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
15s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
12s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
14s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
12s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
12s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
41s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 12s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 12s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | 

[jira] [Commented] (HDFS-7101) Potential null dereference in DFSck#doWork()

2018-04-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431682#comment-16431682
 ] 

genericqa commented on HDFS-7101:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 31m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}111m  0s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}184m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSClientRetries |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-7101 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12783961/HDFS-7101.v1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 46574dce52f3 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0006346 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23850/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-13243) Get CorruptBlock because of calling close and sync in same time

2018-04-09 Thread Zephyr Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431670#comment-16431670
 ] 

Zephyr Guo commented on HDFS-13243:
---

I have rebased. [~jojochuang]

> Get CorruptBlock because of calling close and sync in same time
> ---
>
> Key: HDFS-13243
> URL: https://issues.apache.org/jira/browse/HDFS-13243
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.2, 3.2.0
>Reporter: Zephyr Guo
>Assignee: Zephyr Guo
>Priority: Critical
> Attachments: HDFS-13243-v1.patch, HDFS-13243-v2.patch, 
> HDFS-13243-v3.patch, HDFS-13243-v4.patch, HDFS-13243-v5.patch
>
>
> An HDFS file might get broken because of corrupt block(s) that can be 
> produced by calling close and sync at the same time.
> When a close call is not successful, the UCBlock status changes to 
> COMMITTED, and if a sync request is then popped from the queue and processed, 
> the sync operation changes the last block length.
> After that, the DataNode reports all received blocks to the NameNode, which 
> checks the block length of all COMMITTED blocks. But the block length 
> recorded in NameNode memory already differs from the one reported by the 
> DataNode, so the last block is marked as corrupted because of the 
> inconsistent length.
>  
> {panel:title=Log in my hdfs}
> 2018-03-05 04:05:39,261 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> allocate blk_1085498930_11758129\{UCState=UNDER_CONSTRUCTION, 
> truncateBlock=null, primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  for 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
> 2018-03-05 04:05:39,760 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> fsync: 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
>  for DFSClient_NONMAPREDUCE_1077513762_1
> 2018-03-05 04:05:39,761 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* 
> blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 2) in 
> file 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
> 2018-03-05 04:05:39,761 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 10.0.0.220:50010 is added to 
> blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  size 2054413
> 2018-03-05 04:05:39,761 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1085498930 added as corrupt on 
> 10.0.0.219:50010 by 
> hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com/10.0.0.219 because block is 
> COMMITTED and reported length 2054413 does not match length in block map 
> 141232
> 2018-03-05 04:05:39,762 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1085498930 added as corrupt on 
> 10.0.0.218:50010 by 
> hb-j5e517al6xib80rkb-004.hbase.rds.aliyuncs.com/10.0.0.218 because block is 
> COMMITTED and reported length 2054413 does not match length in block map 
> 141232
> 2018-03-05 04:05:40,162 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* 
> blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  is not COMPLETE (ucState = COMMITTED, replication# = 3 >= minimum = 2) in 
> file 
> 

[jira] [Commented] (HDFS-13418) NetworkTopology should be configurable when enable DFSNetworkTopology

2018-04-09 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431659#comment-16431659
 ] 

Yiqun Lin commented on HDFS-13418:
--

Hi [~Tao Jie], {{DFSNetworkTopology}} is specific to HDFS, so we don't put it 
into COMMON. But in order to stay compatible with the current NetworkTopology 
impls, we introduced the new setting to control this. If users want to use 
other NetworkTopology impls, they can just set {{dfs.use.dfs.network.topology}} 
to false, as in the sketch below.
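
For example ({{MyNetworkTopology}} is a placeholder for a custom impl):
{code:java}
// Sketch: disable the HDFS-specific topology so DatanodeManager falls back to
// NetworkTopology.getInstance(conf), which honors net.topology.impl.
conf.setBoolean("dfs.use.dfs.network.topology", false);
conf.setClass("net.topology.impl",
    MyNetworkTopology.class, NetworkTopology.class);
{code}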

>  NetworkTopology should be configurable when enable DFSNetworkTopology
> --
>
> Key: HDFS-13418
> URL: https://issues.apache.org/jira/browse/HDFS-13418
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.1
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Major
>
> In HDFS-11530 we introduced DFSNetworkTopology, and in HDFS-11998 we made 
> DFSNetworkTopology the default implementation.
> We still have {{net.topology.impl=org.apache.hadoop.net.NetworkTopology}} in 
> core-default.xml. Actually this property has no effect once 
> {{dfs.use.dfs.network.topology}} is true.
> In {{DatanodeManager}}, networkTopology is initialized as:
> {code}
> if (useDfsNetworkTopology) {
>   networktopology = DFSNetworkTopology.getInstance(conf);
> } else {
>   networktopology = NetworkTopology.getInstance(conf);
> }
> {code}
> I think we should still make the NetworkTopology configurable rather than 
> hard-code the implementation, since we may need another NetworkTopology impl.
> I am not sure if there are other considerations. Any thoughts? [~vagarychen] 
> [~linyiqun]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13419) client can communicate to server even if hdfs delegation token expired

2018-04-09 Thread wangqiang.shen (JIRA)
wangqiang.shen created HDFS-13419:
-

 Summary: client can communicate to server even if hdfs delegation 
token expired
 Key: HDFS-13419
 URL: https://issues.apache.org/jira/browse/HDFS-13419
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: wangqiang.shen


I was testing the HDFS delegation token expiry problem using Spark Streaming. 
If I set my batch interval smaller than 10 sec, my Spark Streaming program 
does not die, but if the batch interval is set bigger than 10 sec, the Spark 
Streaming program dies because of the HDFS delegation token expiry problem, 
with the following exception:

{code:java}
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
 token (HDFS_DELEGATION_TOKEN token 14042 for test) is expired
at org.apache.hadoop.ipc.Client.call(Client.java:1468)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy11.getListing(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:554)
at sun.reflect.GeneratedMethodAccessor40.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy12.getListing(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1969)
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1952)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:693)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:105)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$15.doCall(DistributedFileSystem.java:755)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$15.doCall(DistributedFileSystem.java:751)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:751)
at com.envisioncn.arch.App$2$1.call(App.java:120)
at com.envisioncn.arch.App$2$1.call(App.java:91)
at 
org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:218)
at 
org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:218)
at 
org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:902)
at 
org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:902)
at 
org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1899)
at 
org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1899)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:86)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{code}

The Spark Streaming program only calls the FileSystem.listStatus function in 
every batch:

{code:java}
FileSystem fs = FileSystem.get(new Configuration());
FileStatus[] status = fs.listStatus(new Path("/"));

for (FileStatus status1 : status) {
  System.out.println(status1.getPath());
}
{code}

And I found that when the Hadoop client sends an RPC request to the server, it 
first gets a connection object and sets up the connection if it does not 
already exist. During connection setup it gets a SaslRpcClient to connect to 
the server side, and the server authenticates the client at that stage. But if 
the connection already exists, the client reuses it, so the authentication 
stage does not happen.

The connection between client and server is closed if its idle time exceeds 
ipc.client.connection.maxidletime, whose default value is 10 sec. Therefore, 
if I keep sending requests to the server at a fixed interval smaller than 
10 sec, the connection is never closed, so the delegation token expiry 
problem never shows up.
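
For illustration only, a minimal sketch of shortening that idle timeout on the 
client (a way to surface the expiry sooner, not a recommended fix):
{code:java}
// Sketch: with a shorter idle timeout, connections are torn down and
// re-authenticated sooner, so token expiry surfaces instead of being masked
// by a long-lived, already-authenticated connection.
Configuration conf = new Configuration();
conf.setInt("ipc.client.connection.maxidletime", 1000); // default is 10000 ms
FileSystem fs = FileSystem.get(conf);
{code}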



 



--
This message was 

[jira] [Updated] (HDFS-13243) Get CorruptBlock because of calling close and sync in same time

2018-04-09 Thread Zephyr Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zephyr Guo updated HDFS-13243:
--
Attachment: HDFS-13243-v5.patch

> Get CorruptBlock because of calling close and sync in same time
> ---
>
> Key: HDFS-13243
> URL: https://issues.apache.org/jira/browse/HDFS-13243
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.2, 3.2.0
>Reporter: Zephyr Guo
>Assignee: Zephyr Guo
>Priority: Critical
> Attachments: HDFS-13243-v1.patch, HDFS-13243-v2.patch, 
> HDFS-13243-v3.patch, HDFS-13243-v4.patch, HDFS-13243-v5.patch
>
>
> An HDFS file might get broken because of corrupt block(s) that can be 
> produced by calling close and sync at the same time.
> When a close call is not successful, the UCBlock status changes to 
> COMMITTED, and if a sync request is then popped from the queue and processed, 
> the sync operation changes the last block length.
> After that, the DataNode reports all received blocks to the NameNode, which 
> checks the block length of all COMMITTED blocks. But the block length 
> recorded in NameNode memory already differs from the one reported by the 
> DataNode, so the last block is marked as corrupted because of the 
> inconsistent length.
>  
> {panel:title=Log in my hdfs}
> 2018-03-05 04:05:39,261 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> allocate blk_1085498930_11758129\{UCState=UNDER_CONSTRUCTION, 
> truncateBlock=null, primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  for 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
> 2018-03-05 04:05:39,760 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> fsync: 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
>  for DFSClient_NONMAPREDUCE_1077513762_1
> 2018-03-05 04:05:39,761 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* 
> blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 2) in 
> file 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
> 2018-03-05 04:05:39,761 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 10.0.0.220:50010 is added to 
> blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  size 2054413
> 2018-03-05 04:05:39,761 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1085498930 added as corrupt on 
> 10.0.0.219:50010 by 
> hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com/10.0.0.219 because block is 
> COMMITTED and reported length 2054413 does not match length in block map 
> 141232
> 2018-03-05 04:05:39,762 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1085498930 added as corrupt on 
> 10.0.0.218:50010 by 
> hb-j5e517al6xib80rkb-004.hbase.rds.aliyuncs.com/10.0.0.218 because block is 
> COMMITTED and reported length 2054413 does not match length in block map 
> 141232
> 2018-03-05 04:05:40,162 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* 
> blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  is not COMPLETE (ucState = COMMITTED, replication# = 3 >= minimum = 2) in 
> file 
> 

[jira] [Created] (HDFS-13418) NetworkTopology should be configurable when enable DFSNetworkTopology

2018-04-09 Thread Tao Jie (JIRA)
Tao Jie created HDFS-13418:
--

 Summary:  NetworkTopology should be configurable when enable 
DFSNetworkTopology
 Key: HDFS-13418
 URL: https://issues.apache.org/jira/browse/HDFS-13418
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.1
Reporter: Tao Jie
Assignee: Tao Jie


In HDFS-11530 we introduced DFSNetworkTopology, and in HDFS-11998 we made 
DFSNetworkTopology the default implementation.

We still have {{net.topology.impl=org.apache.hadoop.net.NetworkTopology}} in 
core-default.xml. Actually this property has no effect once 
{{dfs.use.dfs.network.topology}} is true.
In {{DatanodeManager}}, networkTopology is initialized as:
{code}
if (useDfsNetworkTopology) {
  networktopology = DFSNetworkTopology.getInstance(conf);
} else {
  networktopology = NetworkTopology.getInstance(conf);
}
{code}
I think we should still make the NetworkTopology configurable rather than 
hard-code the implementation, since we may need another NetworkTopology impl.
I am not sure if there are other considerations. Any thoughts? [~vagarychen] 
[~linyiqun]




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13329) Add/ Update disk space counters for trash (trash used, disk remaining etc.)

2018-04-09 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431619#comment-16431619
 ] 

Bharat Viswanadham commented on HDFS-13329:
---

Thank you [~hanishakoneru] for the review.

Addressed the review comments.

Uploaded patch v03 to address the review comments.

 # For exclude, I have used --exclude, but as that option is not available on 
Mac, in that case du is used without the exclude option (see the sketch 
below). I think this is okay, as the Mac machines are only used for testing. 
Let me know if you would like to address this with any other approach.

All remaining comments I have addressed as suggested.
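
A hedged sketch of that fallback ({{dataDir}} and {{trashDir}} are placeholder 
paths, not the names used in the patch):
{code:java}
// GNU du supports --exclude; the BSD du shipped with macOS does not,
// so fall back to a plain du there (illustration only).
String[] cmd = Shell.MAC
    ? new String[] {"du", "-sk", dataDir}
    : new String[] {"du", "-sk", "--exclude=" + trashDir, dataDir};
{code}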

 

> Add/ Update disk space counters for trash (trash used, disk remaining etc.) 
> 
>
> Key: HDFS-13329
> URL: https://issues.apache.org/jira/browse/HDFS-13329
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13329-HDFS-12996.01.patch, 
> HDFS-13329-HDFS-12996.02.patch, HDFS-13329-HDFS-12996.03.patch
>
>
> Add 3 more counters required for the datanode replica trash:
>  # diskAvailable
>  # replicaTrashUsed
>  # replicaTrashRemaining
> For more info on these counters, refer to the design document uploaded in HDFS-12996.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13329) Add/ Update disk space counters for trash (trash used, disk remaining etc.)

2018-04-09 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-13329:
--
Attachment: HDFS-13329-HDFS-12996.03.patch

> Add/ Update disk space counters for trash (trash used, disk remaining etc.) 
> 
>
> Key: HDFS-13329
> URL: https://issues.apache.org/jira/browse/HDFS-13329
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13329-HDFS-12996.01.patch, 
> HDFS-13329-HDFS-12996.02.patch, HDFS-13329-HDFS-12996.03.patch
>
>
> Add 3 more counters required for the datanode replica trash:
>  # diskAvailable
>  # replicaTrashUsed
>  # replicaTrashRemaining
> For more info on these counters, refer to the design document uploaded in HDFS-12996.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13386) RBF: Wrong date information in list file(-ls) result

2018-04-09 Thread Dibyendu Karmakar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431608#comment-16431608
 ] 

Dibyendu Karmakar commented on HDFS-13386:
--

Thanks [~elgoiri] for your review comments. I have handled the style-related 
comments.
{quote}I'm not sure is needed to do a listStatus and a getPartialListing, 
shouldn't we be able to check the dates directly?
{quote}
Here, listStatus is done to get the list from the NN. Then we are doing:

{code:java}
DirectoryListing listing =
routerProtocol.getListing("/", HdfsFileStatus.EMPTY_NAME, false);
{code}
to get the listing from the router, and after that we verify the times 
returned by the NN and the router.

{quote}We could also check that the number of entries is the expected.
{quote}
 I will do this.

{quote}Right now it looks a little bit complicated for what we want to check 
which is basically that the time of the new files/folders/mount table entries 
is bigger than the initial time.
{quote}
We want to check that the time of the new files/folders/mount table entries is 
bigger than the initial time, and that the time is the same as the mount table 
entry (for the mount points) and the time returned by the NN (for 
files/folders returned by the NN).

Please suggest whether we are good with these checks:
{code:java}
  assertTrue(currentTime > beforeCreatingTime);
  assertEquals(currentTime, expectedTime);
{code}
or whether {quote}new files/folders/mount table entries is bigger than the 
initial time.{quote} alone is sufficient. A combined sketch is below.
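
Putting the two together, a hedged sketch of the stricter variant (JUnit 4 
asserts; {{status}} and {{expectedTime}} are placeholders for the values in 
the test):
{code:java}
// Sketch only, not the committed test:
long beforeCreatingTime = Time.now();    // captured before creating entries
// ... create files/folders/mount entries, then list through the router ...
long currentTime = status.getModificationTime(); // time the router returned
assertTrue("entry must be newer than the pre-creation time",
    currentTime > beforeCreatingTime);
assertEquals("entry time must match the NN / mount table time",
    expectedTime, currentTime);
{code}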


 

 

> RBF: Wrong date information in list file(-ls) result
> 
>
> Key: HDFS-13386
> URL: https://issues.apache.org/jira/browse/HDFS-13386
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Minor
> Attachments: HDFS-13386-002.patch, HDFS-13386-003.patch, 
> HDFS-13386-004.patch, HDFS-13386.000.patch, HDFS-13386.001.patch, 
> image-2018-04-03-11-59-51-623.png
>
>
> # hdfs dfs -ls 
> !image-2018-04-03-11-59-51-623.png!
> This is happening because {{getMountPointDates}} is not implemented:
> {code:java}
> private Map<String, Long> getMountPointDates(String path) {
>   Map<String, Long> ret = new TreeMap<>();
>   // TODO add when we have a Mount Table
>   return ret;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13402) RBF: Fix java doc for StateStoreFileSystemImpl

2018-04-09 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13402:
-
Fix Version/s: (was: 3.0.3)
   3.0.4

> RBF: Fix java doc for StateStoreFileSystemImpl
> ---
>
> Key: HDFS-13402
> URL: https://issues.apache.org/jira/browse/HDFS-13402
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Affects Versions: 3.0.0
>Reporter: Yiran Wu
>Assignee: Yiran Wu
>Priority: Minor
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: HDFS-13402.001.patch, HDFS-13402.002.patch, 
> HDFS-13402.003.patch
>
>
> {code:java}
> /**
>  *StateStoreDriver}implementation based on a filesystem. The most common uses
>  * HDFS as a backend.
>  */
> {code}
> to
> {code:java}
> /**
>  * {@link StateStoreDriver} implementation based on a filesystem. The common
>  * implementation uses HDFS as a backend. The path can be specified setting
>  * dfs.federation.router.driver.fs.path=hdfs://host:port/path/to/store.
>  */
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13380) RBF: mv/rm fail after the directory exceeded the quota limit

2018-04-09 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431605#comment-16431605
 ] 

Yiqun Lin commented on HDFS-13380:
--

Thanks [~elgoiri] and [~wuweiwei], :).

> RBF: mv/rm fail after the directory exceeded the quota limit
> 
>
> Key: HDFS-13380
> URL: https://issues.apache.org/jira/browse/HDFS-13380
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Weiwei Wu
>Assignee: Yiqun Lin
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.0.4
>
> Attachments: HDFS-13380.001.patch, HDFS-13380.002.patch
>
>
> It always fails when I try to mv/rm a directory which has exceeded the quota 
> limit.
> {code:java}
> [hadp@hadoop]$ hdfs dfsrouteradmin -ls
> Mount Table Entries:
> Source Destinations Owner Group Mode Quota/Usage
> /ns10t ns10->/ns10t hadp hadp rwxr-xr-x [NsQuota: 1200/1201, SsQuota: -/-]
> [hadp@hadoop]$ hdfs dfs -rm hdfs://ns-fed/ns10t/ns1mountpoint/aa.99
> rm: Failed to move to trash: hdfs://ns-fed/ns10t/ns1mountpoint/aa.99: 
> The NameSpace quota (directories and files) is exceeded: quota=1200 file 
> count=1201
> [hadp@hadoop]$ hdfs dfs -rm -skipTrash 
> hdfs://ns-fed/ns10t/ns1mountpoint/aa.99
> rm: The NameSpace quota (directories and files) is exceeded: quota=1200 file 
> count=1201
> {code}
> I think we should add a parameter to the method *getLocationsForPath* to 
> determine whether we need to perform quota verification for the operation, 
> for example for the mv src directory parameter and the rm directory 
> parameter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HDFS-13386) RBF: Wrong date information in list file(-ls) result

2018-04-09 Thread Dibyendu Karmakar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dibyendu Karmakar updated HDFS-13386:
-
Comment: was deleted

(was: Thanks [~elgoiri] for your review comments. I have handled the 
style-related comments.
{quote}I'm not sure is needed to do a listStatus and a getPartialListing, 
shouldn't we be able to check the dates directly?
{quote}
Here, listStatus is done to get the list from the NN, and then we are doing:

 
{code:java}
DirectoryListing listing =
routerProtocol.getListing("/", HdfsFileStatus.EMPTY_NAME, false);
{code}
to get the listing from the router. After that we verify the times returned 
by the NN and the router.

{quote}We could also check that the number of entries is the expected.
{quote}
 

 

 )

> RBF: Wrong date information in list file(-ls) result
> 
>
> Key: HDFS-13386
> URL: https://issues.apache.org/jira/browse/HDFS-13386
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Minor
> Attachments: HDFS-13386-002.patch, HDFS-13386-003.patch, 
> HDFS-13386-004.patch, HDFS-13386.000.patch, HDFS-13386.001.patch, 
> image-2018-04-03-11-59-51-623.png
>
>
> # hdfs dfs -ls 
> !image-2018-04-03-11-59-51-623.png!
> This is happening because {{getMountPointDates}} is not implemented:
> {code:java}
> private Map<String, Long> getMountPointDates(String path) {
>   Map<String, Long> ret = new TreeMap<>();
>   // TODO add when we have a Mount Table
>   return ret;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13386) RBF: Wrong date information in list file(-ls) result

2018-04-09 Thread Dibyendu Karmakar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431598#comment-16431598
 ] 

Dibyendu Karmakar commented on HDFS-13386:
--

Thanks [~elgoiri] for your review comments. I have handled the style-related 
comments.
{quote}I'm not sure is needed to do a listStatus and a getPartialListing, 
shouldn't we be able to check the dates directly?
{quote}
Here, listStatus is done to get the list from the NN, and then we are doing:

 
{code:java}
DirectoryListing listing =
routerProtocol.getListing("/", HdfsFileStatus.EMPTY_NAME, false);
{code}
to get the listing from the router. After that we verify the times returned 
by the NN and the router.

{quote}We could also check that the number of entries is the expected.
{quote}
 

 

 

> RBF: Wrong date information in list file(-ls) result
> 
>
> Key: HDFS-13386
> URL: https://issues.apache.org/jira/browse/HDFS-13386
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Minor
> Attachments: HDFS-13386-002.patch, HDFS-13386-003.patch, 
> HDFS-13386-004.patch, HDFS-13386.000.patch, HDFS-13386.001.patch, 
> image-2018-04-03-11-59-51-623.png
>
>
> # hdfs dfs -ls 
> !image-2018-04-03-11-59-51-623.png!
> This is happening because {{getMountPointDates}} is not implemented:
> {code:java}
> private Map<String, Long> getMountPointDates(String path) {
>   Map<String, Long> ret = new TreeMap<>();
>   // TODO add when we have a Mount Table
>   return ret;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-7101) Potential null dereference in DFSck#doWork()

2018-04-09 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16368441#comment-16368441
 ] 

Ted Yu edited comment on HDFS-7101 at 4/10/18 1:03 AM:
---

More review, please .


was (Author: yuzhih...@gmail.com):
More review, please.

> Potential null dereference in DFSck#doWork()
> 
>
> Key: HDFS-7101
> URL: https://issues.apache.org/jira/browse/HDFS-7101
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.5.1
>Reporter: Ted Yu
>Assignee: skrho
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7101.v1.patch, HDFS-7101_001.patch
>
>
> {code}
> String lastLine = null;
> int errCode = -1;
> try {
>   while ((line = input.readLine()) != null) {
> ...
> if (lastLine.endsWith(NamenodeFsck.HEALTHY_STATUS)) {
>   errCode = 0;
> {code}
> If readLine() throws exception, lastLine may be null, leading to NPE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13363) Record file path when FSDirAclOp throws AclException

2018-04-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431495#comment-16431495
 ] 

genericqa commented on HDFS-13363:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
30s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m 58s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}200m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.TestDFSClientRetries |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13363 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918246/HDFS-13363.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 58225dd20ff9 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |

[jira] [Commented] (HDFS-13363) Record file path when FSDirAclOp throws AclException

2018-04-09 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431470#comment-16431470
 ] 

Xiao Chen commented on HDFS-13363:
--

Thanks Gabor for revving! +1 pending jenkins.

Please fill in the 'Target Versions' field in the future to indicate where you 
want the fix to land. I have filled in 3.2.0 for trunk here; feel free to add 
more versions if you want this in lower branches.

> Record file path when FSDirAclOp throws AclException
> 
>
> Key: HDFS-13363
> URL: https://issues.apache.org/jira/browse/HDFS-13363
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HDFS-13363.001.patch, HDFS-13363.002.patch, 
> HDFS-13363.003.patch
>
>
> When AclTransformation methods throw an AclException, they do not record the 
> file path that caused the exception. Therefore even when an exception is 
> thrown, we would never know which file has the invalid ACLs.
>  
> These AclTransformation methods are invoked from FSDirAclOp methods, which 
> know the file path. The FSDirAclOp methods can catch the AclException and 
> then add the file path to the error message.
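For illustration, a minimal sketch of the catch-and-rethrow pattern the 
description proposes; the class, message, and helper below are hypothetical 
and not taken from the attached patches:

{code:java}
import org.apache.hadoop.hdfs.protocol.AclException;

public class AclPathExample {

  // Hypothetical helper: catch the AclException where the path is known and
  // rethrow it with the path appended, as the description suggests.
  static void failWithPath(String srcPath) throws AclException {
    try {
      // Stand-in for an AclTransformation call rejecting an invalid ACL.
      throw new AclException("Invalid ACL: only directories may have a default ACL.");
    } catch (AclException e) {
      throw new AclException(e.getMessage() + " Path: " + srcPath);
    }
  }

  public static void main(String[] args) {
    try {
      failWithPath("/user/alice/data");
    } catch (AclException e) {
      System.out.println(e.getMessage());
    }
  }
}
{code}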



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13401) Different behavior in mkdir when the target folder of mount table is uncreated

2018-04-09 Thread Jianfei Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430282#comment-16430282
 ] 

Jianfei Jiang edited comment on HDFS-13401 at 4/9/18 11:00 PM:
---

No exception or other hint shows that the HDFS folder mounted to viewfs MUST 
already exist and cannot be created by the mkdir command. Is it necessary to 
make the mounted viewfs folder creatable? Or should an exception be thrown 
when mkdir is run under a viewfs path that does not actually exist?

 


was (Author: jiangjianfei):
No exception or other hints show that the hdfs folder which mounted to viewfs 
MUST exist and we cannot created by mkdir command. Is it necessary to make the 
mount viewfs folder be creatable? Or throw a exception when mkdir command under 
a viewfs path which is actually not exist?

[~brahmareddy] [~sanjay.radia]

> Different behavior in mkdir when the target folder of mount table is uncreated
> --
>
> Key: HDFS-13401
> URL: https://issues.apache.org/jira/browse/HDFS-13401
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation, hdfs
>Affects Versions: 3.0.1
>Reporter: Jianfei Jiang
>Priority: Major
> Attachments: HDFS-13401.000.patch
>
>
> In federation cases, if we have a config like below:
> <property>
>  <name>fs.viewfs.mounttable.ClusterX.link./foo</name>
>  <value>hdfs://nn2-clusterx.example.com:8020/footarget</value>
> </property>
> When the folder hdfs://nn2-clusterx.example.com:8020/projects/footarget does 
> not actually exist, the command {{hdfs dfs ls /}} can still list the folder 
> even though it does not exist. If we then try to create the viewfs folder 
> with the mkdir command, {{hdfs dfs mkdir /foo}}, the return code is true but 
> the folder is not created. However, when we use {{hdfs dfs mkdir -p 
> /foo/xxx}}, which creates a folder at a deeper location, the folder is 
> successfully created. The results in these two cases are ambiguous.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13363) Record file path when FSDirAclOp throws AclException

2018-04-09 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13363:
-
Target Version/s: 3.2.0

> Record file path when FSDirAclOp throws AclException
> 
>
> Key: HDFS-13363
> URL: https://issues.apache.org/jira/browse/HDFS-13363
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HDFS-13363.001.patch, HDFS-13363.002.patch, 
> HDFS-13363.003.patch
>
>
> When AclTransformation methods throw an AclException, they do not record the 
> file path that caused the exception. Therefore even when an exception is 
> thrown, we would never know which file has the invalid ACLs.
>  
> These AclTransformation methods are invoked from FSDirAclOp methods, which 
> know the file path. The FSDirAclOp methods can catch the AclException and 
> then add the file path to the error message.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13416) TestNodeManager tests fail

2018-04-09 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-13416:
--
Attachment: HDFS-13416-HDFS-7240.00.patch

> TestNodeManager tests fail
> --
>
> Key: HDFS-13416
> URL: https://issues.apache.org/jira/browse/HDFS-13416
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13416-HDFS-7240.00.patch
>
>
> java.lang.IllegalArgumentException: Invalid UUID string: h0
> at java.util.UUID.fromString(UUID.java:194)
> at 
> org.apache.hadoop.hdds.protocol.DatanodeDetails.<init>(DatanodeDetails.java:68)
> at 
> org.apache.hadoop.hdds.protocol.DatanodeDetails.<init>(DatanodeDetails.java:36)
> at 
> org.apache.hadoop.hdds.protocol.DatanodeDetails$Builder.build(DatanodeDetails.java:416)
> at org.apache.hadoop.hdds.scm.TestUtils.getDatanodeDetails(TestUtils.java:95)
> at org.apache.hadoop.hdds.scm.TestUtils.getDatanodeDetails(TestUtils.java:48)
> at 
> org.apache.hadoop.hdds.scm.node.TestNodeManager.createNodeSet(TestNodeManager.java:719)
> at 
> org.apache.hadoop.hdds.scm.node.TestNodeManager.testScmLogsHeartbeatFlooding(TestNodeManager.java:913)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
> at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
> at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
> at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
> at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
>  
> This started happening after HDFS-13300.
> cc [~nandakumar131]
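For context, the failure in the trace is easy to reproduce in isolation. A 
standalone illustration (not the patch) of why a hostname-like id such as 
"h0" cannot be used where a UUID is expected:

{code:java}
import java.util.UUID;

public class UuidCheck {
  public static void main(String[] args) {
    // A valid id has the 8-4-4-4-12 hex layout that UUID.fromString() expects.
    System.out.println(UUID.randomUUID());
    try {
      UUID.fromString("h0");  // hostname-like ids are rejected
    } catch (IllegalArgumentException e) {
      System.out.println(e);  // java.lang.IllegalArgumentException: Invalid UUID string: h0
    }
  }
}
{code}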



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13416) TestNodeManager tests fail

2018-04-09 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-13416:
--
Status: Patch Available  (was: In Progress)

> TestNodeManager tests fail
> --
>
> Key: HDFS-13416
> URL: https://issues.apache.org/jira/browse/HDFS-13416
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13416-HDFS-7240.00.patch
>
>
> java.lang.IllegalArgumentException: Invalid UUID string: h0
> at java.util.UUID.fromString(UUID.java:194)
> at 
> org.apache.hadoop.hdds.protocol.DatanodeDetails.<init>(DatanodeDetails.java:68)
> at 
> org.apache.hadoop.hdds.protocol.DatanodeDetails.<init>(DatanodeDetails.java:36)
> at 
> org.apache.hadoop.hdds.protocol.DatanodeDetails$Builder.build(DatanodeDetails.java:416)
> at org.apache.hadoop.hdds.scm.TestUtils.getDatanodeDetails(TestUtils.java:95)
> at org.apache.hadoop.hdds.scm.TestUtils.getDatanodeDetails(TestUtils.java:48)
> at 
> org.apache.hadoop.hdds.scm.node.TestNodeManager.createNodeSet(TestNodeManager.java:719)
> at 
> org.apache.hadoop.hdds.scm.node.TestNodeManager.testScmLogsHeartbeatFlooding(TestNodeManager.java:913)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
> at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
> at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
> at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
> at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
>  
> This started happening after HDFS-13300.
> cc [~nandakumar131]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13416) TestNodeManager tests fail

2018-04-09 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431462#comment-16431462
 ] 

Bharat Viswanadham commented on HDFS-13416:
---

[~nandakumar131] Attached patch v00 to fix test issues.

Could you help review this patch?

> TestNodeManager tests fail
> --
>
> Key: HDFS-13416
> URL: https://issues.apache.org/jira/browse/HDFS-13416
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13416-HDFS-7240.00.patch
>
>
> java.lang.IllegalArgumentException: Invalid UUID string: h0
> at java.util.UUID.fromString(UUID.java:194)
> at 
> org.apache.hadoop.hdds.protocol.DatanodeDetails.<init>(DatanodeDetails.java:68)
> at 
> org.apache.hadoop.hdds.protocol.DatanodeDetails.<init>(DatanodeDetails.java:36)
> at 
> org.apache.hadoop.hdds.protocol.DatanodeDetails$Builder.build(DatanodeDetails.java:416)
> at org.apache.hadoop.hdds.scm.TestUtils.getDatanodeDetails(TestUtils.java:95)
> at org.apache.hadoop.hdds.scm.TestUtils.getDatanodeDetails(TestUtils.java:48)
> at 
> org.apache.hadoop.hdds.scm.node.TestNodeManager.createNodeSet(TestNodeManager.java:719)
> at 
> org.apache.hadoop.hdds.scm.node.TestNodeManager.testScmLogsHeartbeatFlooding(TestNodeManager.java:913)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
> at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
> at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
> at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
> at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
>  
> This started happening after HDFS-13300.
> cc [~nandakumar131]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13045) RBF: Improve error message returned from subcluster

2018-04-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431453#comment-16431453
 ] 

genericqa commented on HDFS-13045:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 45s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
25s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13045 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918262/HDFS-13045.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 719a2a014bc1 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9059376 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23848/testReport/ |
| Max. process+thread count | 1035 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23848/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RBF: Improve error message returned from subcluster
> ---
>
> Key: HDFS-13045
> URL: 

[jira] [Work started] (HDFS-13416) TestNodeManager tests fail

2018-04-09 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-13416 started by Bharat Viswanadham.
-
> TestNodeManager tests fail
> --
>
> Key: HDFS-13416
> URL: https://issues.apache.org/jira/browse/HDFS-13416
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> java.lang.IllegalArgumentException: Invalid UUID string: h0
> at java.util.UUID.fromString(UUID.java:194)
> at 
> org.apache.hadoop.hdds.protocol.DatanodeDetails.<init>(DatanodeDetails.java:68)
> at 
> org.apache.hadoop.hdds.protocol.DatanodeDetails.<init>(DatanodeDetails.java:36)
> at 
> org.apache.hadoop.hdds.protocol.DatanodeDetails$Builder.build(DatanodeDetails.java:416)
> at org.apache.hadoop.hdds.scm.TestUtils.getDatanodeDetails(TestUtils.java:95)
> at org.apache.hadoop.hdds.scm.TestUtils.getDatanodeDetails(TestUtils.java:48)
> at 
> org.apache.hadoop.hdds.scm.node.TestNodeManager.createNodeSet(TestNodeManager.java:719)
> at 
> org.apache.hadoop.hdds.scm.node.TestNodeManager.testScmLogsHeartbeatFlooding(TestNodeManager.java:913)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
> at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
> at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
> at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
> at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
>  
> This started happening after HDFS-13300.
> cc [~nandakumar131]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13417) hdfsGetHedgedReadMetrics() crashes when 'fs' is a non-HDFS filesystem

2018-04-09 Thread Sailesh Mukil (JIRA)
Sailesh Mukil created HDFS-13417:


 Summary: hdfsGetHedgedReadMetrics() crashes when 'fs' is a 
non-HDFS filesystem
 Key: HDFS-13417
 URL: https://issues.apache.org/jira/browse/HDFS-13417
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs
Affects Versions: 3.0.0-alpha4
Reporter: Sailesh Mukil


{code:java}
(gdb) bt
#0  0x003346c32625 in raise () from /lib64/libc.so.6
#1  0x003346c33e05 in abort () from /lib64/libc.so.6
#2  0x7f185be140b5 in os::abort(bool) ()
   from /usr/java/jdk1.8.0_121-cloudera/jre/lib/amd64/server/libjvm.so
#3  0x7f185bfb6443 in VMError::report_and_die() ()
   from /usr/java/jdk1.8.0_121-cloudera/jre/lib/amd64/server/libjvm.so
#4  0x7f185be195bf in JVM_handle_linux_signal ()
   from /usr/java/jdk1.8.0_121-cloudera/jre/lib/amd64/server/libjvm.so
#5  0x7f185be0fb03 in signalHandler(int, siginfo*, void*) ()
   from /usr/java/jdk1.8.0_121-cloudera/jre/lib/amd64/server/libjvm.so
#6  <signal handler called>
#7  0x7f185bbc1a7b in jni_invoke_nonstatic(JNIEnv_*, JavaValue*, _jobject*, 
JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) ()
   from /usr/java/jdk1.8.0_121-cloudera/jre/lib/amd64/server/libjvm.so
#8  0x7f185bbc7e81 in jni_CallObjectMethodV ()
   from /usr/java/jdk1.8.0_121-cloudera/jre/lib/amd64/server/libjvm.so
#9  0x0212e2b7 in invokeMethod ()
#10 0x02131297 in hdfsGetHedgedReadMetrics ()
...
...
{code}

hdfsGetHedgedReadMetrics() is not supported for non-HDFS filesystems, so we 
need to fix this to make sure that it doesn't crash.
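One plausible shape for the guard, sketched on the Java side for clarity (the 
real fix would live in the libhdfs C shim; the client calls below are used 
only for illustration):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class HedgedMetricsGuard {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Only HDFS has hedged read metrics; anything else must fail gracefully.
    if (fs instanceof DistributedFileSystem) {
      DistributedFileSystem dfs = (DistributedFileSystem) fs;
      System.out.println("hedged read ops: "
          + dfs.getClient().getHedgedReadMetrics().getHedgedReadOps());
    } else {
      System.err.println("hedged read metrics are only defined for HDFS, got "
          + fs.getClass().getName());
    }
  }
}
{code}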



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13403) libhdfs++: Use hdfs::IoService object rather than asio::io_service

2018-04-09 Thread Mitchell Tracy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431366#comment-16431366
 ] 

Mitchell Tracy commented on HDFS-13403:
---

+1 on HDFS-13403.000.patch

> libhdfs++: Use hdfs::IoService object rather than asio::io_service
> --
>
> Key: HDFS-13403
> URL: https://issues.apache.org/jira/browse/HDFS-13403
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Critical
> Attachments: HDFS-13403.000.patch
>
>
> At the moment the hdfs::IoService is a simple wrapper over asio's io_service 
> object.  I'd like to make this smarter and have it do things like track which 
> tasks are queued, validate that dependencies of tasks exist, and monitor 
> ioservice throughput and contention.  In order to get there we need to have 
> all components in the library go through the hdfs::IoService rather than 
> interact directly with the asio::io_service.  The only time the 
> asio::io_service should be used is when calling things like asio::async_write 
> that need an io_service&.  HDFS-11884 will be able to get rid of those 
> remaining instances once this work is in place.
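As a rough Java analogy of the wrapper idea (the real code is C++; every name 
below is made up), components post work through the smarter service, which can 
then count queued tasks instead of exposing the raw executor:

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;

class InstrumentedIoService {
  private final ExecutorService delegate = Executors.newFixedThreadPool(4);
  private final AtomicLong queued = new AtomicLong();

  // All components go through post(); none touch the delegate directly.
  void post(Runnable task) {
    queued.incrementAndGet();
    delegate.execute(() -> {
      try {
        task.run();
      } finally {
        queued.decrementAndGet();
      }
    });
  }

  long queuedTasks() { return queued.get(); }

  void shutdown() { delegate.shutdown(); }
}
{code}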



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13416) TestNodeManager tests fail

2018-04-09 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDFS-13416:
-

Assignee: Bharat Viswanadham

> TestNodeManager tests fail
> --
>
> Key: HDFS-13416
> URL: https://issues.apache.org/jira/browse/HDFS-13416
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> java.lang.IllegalArgumentException: Invalid UUID string: h0
> at java.util.UUID.fromString(UUID.java:194)
> at 
> org.apache.hadoop.hdds.protocol.DatanodeDetails.<init>(DatanodeDetails.java:68)
> at 
> org.apache.hadoop.hdds.protocol.DatanodeDetails.<init>(DatanodeDetails.java:36)
> at 
> org.apache.hadoop.hdds.protocol.DatanodeDetails$Builder.build(DatanodeDetails.java:416)
> at org.apache.hadoop.hdds.scm.TestUtils.getDatanodeDetails(TestUtils.java:95)
> at org.apache.hadoop.hdds.scm.TestUtils.getDatanodeDetails(TestUtils.java:48)
> at 
> org.apache.hadoop.hdds.scm.node.TestNodeManager.createNodeSet(TestNodeManager.java:719)
> at 
> org.apache.hadoop.hdds.scm.node.TestNodeManager.testScmLogsHeartbeatFlooding(TestNodeManager.java:913)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
> at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
> at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
> at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
> at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
>  
> This started happening after HDFS-13300.
> cc [~nandakumar131]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13399) Make Client field AlignmentContext non-static.

2018-04-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431346#comment-16431346
 ] 

genericqa commented on HDFS-13399:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  6m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
42s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
47s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
20s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
31s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
21s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
30s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 30m 
26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 51s{color} | {color:orange} root: The patch generated 5 new + 456 unchanged 
- 0 fixed = 461 total (was 456) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
34s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 49s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 17s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
47s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 86m 30s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}248m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ipc.TestAsyncIPC |
|   | hadoop.ipc.TestIPC |
|   | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.namenode.ha.TestHAMetrics |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13399 |
| JIRA Patch URL | 

[jira] [Commented] (HDFS-13045) RBF: Improve error message returned from subcluster

2018-04-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431311#comment-16431311
 ] 

Íñigo Goiri commented on HDFS-13045:


Thanks [~ywskycn], I added the stack trace too.
Regarding the corner case, I added a unit test covering it.
I'm not particularly happy about adding it in the middle of that unit test, 
but I'm not sure of a better place.
Let me know if you have comments on  [^HDFS-13045.003.patch].

> RBF: Improve error message returned from subcluster
> ---
>
> Key: HDFS-13045
> URL: https://issues.apache.org/jira/browse/HDFS-13045
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
> Attachments: HDFS-13045.000.patch, HDFS-13045.001.patch, 
> HDFS-13045.002.patch, HDFS-13045.003.patch
>
>
> Currently, the Router returns the exception response from the subcluster 
> directly to the client, which may not carry the correct error message, 
> especially when the message contains a path.
> For example, we have a mount path "/a/b" mapped to subclusterA's "/c/d". If 
> user1 does a chown operation on "/a/b" without the corresponding privilege, 
> the error message currently looks like "Permission denied. user=user1 
> is not the owner of inode=/c/d", which may confuse the user. It would be 
> better to map the path back to the original mount path.
>  
>  
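A hedged sketch of the remapping idea (hypothetical helper, not the Router 
code): rewrite the subcluster path embedded in the error message back to the 
client-visible mount path.

{code:java}
public final class ErrorMessageRemapper {
  private ErrorMessageRemapper() {}

  // Naive textual remap; the real Router would need the mount table entry
  // that resolved the call to know which remote path to substitute.
  static String remap(String message, String remotePath, String mountPath) {
    return message == null ? null : message.replace(remotePath, mountPath);
  }

  public static void main(String[] args) {
    String msg = "Permission denied. user=user1 is not the owner of inode=/c/d";
    // Prints the message with /c/d mapped back to the mount path /a/b.
    System.out.println(remap(msg, "/c/d", "/a/b"));
  }
}
{code}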



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13045) RBF: Improve error message returned from subcluster

2018-04-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13045:
---
Attachment: HDFS-13045.003.patch

> RBF: Improve error message returned from subcluster
> ---
>
> Key: HDFS-13045
> URL: https://issues.apache.org/jira/browse/HDFS-13045
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
> Attachments: HDFS-13045.000.patch, HDFS-13045.001.patch, 
> HDFS-13045.002.patch, HDFS-13045.003.patch
>
>
> Currently, the Router returns the exception response from the subcluster 
> directly to the client, which may not carry the correct error message, 
> especially when the message contains a path.
> For example, we have a mount path "/a/b" mapped to subclusterA's "/c/d". If 
> user1 does a chown operation on "/a/b" without the corresponding privilege, 
> the error message currently looks like "Permission denied. user=user1 
> is not the owner of inode=/c/d", which may confuse the user. It would be 
> better to map the path back to the original mount path.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13399) Make Client field AlignmentContext non-static.

2018-04-09 Thread Plamen Jeliazkov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431295#comment-16431295
 ] 

Plamen Jeliazkov commented on HDFS-13399:
-

I also have an out-of-scope question for you, [~xkrogen] and [~shv]:

Should we consider adding client- and server-side configuration for enabling / 
disabling {{AlignmentContext}} processing?

> Make Client field AlignmentContext non-static.
> --
>
> Key: HDFS-13399
> URL: https://issues.apache.org/jira/browse/HDFS-13399
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-12943
>Reporter: Plamen Jeliazkov
>Assignee: Plamen Jeliazkov
>Priority: Major
> Attachments: HDFS-13399-HDFS-12943.000.patch, 
> HDFS-13399-HDFS-12943.001.patch
>
>
> In HDFS-12977, DFSClient's constructor was altered to make use of a new 
> static method in Client that allowed one to set an AlignmentContext. This 
> work is to remove that static field and make each DFSClient pass its 
> AlignmentContext down to the proxy Call level.
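The shape of the change, as a hedged before/after sketch (names are 
illustrative, not the actual Client fields):

{code:java}
class AlignmentContext {}

// Before (HDFS-12977): one static context shared by every client in the JVM.
class ClientBefore {
  static AlignmentContext alignmentContext;
}

// After (this work): each client instance carries its own context and can
// hand it down to the proxy Call level.
class ClientAfter {
  private final AlignmentContext alignmentContext;

  ClientAfter(AlignmentContext alignmentContext) {
    this.alignmentContext = alignmentContext;
  }

  AlignmentContext getAlignmentContext() {
    return alignmentContext;
  }
}
{code}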



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13354) Add config for min percentage of data nodes to come out of chill mode in SCM

2018-04-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431282#comment-16431282
 ] 

genericqa commented on HDFS-13354:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
52s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
 6s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
18s{color} | {color:red} server-scm in HDFS-7240 failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
55s{color} | {color:red} hadoop-hdds/common in HDFS-7240 has 2 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
16s{color} | {color:red} server-scm in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
18s{color} | {color:red} server-scm in HDFS-7240 failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
13s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
11s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
13s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
13s{color} | {color:red} server-scm in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
16s{color} | {color:green} common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 13s{color} 
| {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
25s{color} | {color:red} The patch generated 69 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:b78c94f |
| JIRA Issue | HDFS-13354 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918239/HDFS-13354-HDFS-7240.01.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux fe6749e705cc 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |

[jira] [Commented] (HDFS-12861) Track speed in DFSClient

2018-04-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431189#comment-16431189
 ] 

Íñigo Goiri commented on HDFS-12861:


[~mf_borge], it looks like the attachment combines two patches, adding and 
removing some components.
Can you post a diff against trunk?

> Track speed in DFSClient
> 
>
> Key: HDFS-12861
> URL: https://issues.apache.org/jira/browse/HDFS-12861
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: María Fernanda Borge
>Priority: Major
> Attachments: HDFS-12861-9-april-18.patch
>
>
> Sometimes we get slow jobs because of access to HDFS; however, it is hard to 
> tell what the actual speed is. We propose to add a log line with something 
> like:
> {code}
> 2017-11-19 09:55:26,309 INFO [main] hdfs.DFSClient: blk_1107222019_38144502 
> READ 129500B in 7ms 17.6MB/s
> 2017-11-27 19:01:04,141 INFO [DataStreamer for file 
> /hdfs-federation/stats/2017/11/27/151183800.json] hdfs.DFSClient: 
> blk_1135792057_86833357 WRITE 131072B in 10ms 12.5MB/s
> 2017-11-27 19:01:14,219 INFO [DataStreamer for file 
> /hdfs-federation/stats/2017/11/27/151183800.json] hdfs.DFSClient: 
> blk_1135792069_86833369 WRITE 131072B in 12ms 10.4MB/s
> 2017-11-27 19:01:24,282 INFO [DataStreamer for file 
> /hdfs-federation/stats/2017/11/27/151183800.json] hdfs.DFSClient: 
> blk_1135792081_86833381 WRITE 131072B in 11ms 11.4MB/s
> 2017-11-27 19:01:34,330 INFO [DataStreamer for file 
> /hdfs-federation/stats/2017/11/27/151183800.json] hdfs.DFSClient: 
> blk_1135792093_86833393 WRITE 131072B in 11ms 11.4MB/s
> 2017-11-27 19:01:44,408 INFO [DataStreamer for file 
> /hdfs-federation/stats/2017/11/27/151183800.json] hdfs.DFSClient: 
> blk_1135792105_86833405 WRITE 131072B in 11ms 11.4MB/s
> {code}
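For reference, the arithmetic behind those figures, as a self-contained sketch 
(not taken from the attached patch):

{code:java}
public class ThroughputExample {
  public static void main(String[] args) {
    long bytes = 129500L;  // first READ line above
    long millis = 7L;
    double mbPerSec = (bytes / 1024.0 / 1024.0) / (millis / 1000.0);
    // Prints "READ 129500B in 7ms 17.6MB/s", matching the proposed log line.
    System.out.printf("READ %dB in %dms %.1fMB/s%n", bytes, millis, mbPerSec);
  }
}
{code}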



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12861) Track speed in DFSClient

2018-04-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431184#comment-16431184
 ] 

genericqa commented on HDFS-12861:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDFS-12861 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-12861 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918249/HDFS-12861-9-april-18.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23847/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Track speed in DFSClient
> 
>
> Key: HDFS-12861
> URL: https://issues.apache.org/jira/browse/HDFS-12861
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: María Fernanda Borge
>Priority: Major
> Attachments: HDFS-12861-9-april-18.patch
>
>
> Sometimes we get slow jobs because of access to HDFS; however, it is hard to 
> tell what the actual speed is. We propose to add a log line with something 
> like:
> {code}
> 2017-11-19 09:55:26,309 INFO [main] hdfs.DFSClient: blk_1107222019_38144502 
> READ 129500B in 7ms 17.6MB/s
> 2017-11-27 19:01:04,141 INFO [DataStreamer for file 
> /hdfs-federation/stats/2017/11/27/151183800.json] hdfs.DFSClient: 
> blk_1135792057_86833357 WRITE 131072B in 10ms 12.5MB/s
> 2017-11-27 19:01:14,219 INFO [DataStreamer for file 
> /hdfs-federation/stats/2017/11/27/151183800.json] hdfs.DFSClient: 
> blk_1135792069_86833369 WRITE 131072B in 12ms 10.4MB/s
> 2017-11-27 19:01:24,282 INFO [DataStreamer for file 
> /hdfs-federation/stats/2017/11/27/151183800.json] hdfs.DFSClient: 
> blk_1135792081_86833381 WRITE 131072B in 11ms 11.4MB/s
> 2017-11-27 19:01:34,330 INFO [DataStreamer for file 
> /hdfs-federation/stats/2017/11/27/151183800.json] hdfs.DFSClient: 
> blk_1135792093_86833393 WRITE 131072B in 11ms 11.4MB/s
> 2017-11-27 19:01:44,408 INFO [DataStreamer for file 
> /hdfs-federation/stats/2017/11/27/151183800.json] hdfs.DFSClient: 
> blk_1135792105_86833405 WRITE 131072B in 11ms 11.4MB/s
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12861) Track speed in DFSClient

2018-04-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

María Fernanda Borge updated HDFS-12861:

Attachment: HDFS-12861-9-april-18.patch

> Track speed in DFSClient
> 
>
> Key: HDFS-12861
> URL: https://issues.apache.org/jira/browse/HDFS-12861
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: María Fernanda Borge
>Priority: Major
> Attachments: HDFS-12861-9-april-18.patch
>
>
> Sometimes we get slow jobs because of access to HDFS; however, it is hard to 
> tell what the actual speed is. We propose to add a log line with something 
> like:
> {code}
> 2017-11-19 09:55:26,309 INFO [main] hdfs.DFSClient: blk_1107222019_38144502 
> READ 129500B in 7ms 17.6MB/s
> 2017-11-27 19:01:04,141 INFO [DataStreamer for file 
> /hdfs-federation/stats/2017/11/27/151183800.json] hdfs.DFSClient: 
> blk_1135792057_86833357 WRITE 131072B in 10ms 12.5MB/s
> 2017-11-27 19:01:14,219 INFO [DataStreamer for file 
> /hdfs-federation/stats/2017/11/27/151183800.json] hdfs.DFSClient: 
> blk_1135792069_86833369 WRITE 131072B in 12ms 10.4MB/s
> 2017-11-27 19:01:24,282 INFO [DataStreamer for file 
> /hdfs-federation/stats/2017/11/27/151183800.json] hdfs.DFSClient: 
> blk_1135792081_86833381 WRITE 131072B in 11ms 11.4MB/s
> 2017-11-27 19:01:34,330 INFO [DataStreamer for file 
> /hdfs-federation/stats/2017/11/27/151183800.json] hdfs.DFSClient: 
> blk_1135792093_86833393 WRITE 131072B in 11ms 11.4MB/s
> 2017-11-27 19:01:44,408 INFO [DataStreamer for file 
> /hdfs-federation/stats/2017/11/27/151183800.json] hdfs.DFSClient: 
> blk_1135792105_86833405 WRITE 131072B in 11ms 11.4MB/s
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13363) Record file path when FSDirAclOp throws AclException

2018-04-09 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HDFS-13363:
--
Status: Patch Available  (was: Open)

> Record file path when FSDirAclOp throws AclException
> 
>
> Key: HDFS-13363
> URL: https://issues.apache.org/jira/browse/HDFS-13363
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HDFS-13363.001.patch, HDFS-13363.002.patch, 
> HDFS-13363.003.patch
>
>
> When AclTransformation methods throw an AclException, they do not record the 
> file path that caused the exception. Therefore even when an exception is 
> thrown, we would never know which file has the invalid ACLs.
>  
> These AclTransformation methods are invoked from FSDirAclOp methods, which 
> know the file path. The FSDirAclOp methods can catch the AclException and 
> then add the file path to the error message.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13363) Record file path when FSDirAclOp throws AclException

2018-04-09 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431157#comment-16431157
 ] 

Gabor Bota commented on HDFS-13363:
---

Thank you for the valuable input, [~xiaochen]! Based on the information you 
provided, I submitted a new patch and also learned more about the system.

> Record file path when FSDirAclOp throws AclException
> 
>
> Key: HDFS-13363
> URL: https://issues.apache.org/jira/browse/HDFS-13363
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HDFS-13363.001.patch, HDFS-13363.002.patch, 
> HDFS-13363.003.patch
>
>
> When AclTransformation methods throw an AclException, they do not record the 
> file path that caused the exception. Therefore even when an exception is 
> thrown, we would never know which file has the invalid ACLs.
>  
> These AclTransformation methods are invoked from FSDirAclOp methods, which 
> know the file path. The FSDirAclOp methods can catch the AclException and 
> then add the file path to the error message.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13363) Record file path when FSDirAclOp throws AclException

2018-04-09 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HDFS-13363:
--
Status: Open  (was: Patch Available)

> Record file path when FSDirAclOp throws AclException
> 
>
> Key: HDFS-13363
> URL: https://issues.apache.org/jira/browse/HDFS-13363
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HDFS-13363.001.patch, HDFS-13363.002.patch, 
> HDFS-13363.003.patch
>
>
> When AclTransformation methods throw an AclException, they do not record the 
> file path that triggered the exception. Therefore, even when an exception is 
> thrown, we never know which file has the invalid ACLs.
>  
> These AclTransformation methods are invoked from FSDirAclOp methods, which 
> know the file path. The FSDirAclOp methods can catch the AclException and 
> add the file path to the error message.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13363) Record file path when FSDirAclOp throws AclException

2018-04-09 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HDFS-13363:
--
Attachment: HDFS-13363.003.patch

> Record file path when FSDirAclOp throws AclException
> 
>
> Key: HDFS-13363
> URL: https://issues.apache.org/jira/browse/HDFS-13363
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HDFS-13363.001.patch, HDFS-13363.002.patch, 
> HDFS-13363.003.patch
>
>
> When AclTransformation methods throw an AclException, they do not record the 
> file path that triggered the exception. Therefore, even when an exception is 
> thrown, we never know which file has the invalid ACLs.
>  
> These AclTransformation methods are invoked from FSDirAclOp methods, which 
> know the file path. The FSDirAclOp methods can catch the AclException and 
> add the file path to the error message.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13415) Ozone: Remove cblock code from HDFS-7240 (move to a different branch)

2018-04-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431155#comment-16431155
 ] 

genericqa commented on HDFS-13415:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
 2s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
53s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
16s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 20m 
15s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-dist . hadoop-cblock {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
23s{color} | {color:red} server in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
27s{color} | {color:red} tools in HDFS-7240 failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
31s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 27m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 20m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 1s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
12s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-dist . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 16s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
43s{color} | {color:red} The patch generated 7 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | 

[jira] [Updated] (HDFS-13416) TestNodeManager tests fail

2018-04-09 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-13416:
--
Description: 
java.lang.IllegalArgumentException: Invalid UUID string: h0

at java.util.UUID.fromString(UUID.java:194)
at 
org.apache.hadoop.hdds.protocol.DatanodeDetails.&lt;init&gt;(DatanodeDetails.java:68)
at 
org.apache.hadoop.hdds.protocol.DatanodeDetails.&lt;init&gt;(DatanodeDetails.java:36)
at 
org.apache.hadoop.hdds.protocol.DatanodeDetails$Builder.build(DatanodeDetails.java:416)
at org.apache.hadoop.hdds.scm.TestUtils.getDatanodeDetails(TestUtils.java:95)
at org.apache.hadoop.hdds.scm.TestUtils.getDatanodeDetails(TestUtils.java:48)
at 
org.apache.hadoop.hdds.scm.node.TestNodeManager.createNodeSet(TestNodeManager.java:719)
at 
org.apache.hadoop.hdds.scm.node.TestNodeManager.testScmLogsHeartbeatFlooding(TestNodeManager.java:913)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
at 
com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
at 
com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)

 

This is happening after this change HDFS-13300 

cc [~nandakumar131]

  was:
java.lang.IllegalArgumentException: Invalid UUID string: h0

at java.util.UUID.fromString(UUID.java:194)
at 
org.apache.hadoop.hdds.protocol.DatanodeDetails.&lt;init&gt;(DatanodeDetails.java:68)
at 
org.apache.hadoop.hdds.protocol.DatanodeDetails.&lt;init&gt;(DatanodeDetails.java:36)
at 
org.apache.hadoop.hdds.protocol.DatanodeDetails$Builder.build(DatanodeDetails.java:416)
at org.apache.hadoop.hdds.scm.TestUtils.getDatanodeDetails(TestUtils.java:95)
at org.apache.hadoop.hdds.scm.TestUtils.getDatanodeDetails(TestUtils.java:48)
at 
org.apache.hadoop.hdds.scm.node.TestNodeManager.createNodeSet(TestNodeManager.java:719)
at 
org.apache.hadoop.hdds.scm.node.TestNodeManager.testScmLogsHeartbeatFlooding(TestNodeManager.java:913)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 

[jira] [Updated] (HDFS-13416) TestNodeManager tests fail

2018-04-09 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-13416:
--
Description: 
java.lang.IllegalArgumentException: Invalid UUID string: h0

at java.util.UUID.fromString(UUID.java:194)
at 
org.apache.hadoop.hdds.protocol.DatanodeDetails.&lt;init&gt;(DatanodeDetails.java:68)
at 
org.apache.hadoop.hdds.protocol.DatanodeDetails.&lt;init&gt;(DatanodeDetails.java:36)
at 
org.apache.hadoop.hdds.protocol.DatanodeDetails$Builder.build(DatanodeDetails.java:416)
at org.apache.hadoop.hdds.scm.TestUtils.getDatanodeDetails(TestUtils.java:95)
at org.apache.hadoop.hdds.scm.TestUtils.getDatanodeDetails(TestUtils.java:48)
at 
org.apache.hadoop.hdds.scm.node.TestNodeManager.createNodeSet(TestNodeManager.java:719)
at 
org.apache.hadoop.hdds.scm.node.TestNodeManager.testScmLogsHeartbeatFlooding(TestNodeManager.java:913)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
at 
com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
at 
com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)

 

This is happening after this change HDFS-13300

 

cc [~nandakumar131]

  was:
java.lang.IllegalArgumentException: Invalid UUID string: h0

at java.util.UUID.fromString(UUID.java:194)
 at 
org.apache.hadoop.hdds.protocol.DatanodeDetails.&lt;init&gt;(DatanodeDetails.java:68)
 at 
org.apache.hadoop.hdds.protocol.DatanodeDetails.&lt;init&gt;(DatanodeDetails.java:36)
 at 
org.apache.hadoop.hdds.protocol.DatanodeDetails$Builder.build(DatanodeDetails.java:416)
 at org.apache.hadoop.hdds.scm.TestUtils.getDatanodeDetails(TestUtils.java:95)
 at org.apache.hadoop.hdds.scm.TestUtils.getDatanodeDetails(TestUtils.java:48)
 at 
org.apache.hadoop.hdds.scm.node.TestNodeManager.createNodeSet(TestNodeManager.java:719)
 at 
org.apache.hadoop.hdds.scm.node.TestNodeManager.testScmLogsHeartbeatFlooding(TestNodeManager.java:913)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
 at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
 at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
 at org.junit.rules.RunRules.evaluate(RunRules.java:20)
 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
 at 

[jira] [Created] (HDFS-13416) TestNodeManager tests fail

2018-04-09 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDFS-13416:
-

 Summary: TestNodeManager tests fail
 Key: HDFS-13416
 URL: https://issues.apache.org/jira/browse/HDFS-13416
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Bharat Viswanadham


java.lang.IllegalArgumentException: Invalid UUID string: h0

at java.util.UUID.fromString(UUID.java:194)
 at 
org.apache.hadoop.hdds.protocol.DatanodeDetails.&lt;init&gt;(DatanodeDetails.java:68)
 at 
org.apache.hadoop.hdds.protocol.DatanodeDetails.&lt;init&gt;(DatanodeDetails.java:36)
 at 
org.apache.hadoop.hdds.protocol.DatanodeDetails$Builder.build(DatanodeDetails.java:416)
 at org.apache.hadoop.hdds.scm.TestUtils.getDatanodeDetails(TestUtils.java:95)
 at org.apache.hadoop.hdds.scm.TestUtils.getDatanodeDetails(TestUtils.java:48)
 at 
org.apache.hadoop.hdds.scm.node.TestNodeManager.createNodeSet(TestNodeManager.java:719)
 at 
org.apache.hadoop.hdds.scm.node.TestNodeManager.testScmLogsHeartbeatFlooding(TestNodeManager.java:913)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
 at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
 at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
 at org.junit.rules.RunRules.evaluate(RunRules.java:20)
 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
 at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
 at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
 at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
 at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
 at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
 at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
 at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
 at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
 at 
com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
 at 
com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
 at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)

 

This is happening after this change HDFS-13300
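
For reference, the root failure is easy to reproduce in isolation: 
{{java.util.UUID.fromString}} accepts only the canonical 8-4-4-4-12 hex form, 
so a hostname-style id such as "h0" is rejected. A self-contained sketch:

{code:java}
import java.util.UUID;

public class UuidParseSketch {
  public static void main(String[] args) {
    // The canonical form parses fine.
    UUID ok = UUID.fromString("123e4567-e89b-12d3-a456-426614174000");
    System.out.println("parsed: " + ok);

    // A bare hostname-style id like "h0" is not a UUID and throws
    // IllegalArgumentException: Invalid UUID string: h0
    try {
      UUID.fromString("h0");
    } catch (IllegalArgumentException e) {
      System.out.println("rejected: " + e.getMessage());
    }
  }
}
{code}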



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13416) TestNodeManager tests fail

2018-04-09 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-13416:
--
Issue Type: Sub-task  (was: Bug)
Parent: HDFS-7240

> TestNodeManager tests fail
> --
>
> Key: HDFS-13416
> URL: https://issues.apache.org/jira/browse/HDFS-13416
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Priority: Major
>
> java.lang.IllegalArgumentException: Invalid UUID string: h0
> at java.util.UUID.fromString(UUID.java:194)
>  at 
> org.apache.hadoop.hdds.protocol.DatanodeDetails.&lt;init&gt;(DatanodeDetails.java:68)
>  at 
> org.apache.hadoop.hdds.protocol.DatanodeDetails.&lt;init&gt;(DatanodeDetails.java:36)
>  at 
> org.apache.hadoop.hdds.protocol.DatanodeDetails$Builder.build(DatanodeDetails.java:416)
>  at org.apache.hadoop.hdds.scm.TestUtils.getDatanodeDetails(TestUtils.java:95)
>  at org.apache.hadoop.hdds.scm.TestUtils.getDatanodeDetails(TestUtils.java:48)
>  at 
> org.apache.hadoop.hdds.scm.node.TestNodeManager.createNodeSet(TestNodeManager.java:719)
>  at 
> org.apache.hadoop.hdds.scm.node.TestNodeManager.testScmLogsHeartbeatFlooding(TestNodeManager.java:913)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>  at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
>  at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>  at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>  at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>  at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>  at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>  at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>  at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
>  at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>  at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
>  at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
>  at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
>  
> This is happening after this change HDFS-13300



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13354) Add config for min percentage of data nodes to come out of chill mode in SCM

2018-04-09 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-13354:
--
Summary: Add config for min percentage of data nodes to come out of chill 
mode in SCM  (was: Add config for min number of data nodes to come out of chill 
mode in SCM)

> Add config for min percentage of data nodes to come out of chill mode in SCM
> 
>
> Key: HDFS-13354
> URL: https://issues.apache.org/jira/browse/HDFS-13354
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13354-HDFS-7240.00.patch, 
> HDFS-13354-HDFS-7240.01.patch
>
>
> SCM currently comes out of chill mode as soon as one datanode reports in. We 
> need to support a percentage of known datanodes reporting in before SCM comes 
> out of chill mode.
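
A minimal sketch of the kind of check such a config could drive; the config key 
and class names below are hypothetical, not the actual patch:

{code:java}
class ChillModeSketch {
  // Hypothetical config key and default; the real key is defined in the patch.
  static final String MIN_DN_PERCENT_KEY =
      "ozone.scm.chillmode.min.datanode.percent";
  static final double MIN_DN_PERCENT_DEFAULT = 0.9;

  private final int knownDatanodes;
  private final double minPercent;

  ChillModeSketch(int knownDatanodes, double minPercent) {
    this.knownDatanodes = knownDatanodes;
    this.minPercent = minPercent;
  }

  // SCM leaves chill mode only once enough of the known datanodes report in.
  boolean canExitChillMode(int reportedDatanodes) {
    if (knownDatanodes == 0) {
      return true; // nothing to wait for
    }
    return (double) reportedDatanodes / knownDatanodes >= minPercent;
  }
}
{code}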



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13354) Add config for min number of data nodes to come out of chill mode in SCM

2018-04-09 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431117#comment-16431117
 ] 

Bharat Viswanadham edited comment on HDFS-13354 at 4/9/18 7:44 PM:
---

Attached patch v01 to use a percentage of datanodes.

 

[~nandakumar131] [~anu] [~elek] Could you help in reviewing these changes?


was (Author: bharatviswa):
Attached patch v01 to use a percentage of datanodes.

 

cc [~nandakumar131] [~anu] [~elek]

> Add config for min number of data nodes to come out of chill mode in SCM
> 
>
> Key: HDFS-13354
> URL: https://issues.apache.org/jira/browse/HDFS-13354
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13354-HDFS-7240.00.patch, 
> HDFS-13354-HDFS-7240.01.patch
>
>
> SCM currently comes out of chill mode as soon as one datanode reports in. We 
> need to support a percentage of known datanodes reporting in before SCM comes 
> out of chill mode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13354) Add config for min number of data nodes to come out of chill mode in SCM

2018-04-09 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431117#comment-16431117
 ] 

Bharat Viswanadham commented on HDFS-13354:
---

Attached patch v01 to use a percentage of datanodes.

 

cc [~nandakumar131] [~anu] [~elek]

> Add config for min number of data nodes to come out of chill mode in SCM
> 
>
> Key: HDFS-13354
> URL: https://issues.apache.org/jira/browse/HDFS-13354
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13354-HDFS-7240.00.patch, 
> HDFS-13354-HDFS-7240.01.patch
>
>
> SCM currently comes out of chill mode as soon as one datanode reports in. We 
> need to support a percentage of known datanodes reporting in before SCM comes 
> out of chill mode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13354) Add config for min number of data nodes to come out of chill mode in SCM

2018-04-09 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-13354:
--
Attachment: HDFS-13354-HDFS-7240.01.patch

> Add config for min number of data nodes to come out of chill mode in SCM
> 
>
> Key: HDFS-13354
> URL: https://issues.apache.org/jira/browse/HDFS-13354
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13354-HDFS-7240.00.patch, 
> HDFS-13354-HDFS-7240.01.patch
>
>
> SCM currently comes out of chill mode as soon as one datanode reports in. We 
> need to support a percentage of known datanodes reporting in before SCM comes 
> out of chill mode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13354) Add config for min number of data nodes to come out of chill mode in SCM

2018-04-09 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-13354:
--
Description: SCM currently comes out of chill mode as soon as one datanode 
reports in. We need to support a percentage of known datanodes reporting in 
before SCM comes out of chill mode.  (was: SCM will come out of ChillMode if 
one datanode reports in now. We need to support a number of known datanodes 
before SCM comes out of Chill Mode.)

> Add config for min number of data nodes to come out of chill mode in SCM
> 
>
> Key: HDFS-13354
> URL: https://issues.apache.org/jira/browse/HDFS-13354
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13354-HDFS-7240.00.patch
>
>
> SCM currently comes out of chill mode as soon as one datanode reports in. We 
> need to support a percentage of known datanodes reporting in before SCM comes 
> out of chill mode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13403) libhdfs++: Use hdfs::IoService object rather than asio::io_service

2018-04-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431030#comment-16431030
 ] 

genericqa commented on HDFS-13403:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
59m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 15m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 59s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m 
29s{color} | {color:green} hadoop-hdfs-native-client in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}125m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13403 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918183/HDFS-13403.000.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 725a072e8a3c 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ac32b35 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23840/testReport/ |
| Max. process+thread count | 361 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23840/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: Use hdfs::IoService object rather than asio::io_service
> --
>
> Key: HDFS-13403
> URL: https://issues.apache.org/jira/browse/HDFS-13403
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Critical
> Attachments: HDFS-13403.000.patch
>
>
> At the moment the hdfs::IoService is a simple wrapper over asio's io_service 
> object.  I'd like to make this smarter and have it do things like track which 
> tasks are queued, validate that dependencies of tasks exist, and monitor 
> ioservice throughput and contention.  In order 

[jira] [Commented] (HDFS-13376) Specify minimum GCC version to avoid TLS support error in Build of hadoop-hdfs-native-client

2018-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430999#comment-16430999
 ] 

Hudson commented on HDFS-13376:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13945 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13945/])
HDFS-13376. Specify minimum GCC version to avoid TLS support error in 
(james.clampffer: rev 905937678577fc0deb57489590863464562088ad)
* (edit) BUILDING.txt


> Specify minimum GCC version to avoid TLS support error in Build of 
> hadoop-hdfs-native-client
> 
>
> Key: HDFS-13376
> URL: https://issues.apache.org/jira/browse/HDFS-13376
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, documentation, native
>Affects Versions: 3.1.0
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDFS-13376.001.patch, HDFS-13376.002.patch
>
>
> mvn --projects hadoop-hdfs-project/hadoop-hdfs-native-client clean package 
> -Pdist,native -DskipTests -Dtar
> {noformat}
> [exec] CMake Error at main/native/libhdfspp/CMakeLists.txt:64 (message):
>  [exec]   FATAL ERROR: The required feature thread_local storage is not 
> supported by
>  [exec]   your compiler.  Known compilers that support this feature: GCC, 
> Visual
>  [exec]   Studio, Clang (community version), Clang (version for iOS 9 and 
> later).
>  [exec]
>  [exec]
>  [exec] -- Performing Test THREAD_LOCAL_SUPPORTED - Failed
>  [exec] -- Configuring incomplete, errors occurred!
> {noformat}
> My environment:
> Linux: Red Hat 4.4.7-3
> cmake: 3.8.2
> java: 1.8.0_131
> gcc: 4.4.7
> maven: 3.5.0
> This seems to be caused by the old gcc version; I will report back after 
> confirming it.
> Maybe {{BUILDING.txt}} needs an update to state the lowest supported gcc 
> version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13384) RBF: Improve timeout RPC call mechanism

2018-04-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430977#comment-16430977
 ] 

genericqa commented on HDFS-13384:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 32s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m  
7s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13384 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918192/HDFS-13384.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ee442c7df5e7 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ac32b35 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23843/testReport/ |
| Max. process+thread count | 1001 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23843/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RBF: Improve timeout RPC call mechanism
> ---
>
> Key: HDFS-13384
> URL: 

[jira] [Updated] (HDFS-13376) Specify minimum GCC version to avoid TLS support error in Build of hadoop-hdfs-native-client

2018-04-09 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-13376:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

I've committed this to trunk.  Thanks for your contribution [~GeLiXin]!

> Specify minimum GCC version to avoid TLS support error in Build of 
> hadoop-hdfs-native-client
> 
>
> Key: HDFS-13376
> URL: https://issues.apache.org/jira/browse/HDFS-13376
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, documentation, native
>Affects Versions: 3.1.0
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDFS-13376.001.patch, HDFS-13376.002.patch
>
>
> mvn --projects hadoop-hdfs-project/hadoop-hdfs-native-client clean package 
> -Pdist,native -DskipTests -Dtar
> {noformat}
> [exec] CMake Error at main/native/libhdfspp/CMakeLists.txt:64 (message):
>  [exec]   FATAL ERROR: The required feature thread_local storage is not 
> supported by
>  [exec]   your compiler.  Known compilers that support this feature: GCC, 
> Visual
>  [exec]   Studio, Clang (community version), Clang (version for iOS 9 and 
> later).
>  [exec]
>  [exec]
>  [exec] -- Performing Test THREAD_LOCAL_SUPPORTED - Failed
>  [exec] -- Configuring incomplete, errors occurred!
> {noformat}
> My environment:
> Linux: Red Hat 4.4.7-3
> cmake: 3.8.2
> java: 1.8.0_131
> gcc: 4.4.7
> maven: 3.5.0
> This seems to be caused by the old gcc version; I will report back after 
> confirming it.
> Maybe {{BUILDING.txt}} needs an update to state the lowest supported gcc 
> version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13328) Abstract ReencryptionHandler recursive logic in separate class.

2018-04-09 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430946#comment-16430946
 ] 

Xiao Chen commented on HDFS-13328:
--

+1. Thanks Rakesh and Surendra!

> Abstract ReencryptionHandler recursive logic in separate class.
> ---
>
> Key: HDFS-13328
> URL: https://issues.apache.org/jira/browse/HDFS-13328
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-13328-01.patch, HDFS-13328-02.patch, 
> HDFS-13328-03.patch, HDFS-13328-04.patch
>
>
> HDFS-10899 added DFS logic to scan a directory. It would be good to abstract 
> this logic into a separate class so it can be reused by other features such as 
> SPS (HDFS-10285). I already tried abstracting the DFS logic in HDFS-12291, and 
> the same can be pushed to trunk.
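
A rough sketch of what such an abstraction could look like: the recursive walk 
lives in a reusable base class and each feature supplies only the per-item 
processing. All names below are hypothetical:

{code:java}
import java.io.IOException;
import java.util.List;

// Hypothetical shape of the abstracted directory-scan logic: the recursive
// traversal lives in the base class; subclasses supply the per-item work.
abstract class FSTreeTraverserSketch<T> {
  protected abstract List<T> listChildren(T dir) throws IOException;
  protected abstract boolean isDirectory(T item);
  protected abstract void processItem(T item) throws IOException;

  public void traverse(T root) throws IOException {
    processItem(root);
    if (isDirectory(root)) {
      for (T child : listChildren(root)) {
        traverse(child); // depth-first walk over the namespace subtree
      }
    }
  }
}
{code}

Re-encryption and SPS would then each extend this class with their own 
{{processItem}} implementation instead of duplicating the walk.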



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13410) RBF: Support federation with no subclusters

2018-04-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430942#comment-16430942
 ] 

genericqa commented on HDFS-13410:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 
52s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13410 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918184/HDFS-13410.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b0c3eda60a6a 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ac32b35 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23841/testReport/ |
| Max. process+thread count | 953 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23841/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RBF: Support federation with no subclusters
> ---
>
> Key: HDFS-13410
> URL: 

[jira] [Commented] (HDFS-13399) Make Client field AlignmentContext non-static.

2018-04-09 Thread Plamen Jeliazkov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430941#comment-16430941
 ] 

Plamen Jeliazkov commented on HDFS-13399:
-

Thanks [~xkrogen] – I've taken your changes into a new patch.

I have also added a new unit test, {{TestStateAlignmentContextWithHA}}, to 
showcase DFSClient retaining its AlignmentContext across NameNode failover 
transitions.

If anyone has suggestions on how to provide additional tests or enhance 
existing ones, I am open to them.

I have also addressed the whitespace and checkstyle issues. Some checkstyle 
warnings could not be addressed because method overloading creates new methods 
that have more than 7 parameters.

> Make Client field AlignmentContext non-static.
> --
>
> Key: HDFS-13399
> URL: https://issues.apache.org/jira/browse/HDFS-13399
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-12943
>Reporter: Plamen Jeliazkov
>Assignee: Plamen Jeliazkov
>Priority: Major
> Attachments: HDFS-13399-HDFS-12943.000.patch, 
> HDFS-13399-HDFS-12943.001.patch
>
>
> In HDFS-12977, DFSClient's constructor was altered to make use of a new 
> static method in Client that allowed one to set an AlignmentContext. This 
> work is to remove that static field and make each DFSClient pass its 
> AlignmentContext down to the proxy Call level.
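
A minimal sketch of the direction described above, with the context held as a 
per-instance field that is handed to each call rather than stored in a 
process-wide static; the types below are simplified stand-ins for the real 
Client/Call plumbing:

{code:java}
// Simplified stand-ins; only the static-vs-instance wiring is illustrated.
interface AlignmentContext {
  void updateRequestState(long stateId);
}

class CallSketch {
  private final AlignmentContext context; // carried per call, not per process

  CallSketch(AlignmentContext context) {
    this.context = context;
  }
}

class ClientSketch {
  // Instance field instead of the previous static one, so two clients in the
  // same JVM can track different namespace state ids independently.
  private final AlignmentContext context;

  ClientSketch(AlignmentContext context) {
    this.context = context;
  }

  CallSketch createCall() {
    return new CallSketch(context); // pass the client's context down to the call
  }
}
{code}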



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13399) Make Client field AlignmentContext non-static.

2018-04-09 Thread Plamen Jeliazkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Plamen Jeliazkov updated HDFS-13399:

Attachment: HDFS-13399-HDFS-12943.001.patch

> Make Client field AlignmentContext non-static.
> --
>
> Key: HDFS-13399
> URL: https://issues.apache.org/jira/browse/HDFS-13399
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-12943
>Reporter: Plamen Jeliazkov
>Assignee: Plamen Jeliazkov
>Priority: Major
> Attachments: HDFS-13399-HDFS-12943.000.patch, 
> HDFS-13399-HDFS-12943.001.patch
>
>
> In HDFS-12977, DFSClient's constructor was altered to make use of a new 
> static method in Client that allowed one to set an AlignmentContext. This 
> work is to remove that static field and make each DFSClient pass its 
> AlignmentContext down to the proxy Call level.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13376) Specify minimum GCC version to avoid TLS support error in Build of hadoop-hdfs-native-client

2018-04-09 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-13376:
---
Summary: Specify minimum GCC version to avoid TLS support error in Build of 
hadoop-hdfs-native-client  (was: TLS support error in Native Build of 
hadoop-hdfs-native-client)

> Specify minimum GCC version to avoid TLS support error in Build of 
> hadoop-hdfs-native-client
> 
>
> Key: HDFS-13376
> URL: https://issues.apache.org/jira/browse/HDFS-13376
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, documentation, native
>Affects Versions: 3.1.0
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDFS-13376.001.patch, HDFS-13376.002.patch
>
>
> mvn --projects hadoop-hdfs-project/hadoop-hdfs-native-client clean package 
> -Pdist,native -DskipTests -Dtar
> {noformat}
> [exec] CMake Error at main/native/libhdfspp/CMakeLists.txt:64 (message):
>  [exec]   FATAL ERROR: The required feature thread_local storage is not 
> supported by
>  [exec]   your compiler.  Known compilers that support this feature: GCC, 
> Visual
>  [exec]   Studio, Clang (community version), Clang (version for iOS 9 and 
> later).
>  [exec]
>  [exec]
>  [exec] -- Performing Test THREAD_LOCAL_SUPPORTED - Failed
>  [exec] -- Configuring incomplete, errors occurred!
> {noformat}
> My environment:
> Linux: Red Hat 4.4.7-3
> cmake: 3.8.2
> java: 1.8.0_131
> gcc: 4.4.7
> maven: 3.5.0
> This seems to be caused by the old gcc version; I will report back after 
> confirming it.
> Maybe {{BUILDING.txt}} needs an update to state the lowest supported gcc 
> version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13380) RBF: mv/rm fail after the directory exceeded the quota limit

2018-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430937#comment-16430937
 ] 

Hudson commented on HDFS-13380:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13944 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13944/])
HDFS-13380. RBF: mv/rm fail after the directory exceeded the quota (inigoiri: 
rev e9b9f48dad5ebb58ee529f918723089e8356c480)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterQuota.java


> RBF: mv/rm fail after the directory exceeded the quota limit
> 
>
> Key: HDFS-13380
> URL: https://issues.apache.org/jira/browse/HDFS-13380
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Weiwei Wu
>Assignee: Yiqun Lin
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.0.4
>
> Attachments: HDFS-13380.001.patch, HDFS-13380.002.patch
>
>
> It always fails when I try to mv/rm a directory which has exceeded the 
> quota limit.
> {code:java}
> [hadp@hadoop]$ hdfs dfsrouteradmin -ls
> Mount Table Entries:
> Source Destinations Owner Group Mode Quota/Usage
> /ns10t ns10->/ns10t hadp hadp rwxr-xr-x [NsQuota: 1200/1201, SsQuota: -/-]
> [hadp@hadoop]$ hdfs dfs -rm hdfs://ns-fed/ns10t/ns1mountpoint/aa.99
> rm: Failed to move to trash: hdfs://ns-fed/ns10t/ns1mountpoint/aa.99: 
> The NameSpace quota (directories and files) is exceeded: quota=1200 file 
> count=1201
> [hadp@hadoop]$ hdfs dfs -rm -skipTrash 
> hdfs://ns-fed/ns10t/ns1mountpoint/aa.99
> rm: The NameSpace quota (directories and files) is exceeded: quota=1200 file 
> count=1201
> {code}
> I think we should add a parameter to the method *getLocationsForPath* to 
> determine whether we need to perform quota verification for the operation, 
> for example for the mv src directory and the rm directory.
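A hedged sketch of that idea (the signature, field, and helper names below are illustrative, not necessarily what the final patch does):

{code:java}
// Hypothetical shape of the fix: operations that must succeed on an
// over-quota tree (rm, the mv source) opt out of the quota check.
protected List<RemoteLocation> getLocationsForPath(
    String path, boolean failIfLocked, boolean needQuotaVerify)
    throws IOException {
  if (isQuotaEnabled() && needQuotaVerify) {
    verifyQuota(path); // assumed helper throwing NSQuotaExceededException
  }
  return resolveLocations(path, failIfLocked); // assumed existing resolution
}

// At the rename/delete call sites, quota verification would be skipped:
// List<RemoteLocation> srcLocations = getLocationsForPath(src, true, false);
{code}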



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13380) RBF: mv/rm fail after the directory exceeded the quota limit

2018-04-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13380:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.4
   3.2.0
   2.10.0
   Status: Resolved  (was: Patch Available)

> RBF: mv/rm fail after the directory exceeded the quota limit
> 
>
> Key: HDFS-13380
> URL: https://issues.apache.org/jira/browse/HDFS-13380
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Weiwei Wu
>Assignee: Yiqun Lin
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.0.4
>
> Attachments: HDFS-13380.001.patch, HDFS-13380.002.patch
>
>
> It always fails when I try to mv/rm a directory which has exceeded the 
> quota limit.
> {code:java}
> [hadp@hadoop]$ hdfs dfsrouteradmin -ls
> Mount Table Entries:
> Source Destinations Owner Group Mode Quota/Usage
> /ns10t ns10->/ns10t hadp hadp rwxr-xr-x [NsQuota: 1200/1201, SsQuota: -/-]
> [hadp@hadoop]$ hdfs dfs -rm hdfs://ns-fed/ns10t/ns1mountpoint/aa.99
> rm: Failed to move to trash: hdfs://ns-fed/ns10t/ns1mountpoint/aa.99: 
> The NameSpace quota (directories and files) is exceeded: quota=1200 file 
> count=1201
> [hadp@hadoop]$ hdfs dfs -rm -skipTrash 
> hdfs://ns-fed/ns10t/ns1mountpoint/aa.99
> rm: The NameSpace quota (directories and files) is exceeded: quota=1200 file 
> count=1201
> {code}
> I think we should add a parameter to the method *getLocationsForPath* to 
> determine whether we need to perform quota verification for the operation, 
> for example for the mv src directory and the rm directory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13380) RBF: mv/rm fail after the directory exceeded the quota limit

2018-04-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430881#comment-16430881
 ] 

Íñigo Goiri commented on HDFS-13380:


Thanks [~wuweiwei] for testing and reviewing.
+1
Committing  [^HDFS-13380.002.patch] .

Thanks [~linyiqun] for working on this.

> RBF: mv/rm fail after the directory exceeded the quota limit
> 
>
> Key: HDFS-13380
> URL: https://issues.apache.org/jira/browse/HDFS-13380
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Weiwei Wu
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDFS-13380.001.patch, HDFS-13380.002.patch
>
>
> It always fails when I try to mv/rm a directory which has exceeded the 
> quota limit.
> {code:java}
> [hadp@hadoop]$ hdfs dfsrouteradmin -ls
> Mount Table Entries:
> Source Destinations Owner Group Mode Quota/Usage
> /ns10t ns10->/ns10t hadp hadp rwxr-xr-x [NsQuota: 1200/1201, SsQuota: -/-]
> [hadp@hadoop]$ hdfs dfs -rm hdfs://ns-fed/ns10t/ns1mountpoint/aa.99
> rm: Failed to move to trash: hdfs://ns-fed/ns10t/ns1mountpoint/aa.99: 
> The NameSpace quota (directories and files) is exceeded: quota=1200 file 
> count=1201
> [hadp@hadoop]$ hdfs dfs -rm -skipTrash 
> hdfs://ns-fed/ns10t/ns1mountpoint/aa.99
> rm: The NameSpace quota (directories and files) is exceeded: quota=1200 file 
> count=1201
> {code}
> I think we should add a parameter to the method *getLocationsForPath* to 
> determine whether we need to perform quota verification for the operation, 
> for example for the mv src directory and the rm directory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13386) RBF: Wrong date information in list file(-ls) result

2018-04-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430879#comment-16430879
 ] 

Íñigo Goiri commented on HDFS-13386:


The unit test seems to run successfully in less than 0.2 seconds, so we are good 
here.
A couple of minor style comments on [^HDFS-13386-004.patch]:
* Use Time.now() for {{beforeCreatingTIme}}
* Capitalization typo in {{beforeCreatingTIme}}
* {{requiredPaths}} should be named something like {{pathModTime}}.
* I'm not sure it's needed to do a listStatus and a getPartialListing; shouldn't 
we be able to check the dates directly?
{code}
// Match date/time for each path returned
for (HdfsFileStatus f : listing.getPartialListing()) {
  String currentFile = f.getFullPath(new Path("/")).getName();
  long modTime = f.getModificationTime();

  assertEquals(currentFile, fileName);
  assertTrue(modTime > t0);
}
{code}
We could also check that the number of entries is the expected one.
Right now it looks a little bit complicated for what we want to check, which is 
basically that the time of the new files/folders/mount table entries is bigger 
than the initial time.
We could even just check that it is smaller than, say, t0 + 10 seconds and put a 
timeout of 10 seconds.
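A minimal sketch of that simpler check (reusing the hypothetical {{t0}}, {{listing}} and {{expectedEntries}} names; just an illustration of the suggestion above, not the patch):

{code:java}
// Every returned entry should be stamped between the time recorded before
// creation (t0) and a generous 10-second upper bound.
long deadline = t0 + 10_000L;
HdfsFileStatus[] entries = listing.getPartialListing();
assertEquals(expectedEntries, entries.length);
for (HdfsFileStatus f : entries) {
  long modTime = f.getModificationTime();
  assertTrue("mod time before the test started", modTime >= t0);
  assertTrue("mod time too far in the future", modTime <= deadline);
}
{code}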

> RBF: Wrong date information in list file(-ls) result
> 
>
> Key: HDFS-13386
> URL: https://issues.apache.org/jira/browse/HDFS-13386
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Minor
> Attachments: HDFS-13386-002.patch, HDFS-13386-003.patch, 
> HDFS-13386-004.patch, HDFS-13386.000.patch, HDFS-13386.001.patch, 
> image-2018-04-03-11-59-51-623.png
>
>
> # hdfs dfs -ls 
> !image-2018-04-03-11-59-51-623.png!
> This is happening because getMountPointDates is not implemented:
> {code:java}
> private Map<String, Long> getMountPointDates(String path) {
>   Map<String, Long> ret = new TreeMap<>();
>   // TODO add when we have a Mount Table
>   return ret;
> }
> {code}
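For context, a hedged sketch of how the missing method could consult the mount table (the {{MountTableResolver}} usage and the helper below are assumptions based on the RBF code layout, not the committed patch):

{code:java}
// Hedged sketch only: resolve the child mount points under 'path' and
// expose each entry's stored modification time, keyed by child name.
private Map<String, Long> getMountPointDates(String path) {
  Map<String, Long> ret = new TreeMap<>();
  if (subclusterResolver instanceof MountTableResolver) {    // assumed field
    MountTableResolver mountTable = (MountTableResolver) subclusterResolver;
    List<String> children = mountTable.getMountPoints(path); // assumed API
    if (children != null) {
      for (String child : children) {
        MountTable entry = getMountEntry(path, child);       // hypothetical helper
        if (entry != null) {
          ret.put(child, entry.getDateModified());           // assumed getter
        }
      }
    }
  }
  return ret;
}
{code}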



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13384) RBF: Improve timeout RPC call mechanism

2018-04-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430856#comment-16430856
 ] 

Íñigo Goiri commented on HDFS-13384:


Thanks [~linyiqun] for the comments.
I went through the unit test in  [^HDFS-13384.004.patch] and I think now it's 
much cleaner.

> RBF: Improve timeout RPC call mechanism
> ---
>
> Key: HDFS-13384
> URL: https://issues.apache.org/jira/browse/HDFS-13384
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-13384.000.patch, HDFS-13384.001.patch, 
> HDFS-13384.002.patch, HDFS-13384.003.patch, HDFS-13384.004.patch
>
>
> When issuing RPC requests to subclusters, we have a timeout mechanism 
> introduced in HDFS-12273. We need to improve how this is handled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13384) RBF: Improve timeout RPC call mechanism

2018-04-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13384:
---
Attachment: HDFS-13384.004.patch

> RBF: Improve timeout RPC call mechanism
> ---
>
> Key: HDFS-13384
> URL: https://issues.apache.org/jira/browse/HDFS-13384
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-13384.000.patch, HDFS-13384.001.patch, 
> HDFS-13384.002.patch, HDFS-13384.003.patch, HDFS-13384.004.patch
>
>
> When issuing RPC requests to subclusters, we have a timeout mechanism 
> introduced in HDFS-12273. We need to improve how this is handled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13056) Expose file-level composite CRCs in HDFS which are comparable across different instances/layouts

2018-04-09 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428527#comment-16428527
 ] 

Xiao Chen edited comment on HDFS-13056 at 4/9/18 4:42 PM:
--

Casting my official +1 on this, will let it float for a few days in case Steve 
or other watchers want to review. Will commit on Tuesday if no further comments.

[~dennishuo], please make sure to consider Steve's comment about DFSClient in 
the webhdfs subtask, to deprecate methods instead of simply removing them.


was (Author: xiaochen):
Casting my official +1 on this, will let it float for a few days in case Steve 
or other watchers want to review. Will commit on Tuesday if further comments.

[~dennishuo], please make sure to consider Steve's comment about DFSClient in 
the webhdfs subtask, to deprecate methods instead of simply remove.

> Expose file-level composite CRCs in HDFS which are comparable across 
> different instances/layouts
> 
>
> Key: HDFS-13056
> URL: https://issues.apache.org/jira/browse/HDFS-13056
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, distcp, erasure-coding, federation, hdfs
>Affects Versions: 3.0.0
>Reporter: Dennis Huo
>Assignee: Dennis Huo
>Priority: Major
> Attachments: HDFS-13056-branch-2.8.001.patch, 
> HDFS-13056-branch-2.8.002.patch, HDFS-13056-branch-2.8.003.patch, 
> HDFS-13056-branch-2.8.004.patch, HDFS-13056-branch-2.8.005.patch, 
> HDFS-13056-branch-2.8.poc1.patch, HDFS-13056.001.patch, HDFS-13056.002.patch, 
> HDFS-13056.003.patch, HDFS-13056.003.patch, HDFS-13056.004.patch, 
> HDFS-13056.005.patch, HDFS-13056.006.patch, HDFS-13056.007.patch, 
> HDFS-13056.008.patch, HDFS-13056.009.patch, HDFS-13056.010.patch, 
> HDFS-13056.011.patch, HDFS-13056.012.patch, HDFS-13056.013.patch, 
> HDFS-13056.014.patch, Reference_only_zhen_PPOC_hadoop2.6.X.diff, 
> hdfs-file-composite-crc32-v1.pdf, hdfs-file-composite-crc32-v2.pdf, 
> hdfs-file-composite-crc32-v3.pdf
>
>
> FileChecksum was first introduced in 
> [https://issues-test.apache.org/jira/browse/HADOOP-3981] and ever since then 
> has remained defined as MD5-of-MD5-of-CRC, where per-512-byte chunk CRCs are 
> already stored as part of datanode metadata, and the MD5 approach is used to 
> compute an aggregate value in a distributed manner, with individual datanodes 
> computing the MD5-of-CRCs per-block in parallel, and the HDFS client 
> computing the second-level MD5.
>  
> A shortcoming of this approach which is often brought up is the fact that 
> this FileChecksum is sensitive to the internal block-size and chunk-size 
> configuration, and thus different HDFS files with different block/chunk 
> settings cannot be compared. More commonly, one might have different HDFS 
> clusters which use different block sizes, in which case any data migration 
> won't be able to use the FileChecksum for distcp's rsync functionality or for 
> verifying end-to-end data integrity (on top of low-level data integrity 
> checks applied at data transfer time).
>  
> This was also revisited in https://issues.apache.org/jira/browse/HDFS-8430 
> during the addition of checksum support for striped erasure-coded files; 
> while there was some discussion of using CRC composability, it still 
> ultimately settled on the hierarchical MD5 approach, which also adds the problem 
> that checksums of basic replicated files are not comparable to striped files.
>  
> This feature proposes to add a "COMPOSITE-CRC" FileChecksum type which uses 
> CRC composition to remain completely chunk/block agnostic, and allows 
> comparison between striped vs replicated files, between different HDFS 
> instances, and possibly even between HDFS and other external storage systems. 
> This feature can also be added in-place to be compatible with existing block 
> metadata, and doesn't need to change the normal path of chunk verification, 
> so is minimally invasive. This also means even large preexisting HDFS 
> deployments could adopt this feature to retroactively sync data. A detailed 
> design document can be found here: 
> https://storage.googleapis.com/dennishuo/hdfs-file-composite-crc32-v1.pdf
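The composition property this relies on (the CRC of a concatenation is computable from the per-part CRCs plus the trailing part's length) can be demonstrated with stock {{java.util.zip.CRC32}}. The combine routine below is the standard zlib matrix trick, shown purely to illustrate why a composite CRC can stay chunk/block agnostic; it is not the patch's code:

{code:java}
import java.util.zip.CRC32;

// Illustration only: CRC32 composition as in zlib's crc32_combine. The CRC
// of A||B is derived from crc(A), crc(B) and len(B) alone, which is why
// per-block CRCs can be folded into a layout-agnostic file-level value.
public class Crc32CombineDemo {

  private static long gf2MatrixTimes(long[] mat, long vec) {
    long sum = 0;
    for (int i = 0; vec != 0; vec >>>= 1, i++) {
      if ((vec & 1) != 0) {
        sum ^= mat[i];
      }
    }
    return sum;
  }

  private static void gf2MatrixSquare(long[] square, long[] mat) {
    for (int n = 0; n < 32; n++) {
      square[n] = gf2MatrixTimes(mat, mat[n]);
    }
  }

  /** Combine crc1 (over part A) and crc2 (over part B) into CRC32 of A||B. */
  public static long crc32Combine(long crc1, long crc2, long len2) {
    if (len2 <= 0) {
      return crc1;
    }
    long[] even = new long[32]; // operator for an even power-of-two zeros
    long[] odd = new long[32];  // operator for an odd power-of-two zeros
    odd[0] = 0xedb88320L;       // reflected CRC-32 polynomial: one zero bit
    long row = 1;
    for (int n = 1; n < 32; n++) {
      odd[n] = row;
      row <<= 1;
    }
    gf2MatrixSquare(even, odd); // two zero bits
    gf2MatrixSquare(odd, even); // four zero bits
    do { // append len2 zero bytes to crc1, one length bit at a time
      gf2MatrixSquare(even, odd);
      if ((len2 & 1) != 0) {
        crc1 = gf2MatrixTimes(even, crc1);
      }
      len2 >>= 1;
      if (len2 == 0) {
        break;
      }
      gf2MatrixSquare(odd, even);
      if ((len2 & 1) != 0) {
        crc1 = gf2MatrixTimes(odd, crc1);
      }
      len2 >>= 1;
    } while (len2 != 0);
    return crc1 ^ crc2;
  }

  public static void main(String[] args) {
    byte[] a = "block-one-".getBytes();
    byte[] b = "block-two".getBytes();
    CRC32 ca = new CRC32(); ca.update(a);
    CRC32 cb = new CRC32(); cb.update(b);
    CRC32 cab = new CRC32(); cab.update(a); cab.update(b);
    long combined = crc32Combine(ca.getValue(), cb.getValue(), b.length);
    System.out.println(combined == cab.getValue()); // prints: true
  }
}
{code}

Folding per-block CRCs left to right with this combine yields the same value no matter where the block boundaries fall.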



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13056) Expose file-level composite CRCs in HDFS which are comparable across different instances/layouts

2018-04-09 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430851#comment-16430851
 ] 

Xiao Chen commented on HDFS-13056:
--

Thanks [~ste...@apache.org]. I think we're on the same page: understanding 
their need, making use of Hadoop APIs should follow HADOOP-12805's practice. 
This would make the official 'contract'. :)

> Expose file-level composite CRCs in HDFS which are comparable across 
> different instances/layouts
> 
>
> Key: HDFS-13056
> URL: https://issues.apache.org/jira/browse/HDFS-13056
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, distcp, erasure-coding, federation, hdfs
>Affects Versions: 3.0.0
>Reporter: Dennis Huo
>Assignee: Dennis Huo
>Priority: Major
> Attachments: HDFS-13056-branch-2.8.001.patch, 
> HDFS-13056-branch-2.8.002.patch, HDFS-13056-branch-2.8.003.patch, 
> HDFS-13056-branch-2.8.004.patch, HDFS-13056-branch-2.8.005.patch, 
> HDFS-13056-branch-2.8.poc1.patch, HDFS-13056.001.patch, HDFS-13056.002.patch, 
> HDFS-13056.003.patch, HDFS-13056.003.patch, HDFS-13056.004.patch, 
> HDFS-13056.005.patch, HDFS-13056.006.patch, HDFS-13056.007.patch, 
> HDFS-13056.008.patch, HDFS-13056.009.patch, HDFS-13056.010.patch, 
> HDFS-13056.011.patch, HDFS-13056.012.patch, HDFS-13056.013.patch, 
> HDFS-13056.014.patch, Reference_only_zhen_PPOC_hadoop2.6.X.diff, 
> hdfs-file-composite-crc32-v1.pdf, hdfs-file-composite-crc32-v2.pdf, 
> hdfs-file-composite-crc32-v3.pdf
>
>
> FileChecksum was first introduced in 
> [https://issues-test.apache.org/jira/browse/HADOOP-3981] and ever since then 
> has remained defined as MD5-of-MD5-of-CRC, where per-512-byte chunk CRCs are 
> already stored as part of datanode metadata, and the MD5 approach is used to 
> compute an aggregate value in a distributed manner, with individual datanodes 
> computing the MD5-of-CRCs per-block in parallel, and the HDFS client 
> computing the second-level MD5.
>  
> A shortcoming of this approach which is often brought up is the fact that 
> this FileChecksum is sensitive to the internal block-size and chunk-size 
> configuration, and thus different HDFS files with different block/chunk 
> settings cannot be compared. More commonly, one might have different HDFS 
> clusters which use different block sizes, in which case any data migration 
> won't be able to use the FileChecksum for distcp's rsync functionality or for 
> verifying end-to-end data integrity (on top of low-level data integrity 
> checks applied at data transfer time).
>  
> This was also revisited in https://issues.apache.org/jira/browse/HDFS-8430 
> during the addition of checksum support for striped erasure-coded files; 
> while there was some discussion of using CRC composability, it still 
> ultimately settled on the hierarchical MD5 approach, which also adds the problem 
> that checksums of basic replicated files are not comparable to striped files.
>  
> This feature proposes to add a "COMPOSITE-CRC" FileChecksum type which uses 
> CRC composition to remain completely chunk/block agnostic, and allows 
> comparison between striped vs replicated files, between different HDFS 
> instances, and possibly even between HDFS and other external storage systems. 
> This feature can also be added in-place to be compatible with existing block 
> metadata, and doesn't need to change the normal path of chunk verification, 
> so is minimally invasive. This also means even large preexisting HDFS 
> deployments could adopt this feature to retroactively sync data. A detailed 
> design document can be found here: 
> https://storage.googleapis.com/dennishuo/hdfs-file-composite-crc32-v1.pdf



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430849#comment-16430849
 ] 

Hudson commented on HDFS-13388:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13943 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13943/])
HDFS-13388. RequestHedgingProxyProvider calls multiple configured NNs 
(inigoiri: rev ac32b3576da4cc463dff85118163ccfff02215fc)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRequestHedgingProxyProvider.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RequestHedgingProxyProvider.java


> RequestHedgingProxyProvider calls multiple configured NNs all the time
> --
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: HADOOP-13388.0001.patch, HADOOP-13388.0002.patch, 
> HADOOP-13388.0003.patch, HADOOP-13388.0004.patch, HADOOP-13388.0005.patch, 
> HADOOP-13388.0006.patch
>
>
> In HDFS-7858 RequestHedgingProxyProvider was designed to "first 
> simultaneously call multiple configured NNs to decide which is the active 
> Namenode and then for subsequent calls it will invoke the previously 
> successful NN." But the current code calls multiple configured NNs every 
> time, even when we have already got the successful NN. 
>  That's because in RetryInvocationHandler.java, ProxyDescriptor's member 
> proxyInfo is assigned only when it is constructed or when failover occurs. 
> RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the 
> only proxy we can get is always a dynamic proxy handled by 
> RequestHedgingInvocationHandler.class. RequestHedgingInvocationHandler.class 
> handles the invoked method by calling multiple configured NNs.
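The intended behavior can be sketched with a plain {{java.lang.reflect.Proxy}} invocation handler (a simplified stand-in, not the actual RequestHedgingProxyProvider code): fan out only while no winner is known, then stick to the cached winner.

{code:java}
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.util.List;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Simplified stand-in for the hedging pattern described above.
class HedgingHandler<T> implements InvocationHandler {
  private final List<T> targets; // assumed non-empty
  private final ExecutorService pool = Executors.newCachedThreadPool();
  private volatile T winner; // the previously successful target, if any

  HedgingHandler(List<T> targets) {
    this.targets = targets;
  }

  @Override
  public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
    T current = winner;
    if (current != null) {
      return method.invoke(current, args); // no fan-out once a winner is known
    }
    CompletionService<Object> cs = new ExecutorCompletionService<>(pool);
    for (T target : targets) {
      cs.submit(() -> {
        Object result = method.invoke(target, args);
        winner = target; // remember who answered successfully
        return result;
      });
    }
    Throwable lastError = null;
    for (int i = 0; i < targets.size(); i++) { // take the first success
      try {
        return cs.take().get();
      } catch (ExecutionException e) {
        lastError = e.getCause();
      }
    }
    throw lastError; // all targets failed
  }
}
{code}

A {{Proxy.newProxyInstance}} call would hand out a T backed by this handler; the bug described above corresponds to {{winner}} never being consulted, so every call takes the fan-out path.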



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13415) Ozone: Remove cblock code from HDFS-7240 (move to a different branch)

2018-04-09 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-13415:

Status: Patch Available  (was: Open)

After the subproject refactor it's almost trivial. I deleted the subprojects 
and removed the references in the pom.xml files and the dist-layout stitching.

Acceptance tests pass:

{code}
cd hadoop-ozone/acceptance-test
mvn clean integration-test -Pozone-acceptance-test,dist -DskipTests

==
Acceptance
==
Acceptance.Ozone :: Smoke test to start cluster with docker-compose environ...
==
Daemons are running without error | PASS |
--
Check if datanode is connected to the scm | PASS |
--
Scale it up to 5 datanodes| PASS |
--
Test rest interface   | PASS |
--
Test ozone cli| PASS |
--
Check webui static resources  | PASS |
--
Start freon testing   | PASS |
--
Acceptance.Ozone :: Smoke test to start cluster with docker-compos... | PASS |
7 critical tests, 7 passed, 0 failed
7 tests total, 7 passed, 0 failed
==
Acceptance| PASS |
7 critical tests, 7 passed, 0 failed
7 tests total, 7 passed, 0 failed
==
Output:  
/home/elek/projects/hadoop/hadoop-ozone/acceptance-test/target/robotframework-reports/output.xml
XUnit:   
/home/elek/projects/hadoop/hadoop-ozone/acceptance-test/target/robotframework-reports/TEST-acceptance.xml
Log: 
/home/elek/projects/hadoop/hadoop-ozone/acceptance-test/target/robotframework-reports/log.html
Report:  
/home/elek/projects/hadoop/hadoop-ozone/acceptance-test/target/robotframework-reports/report.html
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 03:33 min
[INFO] Finished at: 2018-04-09T18:26:26+02:00
[INFO] Final Memory: 57M/389M
[INFO] 
{code}

> Ozone: Remove cblock code from HDFS-7240 (move to a different branch)
> -
>
> Key: HDFS-13415
> URL: https://issues.apache.org/jira/browse/HDFS-13415
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Minor
> Attachments: HDFS-13415-HDFS-7240.001.patch
>
>
> The Ozone components (hdds/ozone) and the cblock components (cblock) have 
> different stability. We suggest separating the development: Ozone could 
> remain on HDFS-7240 and could be merged to trunk as voted by the 
> community.
> The cblock development could be kept on a separate feature branch 
> (HDFS-8) and could be developed and merged independently.
> To achieve this we 
>  1. need to remove the cblock code from the HDFS-7240 branch (this is what 
> this jira is about)
>  2. need to create a new branch from the latest HDFS-7240 which contains the 
> cblock server (not in this jira)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13415) Ozone: Remove cblock code from HDFS-7240 (move to a different branch)

2018-04-09 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-13415:

Attachment: HDFS-13415-HDFS-7240.001.patch

> Ozone: Remove cblock code from HDFS-7240 (move to a different branch)
> -
>
> Key: HDFS-13415
> URL: https://issues.apache.org/jira/browse/HDFS-13415
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Minor
> Attachments: HDFS-13415-HDFS-7240.001.patch
>
>
> The Ozone components (hdds/ozone) and the cblock components (cblock) have 
> different stability. We suggest separating the development: Ozone could 
> remain on HDFS-7240 and could be merged to trunk as voted by the 
> community.
> The cblock development could be kept on a separate feature branch 
> (HDFS-8) and could be developed and merged independently.
> To achieve this we 
>  1. need to remove the cblock code from the HDFS-7240 branch (this is what 
> this jira is about)
>  2. need to create a new branch from the latest HDFS-7240 which contains the 
> cblock server (not in this jira)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13415) Ozone: Remove cblock code from HDFS-7240 (move to a different branch)

2018-04-09 Thread Elek, Marton (JIRA)
Elek, Marton created HDFS-13415:
---

 Summary: Ozone: Remove cblock code from HDFS-7240 (move to a 
different branch)
 Key: HDFS-13415
 URL: https://issues.apache.org/jira/browse/HDFS-13415
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Elek, Marton
Assignee: Elek, Marton


The Ozone components (hdds/ozone) and the cblock components (cblock) have 
different stability. We suggest separating the development: Ozone could remain 
on HDFS-7240 and could be merged to trunk as voted by the community.

The cblock development could be kept on a separate feature branch (HDFS-8) 
and could be developed and merged independently.

To achieve this we 

 1. need to remove the cblock code from the HDFS-7240 branch (this is what 
this jira is about)
 2. need to create a new branch from the latest HDFS-7240 which contains the 
cblock server (not in this jira)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13410) RBF: Support federation with no subclusters

2018-04-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430812#comment-16430812
 ] 

Íñigo Goiri commented on HDFS-13410:


Thanks [~linyiqun] for the comments; tackled them in  [^HDFS-13410.002.patch].

> RBF: Support federation with no subclusters
> ---
>
> Key: HDFS-13410
> URL: https://issues.apache.org/jira/browse/HDFS-13410
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-13410.000.patch, HDFS-13410.001.patch, 
> HDFS-13410.002.patch
>
>
> If the federation has no subclusters, the logs show long stack traces. Even 
> though this is not a regular setup for RBF, we should just emit a log message 
> instead.
> An example:
> {code}
> Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
>   at java.util.LinkedList.checkElementIndex(LinkedList.java:555)
>   at java.util.LinkedList.get(LinkedList.java:476)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1028)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getDatanodeReport(RouterRpcServer.java:1264)
>   at 
> org.apache.hadoop.hdfs.server.federation.metrics.FederationMetrics.getNodeUsage(FederationMetrics.java:424)
> {code}
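A hedged sketch of the kind of guard that would produce a log message instead of the trace above (the variable names are illustrative):

{code:java}
// Hypothetical guard at the top of invokeConcurrent(): fail fast with one
// clear line rather than letting LinkedList.get(0) throw further down.
if (nsList == null || nsList.isEmpty()) {
  String msg = "Cannot invoke " + method + ": no subclusters are registered";
  LOG.error(msg);
  throw new IOException(msg);
}
{code}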



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13410) RBF: Support federation with no subclusters

2018-04-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13410:
---
Attachment: HDFS-13410.002.patch

> RBF: Support federation with no subclusters
> ---
>
> Key: HDFS-13410
> URL: https://issues.apache.org/jira/browse/HDFS-13410
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-13410.000.patch, HDFS-13410.001.patch, 
> HDFS-13410.002.patch
>
>
> If the federation has no subclusters, the logs show long stack traces. Even 
> though this is not a regular setup for RBF, we should just emit a log message 
> instead.
> An example:
> {code}
> Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
>   at java.util.LinkedList.checkElementIndex(LinkedList.java:555)
>   at java.util.LinkedList.get(LinkedList.java:476)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1028)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getDatanodeReport(RouterRpcServer.java:1264)
>   at 
> org.apache.hadoop.hdfs.server.federation.metrics.FederationMetrics.getNodeUsage(FederationMetrics.java:424)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13403) libhdfs++: Use hdfs::IoService object rather than asio::io_service

2018-04-09 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430805#comment-16430805
 ] 

James Clampffer commented on HDFS-13403:


Attached a patch.  Not a whole lot going on other than turning uses of 
asio::io_service in arguments and member variables into instances of 
shared_ptr.

Since this already touched a lot of files I cleaned up a few things as I went:
-Clear out includes in headers that weren't necessary.
-Pull a few method implementations out of header files.
-Rename the MutableBuffers typedef to MutableBuffer to avoid confusion with 
BufferSequences which aren't supported in the FileSystem API yet.


> libhdfs++: Use hdfs::IoService object rather than asio::io_service
> --
>
> Key: HDFS-13403
> URL: https://issues.apache.org/jira/browse/HDFS-13403
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Critical
> Attachments: HDFS-13403.000.patch
>
>
> At the moment the hdfs::IoService is a simple wrapper over asio's io_service 
> object.  I'd like to make this smarter and have it do things like track which 
> tasks are queued, validate that dependencies of tasks exist, and monitor 
> ioservice throughput and contention.  In order to get there we need to have 
> all components in the library go through the hdfs::IoService rather than 
> directly interacting with the asio::io_service.  The only time the 
> asio::io_service should be used is when calling things like asio::async_write 
> that need an io_service&.  HDFS-11884 will be able to get rid of those 
> remaining instances once this work is in place.
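The library is C++, but the wrapper idea translates to any runtime; here is a toy Java analogue of putting instrumentation behind a small facade (all names invented for illustration):

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;

// Toy analogue of wrapping a raw event loop: the facade owns the counters,
// so queue depth and throughput can be observed in one place.
class InstrumentedIoService {
  private final ExecutorService raw = Executors.newFixedThreadPool(4);
  private final AtomicLong submitted = new AtomicLong();
  private final AtomicLong completed = new AtomicLong();

  void post(Runnable task) {
    submitted.incrementAndGet();
    raw.execute(() -> {
      try {
        task.run();
      } finally {
        completed.incrementAndGet();
      }
    });
  }

  long backlog() {
    return submitted.get() - completed.get(); // crude contention gauge
  }
}
{code}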



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-04-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13388:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.4
   2.9.2
   3.1.1
   3.2.0
   2.10.0
   Status: Resolved  (was: Patch Available)

> RequestHedgingProxyProvider calls multiple configured NNs all the time
> --
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: HADOOP-13388.0001.patch, HADOOP-13388.0002.patch, 
> HADOOP-13388.0003.patch, HADOOP-13388.0004.patch, HADOOP-13388.0005.patch, 
> HADOOP-13388.0006.patch
>
>
> In HDFS-7858 RequestHedgingProxyProvider was designed to "first 
> simultaneously call multiple configured NNs to decide which is the active 
> Namenode and then for subsequent calls it will invoke the previously 
> successful NN." But the current code calls multiple configured NNs every 
> time, even when we have already got the successful NN. 
>  That's because in RetryInvocationHandler.java, ProxyDescriptor's member 
> proxyInfo is assigned only when it is constructed or when failover occurs. 
> RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the 
> only proxy we can get is always a dynamic proxy handled by 
> RequestHedgingInvocationHandler.class. RequestHedgingInvocationHandler.class 
> handles the invoked method by calling multiple configured NNs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time

2018-04-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430799#comment-16430799
 ] 

Íñigo Goiri commented on HDFS-13388:


Thanks [~LiJinglun] for the patch.
Committed to trunk, branch-3.1, branch-3.0, branch-2, and branch-2.9.

> RequestHedgingProxyProvider calls multiple configured NNs all the time
> --
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HADOOP-13388.0001.patch, HADOOP-13388.0002.patch, 
> HADOOP-13388.0003.patch, HADOOP-13388.0004.patch, HADOOP-13388.0005.patch, 
> HADOOP-13388.0006.patch
>
>
> In HDFS-7858 RequestHedgingProxyProvider was designed to "first 
> simultaneously call multiple configured NNs to decide which is the active 
> Namenode and then for subsequent calls it will invoke the previously 
> successful NN." But the current code calls multiple configured NNs every 
> time, even when we have already got the successful NN. 
>  That's because in RetryInvocationHandler.java, ProxyDescriptor's member 
> proxyInfo is assigned only when it is constructed or when failover occurs. 
> RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the 
> only proxy we can get is always a dynamic proxy handled by 
> RequestHedgingInvocationHandler.class. RequestHedgingInvocationHandler.class 
> handles the invoked method by calling multiple configured NNs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13403) libhdfs++: Use hdfs::IoService object rather than asio::io_service

2018-04-09 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-13403:
---
Attachment: HDFS-13403.000.patch

> libhdfs++: Use hdfs::IoService object rather than asio::io_service
> --
>
> Key: HDFS-13403
> URL: https://issues.apache.org/jira/browse/HDFS-13403
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Critical
> Attachments: HDFS-13403.000.patch
>
>
> At the moment the hdfs::IoService is a simple wrapper over asio's io_service 
> object.  I'd like to make this smarter and have it do things like track which 
> tasks are queued, validate that dependencies of tasks exist, and monitor 
> ioservice throughput and contention.  In order to get there we need to have 
> all components in the library go through the hdfs::IoService rather than 
> directly interacting with the asio::io_service.  The only time the 
> asio::io_service should be used is when calling things like asio::async_write 
> that need an io_service&.  HDFS-11884 will be able to get rid of those 
> remaining instances once this work is in place.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13403) libhdfs++: Use hdfs::IoService object rather than asio::io_service

2018-04-09 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-13403:
---
Status: Patch Available  (was: Open)

> libhdfs++: Use hdfs::IoService object rather than asio::io_service
> --
>
> Key: HDFS-13403
> URL: https://issues.apache.org/jira/browse/HDFS-13403
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Critical
> Attachments: HDFS-13403.000.patch
>
>
> At the moment the hdfs::IoService is a simple wrapper over asio's io_service 
> object.  I'd like to make this smarter and have it do things like track which 
> tasks are queued, validate that dependencies of tasks exist, and monitor 
> ioservice throughput and contention.  In order to get there we need to have 
> all components in the library go through the hdfs::IoService rather than 
> directly interacting with the asio::io_service.  The only time the 
> asio::io_service should be used is when calling things like asio::async_write 
> that need an io_service&.  HDFS-11884 will be able to get rid of those 
> remaining instances once this work is in place.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13414) Ozone: Update existing Ozone documentation according to the recent changes

2018-04-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430780#comment-16430780
 ] 

genericqa commented on HDFS-13414:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 2s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
17s{color} | {color:red} ozone-manager in HDFS-7240 failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
32m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m  
7s{color} | {color:red} ozone-manager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 54s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
23s{color} | {color:red} The patch generated 5 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:b78c94f |
| JIRA Issue | HDFS-13414 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918172/HDFS-13414-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 185dc74181e6 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-7240 / 8475d6b |
| maven | version: Apache Maven 3.3.9 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23839/artifact/out/branch-mvnsite-hadoop-ozone_ozone-manager.txt
 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23839/artifact/out/patch-mvnsite-hadoop-ozone_ozone-manager.txt
 |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23839/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 398 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23839/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: Update existing Ozone documentation according to the recent changes
> --
>
> Key: HDFS-13414
> URL: https://issues.apache.org/jira/browse/HDFS-13414
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Minor
> Attachments: HDFS-13414-HDFS-7240.001.patch
>
>
> 1. Datanode port has been changed
> 2. Remove the references to the branch (prepare to merge)
> 3. CLI commands are changed (e.g. ozone scm)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13380) RBF: mv/rm fail after the directory exceeded the quota limit

2018-04-09 Thread Weiwei Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430759#comment-16430759
 ] 

Weiwei Wu commented on HDFS-13380:
--

I have tested this patch in my cluster; the rm and mv operations work fine. LGTM

> RBF: mv/rm fail after the directory exceeded the quota limit
> 
>
> Key: HDFS-13380
> URL: https://issues.apache.org/jira/browse/HDFS-13380
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Weiwei Wu
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDFS-13380.001.patch, HDFS-13380.002.patch
>
>
> It always fails when I try to mv/rm a directory which has exceeded the 
> quota limit.
> {code:java}
> [hadp@hadoop]$ hdfs dfsrouteradmin -ls
> Mount Table Entries:
> Source Destinations Owner Group Mode Quota/Usage
> /ns10t ns10->/ns10t hadp hadp rwxr-xr-x [NsQuota: 1200/1201, SsQuota: -/-]
> [hadp@hadoop]$ hdfs dfs -rm hdfs://ns-fed/ns10t/ns1mountpoint/aa.99
> rm: Failed to move to trash: hdfs://ns-fed/ns10t/ns1mountpoint/aa.99: 
> The NameSpace quota (directories and files) is exceeded: quota=1200 file 
> count=1201
> [hadp@hadoop]$ hdfs dfs -rm -skipTrash 
> hdfs://ns-fed/ns10t/ns1mountpoint/aa.99
> rm: The NameSpace quota (directories and files) is exceeded: quota=1200 file 
> count=1201
> {code}
> I think we should add a parameter to the method *getLocationsForPath* to 
> determine whether we need to perform quota verification for the operation, 
> for example for the mv src directory and the rm directory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13414) Ozone: Update existing Ozone documentation according to the recent changes

2018-04-09 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-13414:

Attachment: HDFS-13414-HDFS-7240.001.patch

> Ozone: Update existing Ozone documentation according to the recent changes
> --
>
> Key: HDFS-13414
> URL: https://issues.apache.org/jira/browse/HDFS-13414
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Minor
> Attachments: HDFS-13414-HDFS-7240.001.patch
>
>
> 1. Datanode port has been changed
> 2. remove the references to the branch (prepare to merge)
> 3. CLI commands are changed (eg. ozone scm)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13414) Ozone: Update existing Ozone documentation according to the recent changes

2018-04-09 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-13414:

Status: Patch Available  (was: Open)

> Ozone: Update existing Ozone documentation according to the recent changes
> --
>
> Key: HDFS-13414
> URL: https://issues.apache.org/jira/browse/HDFS-13414
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Minor
> Attachments: HDFS-13414-HDFS-7240.001.patch
>
>
> 1. Datanode port has been changed
> 2. Remove the references to the branch (prepare to merge)
> 3. CLI commands are changed (e.g. ozone scm)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13414) Ozone: Update existing Ozone documentation according to the recent changes

2018-04-09 Thread Elek, Marton (JIRA)
Elek, Marton created HDFS-13414:
---

 Summary: Ozone: Update existing Ozone documentation according to 
the recent changes
 Key: HDFS-13414
 URL: https://issues.apache.org/jira/browse/HDFS-13414
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Elek, Marton
Assignee: Elek, Marton


1. Datanode port has been changed
2. Remove the references to the branch (prepare to merge)
3. CLI commands are changed (e.g. ozone scm)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13386) RBF: Wrong date information in list file(-ls) result

2018-04-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430594#comment-16430594
 ] 

genericqa commented on HDFS-13386:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 
53s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13386 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918151/HDFS-13386-004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f182c855c1c6 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5700556 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23838/testReport/ |
| Max. process+thread count | 955 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23838/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RBF: Wrong date information in list file(-ls) result
> 
>
> Key: HDFS-13386
> URL: 

[jira] [Commented] (HDFS-13328) Abstract ReencryptionHandler recursive logic in separate class.

2018-04-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430568#comment-16430568
 ] 

genericqa commented on HDFS-13328:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}107m 55s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}163m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13328 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918073/HDFS-13328-04.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3d5d070988a2 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5700556 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23837/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23837/testReport/ |
| Max. process+thread count | 3435 (vs. ulimit of 1) 

[jira] [Updated] (HDFS-10419) Building HDFS on top of new storage layer (HDDS)

2018-04-09 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-10419:

Summary: Building HDFS on top of new storage layer (HDDS)  (was: Building 
HDFS on top of new storage layer (HDSL))

> Building HDFS on top of new storage layer (HDDS)
> 
>
> Key: HDFS-10419
> URL: https://issues.apache.org/jira/browse/HDFS-10419
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Major
> Attachments: Evolving NN using new block-container layer.pdf
>
>
> In HDFS-7240, Ozone defines storage containers to store both the data and the 
> metadata. The storage container layer provides an object storage interface 
> and aims to manage data/metadata in a distributed manner. More details about 
> storage containers can be found in the design doc in HDFS-7240.
> HDFS can adopt the storage containers to store and manage blocks. The general 
> idea is:
> # Each block can be treated as an object and the block ID is the object's key.
> # Blocks will still be stored in DataNodes but as objects in storage 
> containers.
> # The block management work can be separated out of the NameNode and will be 
> handled by the storage container layer in a more distributed way. The 
> NameNode will only manage the namespace (i.e., files and directories).
> # For each file, the NameNode only needs to record a list of block IDs which 
> are used as keys to obtain real data from storage containers.
> # A new DFSClient implementation talks to both NameNode and the storage 
> container layer to read/write.
> HDFS, especially the NameNode, can get much better scalability from this 
> design. Currently the NameNode's heaviest workload comes from the block 
> management, which includes maintaining the block-DataNode mapping, receiving 
> full/incremental block reports, tracking block states (under-, over-, or 
> mis-replicated), and joining every write pipeline to guarantee data 
> consistency. This work brings a high memory footprint and makes the NameNode 
> suffer from GC pressure. HDFS-5477 already proposes to convert the 
> BlockManager into a 
> service. If we can build HDFS on top of the storage container layer, we not 
> only separate out the BlockManager from the NameNode, but also replace it 
> with a new distributed management scheme.
> The storage container work is currently in progress in HDFS-7240, and the 
> work proposed here is still in an experimental/exploratory stage. We can do 
> this experiment in a feature branch so that people with interest can be 
> involved.
> A design doc will be uploaded later explaining more details.
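
A minimal sketch of the proposed read path, assuming hypothetical 
{{NamespaceClient}} and {{BlockContainerClient}} interfaces (illustrative 
names only, not actual HDFS/HDDS APIs):

{code:java}
// Hypothetical sketch: the NameNode keeps only the namespace (file -> list
// of block IDs), while the container layer serves each block as an object
// keyed by its block ID.
import java.io.IOException;
import java.io.OutputStream;
import java.util.List;

interface NamespaceClient {
  List<Long> getBlockIds(String path) throws IOException;
}

interface BlockContainerClient {
  byte[] readObject(long blockId) throws IOException;
}

class ContainerBackedReader {
  private final NamespaceClient namenode;
  private final BlockContainerClient containers;

  ContainerBackedReader(NamespaceClient nn, BlockContainerClient sc) {
    this.namenode = nn;
    this.containers = sc;
  }

  // Resolve the file to block IDs on the NameNode, then fetch each block
  // as an object from the storage container layer.
  void copyTo(String path, OutputStream out) throws IOException {
    for (long blockId : namenode.getBlockIds(path)) {
      out.write(containers.readObject(blockId));
    }
  }
}
{code}

Note how the NameNode never sees block locations or block reports in this 
path; those concerns move entirely into the container layer.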



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12794) Ozone: Parallelize ChunkOutputStream Writes to container

2018-04-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430516#comment-16430516
 ] 

genericqa commented on HDFS-12794:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
38s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
46s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 
52s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
11s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
25s{color} | {color:red} client in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
27s{color} | {color:red} client in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
25s{color} | {color:red} integration-test in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
23s{color} | {color:red} objectstore-service in HDFS-7240 failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
23s{color} | {color:red} client in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
57s{color} | {color:red} hadoop-hdds/common in HDFS-7240 has 2 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
22s{color} | {color:red} client in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
24s{color} | {color:red} objectstore-service in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
23s{color} | {color:red} client in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
23s{color} | {color:red} client in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
24s{color} | {color:red} integration-test in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
23s{color} | {color:red} objectstore-service in HDFS-7240 failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
11s{color} | {color:red} client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
11s{color} | {color:red} client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
11s{color} | {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
10s{color} | {color:red} objectstore-service in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 29m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 8s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
23s{color} | {color:red} client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
23s{color} | {color:red} client in the patch failed. {color} |

[jira] [Updated] (HDFS-13386) RBF: Wrong date information in list file(-ls) result

2018-04-09 Thread Dibyendu Karmakar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dibyendu Karmakar updated HDFS-13386:
-
Attachment: HDFS-13386-004.patch

> RBF: Wrong date information in list file(-ls) result
> 
>
> Key: HDFS-13386
> URL: https://issues.apache.org/jira/browse/HDFS-13386
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Minor
> Attachments: HDFS-13386-002.patch, HDFS-13386-003.patch, 
> HDFS-13386-004.patch, HDFS-13386.000.patch, HDFS-13386.001.patch, 
> image-2018-04-03-11-59-51-623.png
>
>
> # hdfs dfs -ls 
> !image-2018-04-03-11-59-51-623.png!
> This is happening because {{getMountPointDates}} is not implemented:
> {code:java}
> private Map<String, Long> getMountPointDates(String path) {
>   Map<String, Long> ret = new TreeMap<>();
>   // TODO add when we have a Mount Table
>   return ret;
> }
> {code}
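
A rough sketch of the direction a fix could take; {{getMountTableEntries}} and 
{{getDateModified}} are assumed helpers here, not the actual Router API:

{code:java}
// Hypothetical sketch only: populate the map from mount-table records
// instead of returning it empty.
private Map<String, Long> getMountPointDates(String path) {
  Map<String, Long> ret = new TreeMap<>();
  // Assumed helper: mount-table entries directly under this path.
  for (MountTable entry : getMountTableEntries(path)) {
    // Assumed accessors: source path and last-modified time of the record.
    ret.put(entry.getSourcePath(), entry.getDateModified());
  }
  return ret;
}
{code}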



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12794) Ozone: Parallelize ChunkOutputStream Writes to container

2018-04-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430503#comment-16430503
 ] 

genericqa commented on HDFS-12794:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
47s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
16s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 
14s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
18s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
27s{color} | {color:red} client in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
32s{color} | {color:red} client in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
35s{color} | {color:red} integration-test in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
32s{color} | {color:red} objectstore-service in HDFS-7240 failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
23s{color} | {color:red} client in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
59s{color} | {color:red} hadoop-hdds/common in HDFS-7240 has 2 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
24s{color} | {color:red} client in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
22s{color} | {color:red} objectstore-service in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
22s{color} | {color:red} client in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
22s{color} | {color:red} client in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
23s{color} | {color:red} integration-test in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
24s{color} | {color:red} objectstore-service in HDFS-7240 failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
10s{color} | {color:red} client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
12s{color} | {color:red} client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
10s{color} | {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
11s{color} | {color:red} objectstore-service in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
22s{color} | {color:red} client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
27s{color} | {color:red} client in the patch failed. {color} |

[jira] [Commented] (HDFS-13333) Ozone: Introduce a new SCM Exception which will be thrown when mandatory property is missing

2018-04-09 Thread LiXin Ge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430460#comment-16430460
 ] 

LiXin Ge commented on HDFS-13333:
-

My patch conflicts with the Ozone branch after HDSL was renamed to HDDS. Sorry 
for that; I will rebase it tomorrow.

> Ozone: Introduce a new SCM Exception which will be thrown when mandatory 
> property is missing 
> -
>
> Key: HDFS-13333
> URL: https://issues.apache.org/jira/browse/HDFS-13333
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Attachments: HDFS-13333-HDFS-7240.001.patch, HDFS-13333.001.patch
>
>
> It's better to have a separate SCM Exception to indicate a missing mandatory 
> property. This was proposed by [~xyao] in [this comment| 
> https://issues.apache.org/jira/browse/HDFS-13300?focusedCommentId=16408553=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16408553]
>  
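
A minimal sketch of what such an exception could look like (class and method 
names are illustrative, not the committed API):

{code:java}
// Illustrative sketch only, not the committed API.
class SCMMissingMandatoryPropertyException extends RuntimeException {
  SCMMissingMandatoryPropertyException(String property) {
    super("Mandatory SCM property is not configured: " + property);
  }
}

final class ScmConfigCheck {
  private ScmConfigCheck() { }

  // Returns the configured value or throws the dedicated SCM exception.
  static String requireProperty(java.util.Properties conf, String key) {
    String value = conf.getProperty(key);
    if (value == null || value.trim().isEmpty()) {
      throw new SCMMissingMandatoryPropertyException(key);
    }
    return value;
  }
}
{code}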



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13056) Expose file-level composite CRCs in HDFS which are comparable across different instances/layouts

2018-04-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430417#comment-16430417
 ] 

Steve Loughran commented on HDFS-13056:
---

bq. in case Steve or other watchers want to review

[~xiaochen]: If you are happy, I'm happy. Trying to stop the HBase team from 
getting at the internals is a losing battle: I understand their need, I just 
wish they'd make their requests a bit more public before adopting them, as it 
generally stops HBase running on other filesystems. 

> Expose file-level composite CRCs in HDFS which are comparable across 
> different instances/layouts
> 
>
> Key: HDFS-13056
> URL: https://issues.apache.org/jira/browse/HDFS-13056
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, distcp, erasure-coding, federation, hdfs
>Affects Versions: 3.0.0
>Reporter: Dennis Huo
>Assignee: Dennis Huo
>Priority: Major
> Attachments: HDFS-13056-branch-2.8.001.patch, 
> HDFS-13056-branch-2.8.002.patch, HDFS-13056-branch-2.8.003.patch, 
> HDFS-13056-branch-2.8.004.patch, HDFS-13056-branch-2.8.005.patch, 
> HDFS-13056-branch-2.8.poc1.patch, HDFS-13056.001.patch, HDFS-13056.002.patch, 
> HDFS-13056.003.patch, HDFS-13056.003.patch, HDFS-13056.004.patch, 
> HDFS-13056.005.patch, HDFS-13056.006.patch, HDFS-13056.007.patch, 
> HDFS-13056.008.patch, HDFS-13056.009.patch, HDFS-13056.010.patch, 
> HDFS-13056.011.patch, HDFS-13056.012.patch, HDFS-13056.013.patch, 
> HDFS-13056.014.patch, Reference_only_zhen_PPOC_hadoop2.6.X.diff, 
> hdfs-file-composite-crc32-v1.pdf, hdfs-file-composite-crc32-v2.pdf, 
> hdfs-file-composite-crc32-v3.pdf
>
>
> FileChecksum was first introduced in 
> [https://issues-test.apache.org/jira/browse/HADOOP-3981] and ever since then 
> has remained defined as MD5-of-MD5-of-CRC, where per-512-byte chunk CRCs are 
> already stored as part of datanode metadata, and the MD5 approach is used to 
> compute an aggregate value in a distributed manner, with individual datanodes 
> computing the MD5-of-CRCs per-block in parallel, and the HDFS client 
> computing the second-level MD5.
>  
> A frequently raised shortcoming of this approach is that the FileChecksum is 
> sensitive to the internal block-size and chunk-size 
> configuration, and thus different HDFS files with different block/chunk 
> settings cannot be compared. More commonly, one might have different HDFS 
> clusters which use different block sizes, in which case any data migration 
> won't be able to use the FileChecksum for distcp's rsync functionality or for 
> verifying end-to-end data integrity (on top of low-level data integrity 
> checks applied at data transfer time).
>  
> This was also revisited in https://issues.apache.org/jira/browse/HDFS-8430 
> during the addition of checksum support for striped erasure-coded files; 
> while there was some discussion of using CRC composability, it still 
> ultimately settled on the hierarchical MD5 approach, which also adds the problem 
> that checksums of basic replicated files are not comparable to striped files.
>  
> This feature proposes to add a "COMPOSITE-CRC" FileChecksum type which uses 
> CRC composition to remain completely chunk/block agnostic, and allows 
> comparison between striped vs replicated files, between different HDFS 
> instances, and possibly even between HDFS and other external storage systems. 
> This feature can also be added in-place to be compatible with existing block 
> metadata, and doesn't need to change the normal path of chunk verification, 
> so is minimally invasive. This also means even large preexisting HDFS 
> deployments could adopt this feature to retroactively sync data. A detailed 
> design document can be found here: 
> https://storage.googleapis.com/dennishuo/hdfs-file-composite-crc32-v1.pdf
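
The composition trick at the heart of the proposal can be sketched in plain 
Java as a port of zlib's crc32_combine (illustrative, not the patch's code): 
given the CRCs of two chunks and the length of the second, the CRC of their 
concatenation is computed without re-reading any data, which is what makes the 
resulting checksum chunk/block agnostic.

{code:java}
// Sketch: combine CRC32(A) and CRC32(B) into CRC32(A||B) using GF(2) matrix
// exponentiation, ported from zlib's crc32_combine.
import java.util.zip.CRC32;

final class Crc32Combine {

  // Multiply a 32x32 GF(2) matrix (one row per bit) by a 32-bit vector.
  private static long times(long[] mat, long vec) {
    long sum = 0;
    for (int i = 0; vec != 0; vec >>>= 1, i++) {
      if ((vec & 1) != 0) {
        sum ^= mat[i];
      }
    }
    return sum;
  }

  // dst = mat * mat
  private static void square(long[] dst, long[] mat) {
    for (int n = 0; n < 32; n++) {
      dst[n] = times(mat, mat[n]);
    }
  }

  static long combine(long crc1, long crc2, long len2) {
    if (len2 <= 0) {
      return crc1;
    }
    long[] even = new long[32];
    long[] odd = new long[32];

    odd[0] = 0xEDB88320L; // reflected CRC-32 polynomial: operator for 1 zero bit
    for (int n = 1; n < 32; n++) {
      odd[n] = 1L << (n - 1);
    }
    square(even, odd); // even = operator for 2 zero bits
    square(odd, even); // odd  = operator for 4 zero bits

    // Append len2 zero *bytes* to crc1 by repeated operator squaring.
    do {
      square(even, odd);
      if ((len2 & 1) != 0) {
        crc1 = times(even, crc1);
      }
      len2 >>= 1;
      if (len2 == 0) {
        break;
      }
      square(odd, even);
      if ((len2 & 1) != 0) {
        crc1 = times(odd, crc1);
      }
      len2 >>= 1;
    } while (len2 != 0);

    return crc1 ^ crc2;
  }

  public static void main(String[] args) {
    byte[] a = "hello ".getBytes();
    byte[] b = "world".getBytes();
    CRC32 ca = new CRC32(); ca.update(a);
    CRC32 cb = new CRC32(); cb.update(b);
    CRC32 cab = new CRC32(); cab.update(a); cab.update(b);
    System.out.println(
        combine(ca.getValue(), cb.getValue(), b.length) == cab.getValue());
    // expected output: true
  }
}
{code}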



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13328) Abstract ReencryptionHandler recursive logic in separate class.

2018-04-09 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430385#comment-16430385
 ] 

Rakesh R commented on HDFS-13328:
-

Thanks [~xiaochen] for the useful reviews. Attached another patch on behalf of 
[~surendrasingh] that tries to address the above comment.

> Abstract ReencryptionHandler recursive logic in separate class.
> ---
>
> Key: HDFS-13328
> URL: https://issues.apache.org/jira/browse/HDFS-13328
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-13328-01.patch, HDFS-13328-02.patch, 
> HDFS-13328-03.patch, HDFS-13328-04.patch
>
>
> HDFS-10899 added DFS logic to scan a directory. It would be good to abstract 
> this logic into a separate class so it can be reused by other features such as 
> SPS (HDFS-10285). I already tried abstracting the DFS logic in HDFS-12291, and 
> the same can be pushed to trunk.
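
A rough illustration of the kind of abstraction being proposed, sketched over 
{{java.io.File}} for brevity (the real code walks INodes inside the NameNode):

{code:java}
// The recursive directory-walk skeleton lives in one reusable base class;
// each feature (re-encryption, SPS, ...) supplies its own per-file action.
import java.io.File;

abstract class TreeTraverser {

  // Feature-specific work, e.g. queue a re-encryption or SPS task.
  protected abstract void processFile(File file);

  // Shared depth-first traversal logic.
  public final void traverse(File dir) {
    File[] children = dir.listFiles();
    if (children == null) {
      return; // not a directory, or not readable
    }
    for (File child : children) {
      if (child.isDirectory()) {
        traverse(child);
      } else {
        processFile(child);
      }
    }
  }
}
{code}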



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13328) Abstract ReencryptionHandler recursive logic in separate class.

2018-04-09 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-13328:

Attachment: HDFS-13328-04.patch

> Abstract ReencryptionHandler recursive logic in separate class.
> ---
>
> Key: HDFS-13328
> URL: https://issues.apache.org/jira/browse/HDFS-13328
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-13328-01.patch, HDFS-13328-02.patch, 
> HDFS-13328-03.patch, HDFS-13328-04.patch
>
>
> HDFS-10899 added DFS logic to scan a directory. It would be good to abstract 
> this logic into a separate class so it can be reused by other features such as 
> SPS (HDFS-10285). I already tried abstracting the DFS logic in HDFS-12291, and 
> the same can be pushed to trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12794) Ozone: Parallelize ChunkOutputStream Writes to container

2018-04-09 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-12794:
---
Attachment: HDFS-12794-HDFS-7240.012.patch

> Ozone: Parallelize ChunkOutputStream Writes to container
> ---
>
> Key: HDFS-12794
> URL: https://issues.apache.org/jira/browse/HDFS-12794
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-12794-HDFS-7240.001.patch, 
> HDFS-12794-HDFS-7240.002.patch, HDFS-12794-HDFS-7240.003.patch, 
> HDFS-12794-HDFS-7240.004.patch, HDFS-12794-HDFS-7240.005.patch, 
> HDFS-12794-HDFS-7240.006.patch, HDFS-12794-HDFS-7240.007.patch, 
> HDFS-12794-HDFS-7240.008.patch, HDFS-12794-HDFS-7240.009.patch, 
> HDFS-12794-HDFS-7240.010.patch, HDFS-12794-HDFS-7240.011.patch, 
> HDFS-12794-HDFS-7240.012.patch
>
>
> The ChunkOutputStream writes are synchronous in nature: once one chunk of 
> data gets written, the next chunk write is blocked until the previous chunk 
> is written to the container.
> The ChunkOutputStream writes should be made asynchronous, and close() on the 
> OutputStream should ensure that all dirty buffers are flushed to the 
> container.
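
A minimal sketch of the idea, assuming a hypothetical 
{{writeChunkToContainer}} call (not the actual Ozone client code): chunk 
writes are queued asynchronously, and close() blocks until every in-flight 
chunk has reached the container.

{code:java}
// Hypothetical sketch: asynchronous chunk writes with a flushing close().
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class AsyncChunkWriter implements AutoCloseable {
  private final ExecutorService pool = Executors.newFixedThreadPool(4);
  private final List<CompletableFuture<Void>> inFlight = new ArrayList<>();

  // Stand-in for the real RPC that writes one chunk to the container.
  private void writeChunkToContainer(byte[] chunk) {
    // ...
  }

  // Queue the chunk and return immediately instead of blocking until the
  // previous chunk has been written.
  void writeChunk(byte[] chunk) {
    inFlight.add(
        CompletableFuture.runAsync(() -> writeChunkToContainer(chunk), pool));
  }

  // close() flushes: wait for all dirty buffers to reach the container.
  @Override
  public void close() {
    CompletableFuture.allOf(inFlight.toArray(new CompletableFuture[0])).join();
    pool.shutdown();
  }
}
{code}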



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12794) Ozone: Parallelize ChunkOutputStream Writes to container

2018-04-09 Thread Shashikant Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430367#comment-16430367
 ] 

Shashikant Banerjee commented on HDFS-12794:


Re-uploaded the correct patch v12.

> Ozone: Parallelize ChunkOutputStream Writes to container
> ---
>
> Key: HDFS-12794
> URL: https://issues.apache.org/jira/browse/HDFS-12794
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-12794-HDFS-7240.001.patch, 
> HDFS-12794-HDFS-7240.002.patch, HDFS-12794-HDFS-7240.003.patch, 
> HDFS-12794-HDFS-7240.004.patch, HDFS-12794-HDFS-7240.005.patch, 
> HDFS-12794-HDFS-7240.006.patch, HDFS-12794-HDFS-7240.007.patch, 
> HDFS-12794-HDFS-7240.008.patch, HDFS-12794-HDFS-7240.009.patch, 
> HDFS-12794-HDFS-7240.010.patch, HDFS-12794-HDFS-7240.011.patch, 
> HDFS-12794-HDFS-7240.012.patch
>
>
> The ChunkOutputStream writes are synchronous in nature: once one chunk of 
> data gets written, the next chunk write is blocked until the previous chunk 
> is written to the container.
> The ChunkOutputStream writes should be made asynchronous, and close() on the 
> OutputStream should ensure that all dirty buffers are flushed to the 
> container.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12794) Ozone: Parallelize ChunkOutputStream Writes to container

2018-04-09 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-12794:
---
Attachment: (was: HDFS-12794-HDFS-7240.012.patch)

> Ozone: Parallelize ChunkOutputStream Writes to container
> ---
>
> Key: HDFS-12794
> URL: https://issues.apache.org/jira/browse/HDFS-12794
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-12794-HDFS-7240.001.patch, 
> HDFS-12794-HDFS-7240.002.patch, HDFS-12794-HDFS-7240.003.patch, 
> HDFS-12794-HDFS-7240.004.patch, HDFS-12794-HDFS-7240.005.patch, 
> HDFS-12794-HDFS-7240.006.patch, HDFS-12794-HDFS-7240.007.patch, 
> HDFS-12794-HDFS-7240.008.patch, HDFS-12794-HDFS-7240.009.patch, 
> HDFS-12794-HDFS-7240.010.patch, HDFS-12794-HDFS-7240.011.patch
>
>
> The ChunkOutputStream writes are synchronous in nature: once one chunk of 
> data gets written, the next chunk write is blocked until the previous chunk 
> is written to the container.
> The ChunkOutputStream writes should be made asynchronous, and close() on the 
> OutputStream should ensure that all dirty buffers are flushed to the 
> container.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


