[jira] [Commented] (HDFS-14895) Define LOG instead of BlockPlacementPolicy.LOG in DatanodeDescriptor#chooseStorage4Block

2019-10-12 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16950226#comment-16950226
 ] 

Lisheng Sun commented on HDFS-14895:


[~ayushtkn] Do you have time to continue reviewing this patch? 
Thank you.

> Define LOG instead of BlockPlacementPolicy.LOG in 
> DatanodeDescriptor#chooseStorage4Block
> 
>
> Key: HDFS-14895
> URL: https://issues.apache.org/jira/browse/HDFS-14895
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14895.001.patch
>
>
> There is a noisy log emitted through BlockPlacementPolicy.LOG, which makes it 
> hard to debug problems. Define a dedicated LOG in 
> DatanodeDescriptor#chooseStorage4Block instead.
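
A minimal sketch of the proposed change, assuming the class uses SLF4J as most 
of HDFS does (the method body is illustrative only, not the actual patch):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// A class-local logger attributes messages from chooseStorage4Block to
// DatanodeDescriptor instead of BlockPlacementPolicy, making them easier
// to trace back to their source.
class DatanodeDescriptor {
  private static final Logger LOG =
      LoggerFactory.getLogger(DatanodeDescriptor.class);

  void chooseStorage4Block(long blockSize) {
    // was (per the summary): BlockPlacementPolicy.LOG.debug(...)
    LOG.debug("No good storage available for a block of size {}", blockSize);
  }
}
{code}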






[jira] [Work logged] (HDDS-2220) HddsVolume needs a toString method

2019-10-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2220?focusedWorklogId=327423&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-327423
 ]

ASF GitHub Bot logged work on HDDS-2220:


Author: ASF GitHub Bot
Created on: 13/Oct/19 04:07
Start Date: 13/Oct/19 04:07
Worklog Time Spent: 10m 
  Work Description: cxorm commented on issue #1652: HDDS-2220. HddsVolume 
needs a toString method.
URL: https://github.com/apache/hadoop/pull/1652#issuecomment-541384255
 
 
   /ozone
 



Issue Time Tracking
---

Worklog Id: (was: 327423)
Time Spent: 0.5h  (was: 20m)

> HddsVolume needs a toString method
> --
>
> Key: HDDS-2220
> URL: https://issues.apache.org/jira/browse/HDDS-2220
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Marton Elek
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> This is logged to the console of datanodes:
> {code:java}
> 2019-10-01 11:37:59 INFO  HddsVolumeChecker:202 - Scheduled health check for 
> volume org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 11:52:59 INFO  ThrottledAsyncChecker:139 - Scheduling a check for 
> org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 11:52:59 INFO  HddsVolumeChecker:202 - Scheduled health check for 
> volume org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 12:07:59 INFO  ThrottledAsyncChecker:139 - Scheduling a check for 
> org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 12:07:59 INFO  HddsVolumeChecker:202 - Scheduled health check for 
> volume org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 12:22:59 INFO  ThrottledAsyncChecker:139 - Scheduling a check for 
> org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 12:22:59 INFO  HddsVolumeChecker:202 - Scheduled health check for 
> volume org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 12:37:59 INFO  ThrottledAsyncChecker:139 - Scheduling a check for 
> org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 12:37:59 INFO  HddsVolumeChecker:202 - Scheduled health check for 
> volume org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 12:52:59 INFO  ThrottledAsyncChecker:139 - Scheduling a check for 
> org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 12:52:59 INFO  HddsVolumeChecker:202 - Scheduled health check for 
> volume org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a 
> {code}
> Without a proper HddsVolume.toString it's hard to say which volume is 
> checked...
>  
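
A minimal sketch of the requested override, assuming the volume keeps its root 
directory in a field (the field name is an assumption, not the actual Ozone 
code):

{code:java}
import java.io.File;

// Print the volume root instead of the default ClassName@hashCode form
// seen in the log excerpt above.
class HddsVolume {
  private final File hddsRootDir;  // assumed field name

  HddsVolume(File hddsRootDir) {
    this.hddsRootDir = hddsRootDir;
  }

  @Override
  public String toString() {
    return "HddsVolume{root=" + hddsRootDir.getAbsolutePath() + "}";
  }
}
{code}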






[jira] [Commented] (HDFS-14887) RBF: In Router Web UI, Observer Namenode Information displaying as Unavailable

2019-10-12 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16950150#comment-16950150
 ] 

Íñigo Goiri commented on HDFS-14887:


I don't think TestDisabledNamespaces is the best place for this. I'd have to 
check carefully, but I think there are monitoring tests, or even membership 
tests, that would fit better. 

> RBF: In Router Web UI, Observer Namenode Information displaying as Unavailable
> --
>
> Key: HDFS-14887
> URL: https://issues.apache.org/jira/browse/HDFS-14887
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: 14887.after.png, 14887.before.png, HDFS-14887.001.patch, 
> HDFS-14887.002.patch, HDFS-14887.003.patch, HDFS-14887.004.patch
>
>
> In the Router Web UI, Observer Namenode information is displayed as Unavailable.
> We should show a proper icon for Observer Namenodes.






[jira] [Commented] (HDFS-14758) Decrease lease hard limit

2019-10-12 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16950147#comment-16950147
 ] 

Hadoop QA commented on HDFS-14758:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
40s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
11s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 57s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
660 unchanged - 1 fixed = 661 total (was 661) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
51s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 23s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}175m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
|   | hadoop.tools.TestHdfsConfigFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.3 Server=19.03.3 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-14758 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12982858/HDFS-14758.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 75aca9feafd7 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | 

[jira] [Commented] (HDFS-14887) RBF: In Router Web UI, Observer Namenode Information displaying as Unavailable

2019-10-12 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16950130#comment-16950130
 ] 

Hadoop QA commented on HDFS-14887:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 18s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.router.TestDisableNameservices |
|   | hadoop.fs.contract.router.web.TestRouterWebHDFSContractSeek |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.3 Server=19.03.3 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-14887 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12982856/HDFS-14887.004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d7aaf93bf8dc 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5f4641a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28080/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28080/testReport/ |
| Max. process+thread count | 2639 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 

[jira] [Updated] (HDFS-14758) Decrease lease hard limit

2019-10-12 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-14758:
-
Attachment: HDFS-14758.002.patch

> Decrease lease hard limit
> -
>
> Key: HDFS-14758
> URL: https://issues.apache.org/jira/browse/HDFS-14758
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Eric Payne
>Assignee: hemanthboyina
>Priority: Minor
> Attachments: HDFS-14758.001.patch, HDFS-14758.002.patch
>
>
> The hard limit is currently hard-coded to 1 hour. This also determines the 
> NN's automatic lease recovery interval. Something like 20 min would make more 
> sense.
> After the 5 min soft limit, other clients can recover the lease. If no one 
> else takes the lease away, the original client can still renew the lease 
> within the hard limit. So even after an NN full GC of 8 minutes, leases can 
> still be valid.
> However, there is one risk in reducing the hard limit, e.g. to 20 min: if the 
> NN crashes and a manual failover takes more than 20 minutes, clients will 
> abort.
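
A small worked example of the timing argument above, using the numbers from 
the description (the constant names are illustrative, not the patch):

{code:java}
// With a 20 min hard limit, an 8 min NN full GC is survivable, but a
// manual failover longer than 20 min would abort clients.
class LeaseHardLimitExample {
  static final long SOFT_LIMIT_MS = 5 * 60 * 1000L;   // 5 min, per the description
  static final long HARD_LIMIT_MS = 20 * 60 * 1000L;  // proposed, down from 1 hour

  static boolean leaseStillValid(long msSinceLastRenewal) {
    return msSinceLastRenewal < HARD_LIMIT_MS;
  }

  public static void main(String[] args) {
    System.out.println(leaseStillValid(8 * 60 * 1000L));   // true: 8 min GC survives
    System.out.println(leaseStillValid(25 * 60 * 1000L));  // false: 25 min failover aborts
  }
}
{code}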






[jira] [Commented] (HDFS-14887) RBF: In Router Web UI, Observer Namenode Information displaying as Unavailable

2019-10-12 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16950119#comment-16950119
 ] 

Hadoop QA commented on HDFS-14887:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
55s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 13s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.fs.contract.router.web.TestRouterWebHDFSContractSeek |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.3 Server=19.03.3 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-14887 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12982855/HDFS-14887.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 48bf243882f9 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5f4641a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28079/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28079/testReport/ |
| Max. process+thread count | 2688 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28079/console |
| Powered by | Apache Yetus 0.8.0   

[jira] [Updated] (HDFS-14887) RBF: In Router Web UI, Observer Namenode Information displaying as Unavailable

2019-10-12 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-14887:
-
Attachment: HDFS-14887.004.patch

> RBF: In Router Web UI, Observer Namenode Information displaying as Unavailable
> --
>
> Key: HDFS-14887
> URL: https://issues.apache.org/jira/browse/HDFS-14887
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: 14887.after.png, 14887.before.png, HDFS-14887.001.patch, 
> HDFS-14887.002.patch, HDFS-14887.003.patch, HDFS-14887.004.patch
>
>
> In the Router Web UI, Observer Namenode information is displayed as Unavailable.
> We should show a proper icon for Observer Namenodes.






[jira] [Updated] (HDFS-14887) RBF: In Router Web UI, Observer Namenode Information displaying as Unavailable

2019-10-12 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-14887:
-
Attachment: HDFS-14887.003.patch

> RBF: In Router Web UI, Observer Namenode Information displaying as Unavailable
> --
>
> Key: HDFS-14887
> URL: https://issues.apache.org/jira/browse/HDFS-14887
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: 14887.after.png, 14887.before.png, HDFS-14887.001.patch, 
> HDFS-14887.002.patch, HDFS-14887.003.patch
>
>
> In the Router Web UI, Observer Namenode information is displayed as Unavailable.
> We should show a proper icon for Observer Namenodes.






[jira] [Commented] (HDFS-14886) In NameNode Web UI's Startup Progress page, Loading edits always shows 0 sec

2019-10-12 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16950095#comment-16950095
 ] 

Hadoop QA commented on HDFS-14886:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
56s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 53s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}160m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.3 Server=19.03.3 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-14886 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12982848/HDFS-14886.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d3578817f31c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6e5cd52 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28078/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28078/testReport/ |
| Max. process+thread count | 2809 (vs. ulimit of 5500) |
| modules | C: 

[jira] [Work started] (HDFS-14894) Add balancer parameter to balance top used nodes

2019-10-12 Thread Leon Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-14894 started by Leon Gao.
---
> Add balancer parameter to balance top used nodes
> 
>
> Key: HDFS-14894
> URL: https://issues.apache.org/jira/browse/HDFS-14894
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Reporter: Leon Gao
>Assignee: Leon Gao
>Priority: Major
>
> We sometimes see a few of our datanodes reach very high usage (due to various 
> reasons), and we need to reduce their usage in an urgent situation.
> We currently see two ways to achieve this:
> - Calculate and reset the balancing threshold.
> - Pick nodes manually according to usage stats, put them in a file, and use 
> the `-source` flag.
> However, neither is very intuitive, and both require too much manual work in 
> an urgent, close-to-outage situation. Adding a small feature to automatically 
> pick the top used hosts would be a straightforward option, for example 
> `-sourceThreshold 95` to only target datanodes with >95% usage. 
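
A rough sketch of what such a source filter could look like (the names and 
shape are assumptions, not the balancer's actual code):

{code:java}
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// With -sourceThreshold 95, only datanodes above 95% utilization are
// picked as balancing sources.
class SourceThresholdFilter {
  static List<Double> pickSources(List<Double> nodeUsagePercent, double threshold) {
    return nodeUsagePercent.stream()
        .filter(usage -> usage > threshold)
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    // prints [96.5, 99.1]
    System.out.println(pickSources(Arrays.asList(40.0, 96.5, 99.1), 95.0));
  }
}
{code}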






[jira] [Commented] (HDFS-14238) A log in NNThroughputBenchmark should change log level to "INFO" instead of "ERROR"

2019-10-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16950076#comment-16950076
 ] 

Hudson commented on HDFS-14238:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17529 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17529/])
HDFS-14238. A log in NNThroughputBenchmark should change log level to 
(ayushsaxena: rev 5f4641a120331d049a55c519a0d15da18c820fed)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java


> A log in NNThroughputBenchmark should  change log level to "INFO" instead of 
> "ERROR"
> 
>
> Key: HDFS-14238
> URL: https://issues.apache.org/jira/browse/HDFS-14238
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14238.patch
>
>
> In NNThroughputBenchmark#150, LOG.error("Log level = " + logLevel.toString()); 
> is used. This log level should be changed to LOG.info(), since no error occurs 
> here; it just tells us that the namenode log level has changed.
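
A minimal before/after sketch of the change (the logger setup here is assumed; 
the original call is quoted from the description):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class LogLevelMessageExample {
  private static final Logger LOG =
      LoggerFactory.getLogger(LogLevelMessageExample.class);

  static void reportLogLevel(String logLevel) {
    // was: LOG.error("Log level = " + logLevel.toString());
    // nothing failed here, so INFO is the appropriate severity
    LOG.info("Log level = {}", logLevel);
  }
}
{code}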






[jira] [Commented] (HDFS-14903) Update access time in toCompleteFile

2019-10-12 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16950072#comment-16950072
 ] 

Ayush Saxena commented on HDFS-14903:
-

Thanx [~littleboy547] for the patch.

Can you add a UT covering the change?

> Update access time in toCompleteFile
> 
>
> Key: HDFS-14903
> URL: https://issues.apache.org/jira/browse/HDFS-14903
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: lihanran
>Assignee: lihanran
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14903.001.patch
>
>
> When creating a file, the access time and the modification time are different.
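
As far as the summary allows, a sketch of the idea (field and method names are 
assumptions, not the actual INodeFile code): when the file is completed, set 
the access time to the modification time, so a freshly created file reports 
consistent timestamps.

{code:java}
// Illustrative only.
class FileTimesExample {
  long modificationTime;
  long accessTime;

  void toCompleteFile(long now) {
    modificationTime = now;
    accessTime = now;  // keep atime == mtime for a freshly completed file
  }
}
{code}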






[jira] [Updated] (HDFS-14903) Update access time in toCompleteFile

2019-10-12 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14903:

Fix Version/s: (was: 3.3.0)

> Update access time in toCompleteFile
> 
>
> Key: HDFS-14903
> URL: https://issues.apache.org/jira/browse/HDFS-14903
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: lihanran
>Assignee: lihanran
>Priority: Major
> Attachments: HDFS-14903.001.patch
>
>
> When creating a file, the access time and the modification time are different.






[jira] [Updated] (HDFS-14238) A log in NNThroughputBenchmark should change log level to "INFO" instead of "ERROR"

2019-10-12 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14238:

Fix Version/s: 3.3.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> A log in NNThroughputBenchmark should  change log level to "INFO" instead of 
> "ERROR"
> 
>
> Key: HDFS-14238
> URL: https://issues.apache.org/jira/browse/HDFS-14238
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14238.patch
>
>
> In NNThroughputBenchmark#150, LOG.error("Log level = " + logLevel.toString()); 
> is used. This log level should be changed to LOG.info(), since no error occurs 
> here; it just tells us that the namenode log level has changed.






[jira] [Commented] (HDFS-14238) A log in NNThroughputBenchmark should change log level to "INFO" instead of "ERROR"

2019-10-12 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16950071#comment-16950071
 ] 

Ayush Saxena commented on HDFS-14238:
-

+1, Committed to trunk.

Thanx [~shenyinjie] for the contribution!!!

> A log in NNThroughputBenchmark should  change log level to "INFO" instead of 
> "ERROR"
> 
>
> Key: HDFS-14238
> URL: https://issues.apache.org/jira/browse/HDFS-14238
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Major
> Attachments: HDFS-14238.patch
>
>
> In NNThroughputBenchmark#150, LOG.error("Log level = " + logLevel.toString()); 
> is used. This log level should be changed to LOG.info(), since no error occurs 
> here; it just tells us that the namenode log level has changed.






[jira] [Updated] (HDDS-2220) HddsVolume needs a toString method

2019-10-12 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YiSheng Lien updated HDDS-2220:
---
Status: Patch Available  (was: In Progress)

> HddsVolume needs a toString method
> --
>
> Key: HDDS-2220
> URL: https://issues.apache.org/jira/browse/HDDS-2220
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Marton Elek
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This is logged to the console of datanodes:
> {code:java}
> 2019-10-01 11:37:59 INFO  HddsVolumeChecker:202 - Scheduled health check for 
> volume org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 11:52:59 INFO  ThrottledAsyncChecker:139 - Scheduling a check for 
> org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 11:52:59 INFO  HddsVolumeChecker:202 - Scheduled health check for 
> volume org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 12:07:59 INFO  ThrottledAsyncChecker:139 - Scheduling a check for 
> org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 12:07:59 INFO  HddsVolumeChecker:202 - Scheduled health check for 
> volume org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 12:22:59 INFO  ThrottledAsyncChecker:139 - Scheduling a check for 
> org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 12:22:59 INFO  HddsVolumeChecker:202 - Scheduled health check for 
> volume org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 12:37:59 INFO  ThrottledAsyncChecker:139 - Scheduling a check for 
> org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 12:37:59 INFO  HddsVolumeChecker:202 - Scheduled health check for 
> volume org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 12:52:59 INFO  ThrottledAsyncChecker:139 - Scheduling a check for 
> org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 12:52:59 INFO  HddsVolumeChecker:202 - Scheduled health check for 
> volume org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a 
> {code}
> Without a proper HddsVolume.toString it's hard to say which volume is 
> checked...
>  






[jira] [Work logged] (HDDS-2220) HddsVolume needs a toString method

2019-10-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2220?focusedWorklogId=327340&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-327340
 ]

ASF GitHub Bot logged work on HDDS-2220:


Author: ASF GitHub Bot
Created on: 12/Oct/19 14:58
Start Date: 12/Oct/19 14:58
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1652: HDDS-2220. 
HddsVolume needs a toString method.
URL: https://github.com/apache/hadoop/pull/1652#issuecomment-541332342
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 37 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 37 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 39 | hadoop-ozone in trunk failed. |
   | -1 | compile | 21 | hadoop-hdds in trunk failed. |
   | -1 | compile | 15 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 64 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 873 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 23 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 19 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 975 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 33 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 21 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 34 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 38 | hadoop-ozone in the patch failed. |
   | -1 | compile | 25 | hadoop-hdds in the patch failed. |
   | -1 | compile | 19 | hadoop-ozone in the patch failed. |
   | -1 | javac | 25 | hadoop-hdds in the patch failed. |
   | -1 | javac | 19 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 58 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 714 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 23 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 20 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 31 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 21 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 28 | hadoop-hdds in the patch failed. |
   | -1 | unit | 26 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 2379 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.3 Server=19.03.3 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1652/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1652 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux d67a9c4cba4d 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6e5cd52 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1652/1/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1652/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1652/1/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1652/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1652/1/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1652/1/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1652/1/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1652/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1652/1/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 

[jira] [Updated] (HDFS-14886) In NameNode Web UI's Startup Progress page, Loading edits always shows 0 sec

2019-10-12 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-14886:
-
Attachment: HDFS-14886.003.patch

> In NameNode Web UI's Startup Progress page, Loading edits always shows 0 sec
> 
>
> Key: HDFS-14886
> URL: https://issues.apache.org/jira/browse/HDFS-14886
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14886.001.patch, HDFS-14886.002.patch, 
> HDFS-14886.003.patch, HDFS-14886_After.png, HDFS-14886_before.png
>
>







[jira] [Assigned] (HDDS-2284) XceiverClientMetrics should be initialised as part of XceiverClientManager constructor

2019-10-12 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YiSheng Lien reassigned HDDS-2284:
--

Assignee: YiSheng Lien

> XceiverClientMetrics should be initialised as part of XceiverClientManager 
> constructor
> --
>
> Key: HDDS-2284
> URL: https://issues.apache.org/jira/browse/HDDS-2284
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: YiSheng Lien
>Priority: Major
>
> XceiverClientMetrics is currently initialized in the read/write path; the 
> metrics should instead be initialized when creating the XceiverClientManager.
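
A minimal sketch of the proposed fix, with assumed names (not the actual Ozone 
API):

{code:java}
// Create the metrics once, eagerly, in the manager's constructor instead
// of lazily on the read/write path.
class XceiverClientMetrics {
  static XceiverClientMetrics create() {
    return new XceiverClientMetrics();
  }
}

class XceiverClientManager {
  private final XceiverClientMetrics metrics;

  XceiverClientManager() {
    this.metrics = XceiverClientMetrics.create();  // one-time initialization
  }
}
{code}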






[jira] [Work logged] (HDDS-2220) HddsVolume needs a toString method

2019-10-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2220?focusedWorklogId=327337&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-327337
 ]

ASF GitHub Bot logged work on HDDS-2220:


Author: ASF GitHub Bot
Created on: 12/Oct/19 14:09
Start Date: 12/Oct/19 14:09
Worklog Time Spent: 10m 
  Work Description: cxorm commented on pull request #1652: HDDS-2220. 
HddsVolume needs a toString method.
URL: https://github.com/apache/hadoop/pull/1652
 
 
   Override toString to show the path of HddsVolume.
   
   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   
 



Issue Time Tracking
---

Worklog Id: (was: 327337)
Remaining Estimate: 0h
Time Spent: 10m

> HddsVolume needs a toString method
> --
>
> Key: HDDS-2220
> URL: https://issues.apache.org/jira/browse/HDDS-2220
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Marton Elek
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This is logged to the console of datanodes:
> {code:java}
> 2019-10-01 11:37:59 INFO  HddsVolumeChecker:202 - Scheduled health check for 
> volume org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 11:52:59 INFO  ThrottledAsyncChecker:139 - Scheduling a check for 
> org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 11:52:59 INFO  HddsVolumeChecker:202 - Scheduled health check for 
> volume org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 12:07:59 INFO  ThrottledAsyncChecker:139 - Scheduling a check for 
> org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 12:07:59 INFO  HddsVolumeChecker:202 - Scheduled health check for 
> volume org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 12:22:59 INFO  ThrottledAsyncChecker:139 - Scheduling a check for 
> org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 12:22:59 INFO  HddsVolumeChecker:202 - Scheduled health check for 
> volume org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 12:37:59 INFO  ThrottledAsyncChecker:139 - Scheduling a check for 
> org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 12:37:59 INFO  HddsVolumeChecker:202 - Scheduled health check for 
> volume org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 12:52:59 INFO  ThrottledAsyncChecker:139 - Scheduling a check for 
> org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 12:52:59 INFO  HddsVolumeChecker:202 - Scheduled health check for 
> volume org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a 
> {code}
> Without a proper HddsVolume.toString it's hard to say which volume is 
> checked...
>  






[jira] [Updated] (HDDS-2220) HddsVolume needs a toString method

2019-10-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2220:
-
Labels: newbie pull-request-available  (was: newbie)

> HddsVolume needs a toString method
> --
>
> Key: HDDS-2220
> URL: https://issues.apache.org/jira/browse/HDDS-2220
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Marton Elek
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
>
> This is logged to the console of datanodes:
> {code:java}
> 2019-10-01 11:37:59 INFO  HddsVolumeChecker:202 - Scheduled health check for 
> volume org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 11:52:59 INFO  ThrottledAsyncChecker:139 - Scheduling a check for 
> org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 11:52:59 INFO  HddsVolumeChecker:202 - Scheduled health check for 
> volume org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 12:07:59 INFO  ThrottledAsyncChecker:139 - Scheduling a check for 
> org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 12:07:59 INFO  HddsVolumeChecker:202 - Scheduled health check for 
> volume org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 12:22:59 INFO  ThrottledAsyncChecker:139 - Scheduling a check for 
> org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 12:22:59 INFO  HddsVolumeChecker:202 - Scheduled health check for 
> volume org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 12:37:59 INFO  ThrottledAsyncChecker:139 - Scheduling a check for 
> org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 12:37:59 INFO  HddsVolumeChecker:202 - Scheduled health check for 
> volume org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 12:52:59 INFO  ThrottledAsyncChecker:139 - Scheduling a check for 
> org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a
> 2019-10-01 12:52:59 INFO  HddsVolumeChecker:202 - Scheduled health check for 
> volume org.apache.hadoop.ozone.container.common.volume.HddsVolume@5460cf3a 
> {code}
> Without a proper HddsVolume.toString it's hard to say which volume is 
> checked...
>  






[jira] [Commented] (HDFS-14905) Backport HDFS persistent memory read cache support to branch-3.2

2019-10-12 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16950016#comment-16950016
 ] 

Hadoop QA commented on HDFS-14905:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
54s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
58s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
30s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
32s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 13m 
19s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
31s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
33s{color} | {color:green} branch-3.2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 14m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 14m 12s{color} 
| {color:red} root generated 3 new + 1322 unchanged - 3 fixed = 1325 total (was 
1325) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
24s{color} | {color:green} root: The patch generated 0 new + 781 unchanged - 11 
fixed = 781 total (was 792) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
11s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}152m 31s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
48s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}305m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason 

[jira] [Commented] (HDFS-14271) [SBN read] StandbyException is logged if Observer is the first NameNode

2019-10-12 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16950014#comment-16950014
 ] 

Hadoop QA commented on HDFS-14271:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
28s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 40s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 2 unchanged - 0 fixed = 4 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 30s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}108m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestFixKerberosTicketOrder |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.3 Server=19.03.3 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-14271 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12982838/HDFS-14271_1.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 641868cc16a5 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6e5cd52 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28077/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28077/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 

[jira] [Work logged] (HDDS-2034) Async RATIS pipeline creation and destroy through heartbeat commands

2019-10-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2034?focusedWorklogId=327290=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-327290
 ]

ASF GitHub Bot logged work on HDDS-2034:


Author: ASF GitHub Bot
Created on: 12/Oct/19 10:44
Start Date: 12/Oct/19 10:44
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1650: HDDS-2034. Async 
RATIS pipeline creation and destroy through datanode…
URL: https://github.com/apache/hadoop/pull/1650#issuecomment-541312850
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 179 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 15 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 94 | Maven dependency ordering for branch |
   | -1 | mvninstall | 67 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 55 | hadoop-ozone in trunk failed. |
   | -1 | compile | 23 | hadoop-hdds in trunk failed. |
   | -1 | compile | 19 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 82 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1164 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 23 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 25 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1284 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 44 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 23 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 33 | Maven dependency ordering for patch |
   | -1 | mvninstall | 38 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 41 | hadoop-ozone in the patch failed. |
   | -1 | compile | 27 | hadoop-hdds in the patch failed. |
   | -1 | compile | 16 | hadoop-ozone in the patch failed. |
   | -1 | cc | 27 | hadoop-hdds in the patch failed. |
   | -1 | cc | 16 | hadoop-ozone in the patch failed. |
   | -1 | javac | 27 | hadoop-hdds in the patch failed. |
   | -1 | javac | 16 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 29 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 805 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 18 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 17 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 28 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 17 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 24 | hadoop-hdds in the patch failed. |
   | -1 | unit | 23 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3087 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.3 Server=19.03.3 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1650/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1650 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml cc |
   | uname | Linux f338ff149963 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6e5cd52 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1650/1/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1650/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1650/1/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1650/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1650/1/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1650/1/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1650/1/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 

[jira] [Commented] (HDFS-14384) When lastLocatedBlock token expire, it will take 1~3s second to refetch it.

2019-10-12 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1694#comment-1694
 ] 

Hadoop QA commented on HDFS-14384:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
56s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  1s{color} | {color:orange} hadoop-hdfs-project: The patch generated 2 new + 
54 unchanged - 0 fixed = 56 total (was 54) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
9s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}111m  6s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}204m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.3 Server=19.03.3 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-14384 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12982833/HDFS-14384.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux eb65973e489c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |

[jira] [Created] (HDDS-2288) Delete hadoop-ozone and hadoop-hdds subprojects from apache trunk

2019-10-12 Thread Marton Elek (Jira)
Marton Elek created HDDS-2288:
-

 Summary: Delete hadoop-ozone and hadoop-hdds subprojects from 
apache trunk
 Key: HDDS-2288
 URL: https://issues.apache.org/jira/browse/HDDS-2288
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Marton Elek
Assignee: Marton Elek


As described in HDDS-2287, the ozone/hdds sources are moving to the 
apache/hadoop-ozone git repository.

All the remaining ozone/hdds files can be removed from trunk (including the 
hdds profile in the main pom.xml).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14271) [SBN read] StandbyException is logged if Observer is the first NameNode

2019-10-12 Thread Shen Yinjie (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shen Yinjie updated HDFS-14271:
---
Assignee: Shen Yinjie
  Status: Patch Available  (was: Open)

Attaching a simple fix for the retry exception log.

> [SBN read] StandbyException is logged if Observer is the first NameNode
> ---
>
> Key: HDFS-14271
> URL: https://issues.apache.org/jira/browse/HDFS-14271
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.3.0
>Reporter: Wei-Chiu Chuang
>Assignee: Shen Yinjie
>Priority: Minor
> Attachments: HDFS-14271_1.patch
>
>
> If I transition the first NameNode into Observer state and then create a 
> file from the command line, it prints the following StandbyException log 
> message as if the command failed, even though it actually completed successfully:
> {noformat}
> [root@weichiu-sbsr-1 ~]# hdfs dfs -touchz /tmp/abf
> 19/02/12 16:35:17 INFO retry.RetryInvocationHandler: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException):
>  Operation category WRITE is not supported in state observer. Visit 
> https://s.apache.org/sbnn-error
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:98)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1987)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1424)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:762)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:458)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:530)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:918)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:853)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2782)
> , while invoking $Proxy4.create over 
> [weichiu-sbsr-1.gce.cloudera.com/172.31.121.145:8020,weichiu-sbsr-2.gce.cloudera.com/172.31.121.140:8020].
>  Trying to failover immediately.
> {noformat}
> This is unlike the case when the first NameNode is the Standby, where this 
> StandbyException is suppressed.
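
To make the suppression idea concrete, here is a minimal sketch, assuming the fix belongs in the client-side failover logging path; the class, method, and messages below are hypothetical stand-ins and are not taken from HDFS-14271_1.patch:

{code:java}
// Hedged sketch only: all names here are hypothetical, not the real patch.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class FailoverLogSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(FailoverLogSketch.class);

  /** Local stand-in for org.apache.hadoop.ipc.StandbyException. */
  static class StandbyException extends Exception {
    StandbyException(String msg) { super(msg); }
  }

  /** Log a failover quietly when it was triggered by a StandbyException. */
  static void logFailover(Throwable cause, String proxyInfo) {
    if (cause instanceof StandbyException) {
      // Expected when the first proxy tried is a Standby/Observer: log at
      // DEBUG so a successful command does not look like a failure.
      LOG.debug("Failing over after StandbyException from {}", proxyInfo, cause);
    } else {
      LOG.info("Exception while invoking " + proxyInfo
          + ". Trying to failover immediately.", cause);
    }
  }

  public static void main(String[] args) {
    logFailover(new StandbyException(
        "Operation category WRITE is not supported in state observer"),
        "$Proxy4.create");
  }
}
{code}

The point is only the level check on the failover path; where exactly that check lands (for example in the retry/failover handler) is up to the actual patch.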



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2034) Async RATIS pipeline creation and destroy through heartbeat commands

2019-10-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2034?focusedWorklogId=327287=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-327287
 ]

ASF GitHub Bot logged work on HDDS-2034:


Author: ASF GitHub Bot
Created on: 12/Oct/19 09:57
Start Date: 12/Oct/19 09:57
Worklog Time Spent: 10m 
  Work Description: ChenSammi commented on issue #1650: HDDS-2034. Async 
RATIS pipeline creation and destroy through datanode…
URL: https://github.com/apache/hadoop/pull/1650#issuecomment-541308927
 
 
   Rebased on trunk.
   @lokeshj1703, @anuengineer, @xiaoyuyao, would you help review the patch at 
your convenience? 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 327287)
Time Spent: 12.5h  (was: 12h 20m)

> Async RATIS pipeline creation and destroy through heartbeat commands
> 
>
> Key: HDDS-2034
> URL: https://issues.apache.org/jira/browse/HDDS-2034
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 12.5h
>  Remaining Estimate: 0h
>
> Currently, pipeline creation and destruction are synchronous operations: SCM 
> directly connects to each datanode of the pipeline through a gRPC channel to 
> create or destroy the pipeline.
> This task is to remove the gRPC channel and instead send pipeline create and 
> destroy actions to each datanode through heartbeat commands.
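
For readers skimming the thread, a hedged sketch of the queue-and-piggyback pattern the description implies; all types and method names here are hypothetical and are not taken from this PR:

{code:java}
// Illustrative sketch only: hypothetical types, not the HDDS-2034 code.
import java.util.Map;
import java.util.Queue;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

public class HeartbeatCommandSketch {

  enum PipelineAction { CREATE, CLOSE }

  static class PipelineCommand {
    final String pipelineId;
    final PipelineAction action;
    PipelineCommand(String pipelineId, PipelineAction action) {
      this.pipelineId = pipelineId;
      this.action = action;
    }
  }

  // SCM side: commands waiting for each datanode's next heartbeat.
  private final Map<UUID, Queue<PipelineCommand>> pending =
      new ConcurrentHashMap<>();

  /** Enqueue a create command instead of making a synchronous gRPC call. */
  void createPipelineAsync(String pipelineId, Iterable<UUID> datanodes) {
    for (UUID dn : datanodes) {
      pending.computeIfAbsent(dn, k -> new ConcurrentLinkedQueue<>())
          .add(new PipelineCommand(pipelineId, PipelineAction.CREATE));
    }
  }

  /** Heartbeat handler: drain queued commands into the heartbeat response. */
  Queue<PipelineCommand> onHeartbeat(UUID datanodeId) {
    Queue<PipelineCommand> drained = pending.remove(datanodeId);
    return drained != null ? drained : new ConcurrentLinkedQueue<>();
  }
}
{code}

A real implementation would also have to handle a command enqueued concurrently with the drain and report command completion or failure back to SCM; the sketch only shows the transport change.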



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14271) [SBN read] StandbyException is logged if Observer is the first NameNode

2019-10-12 Thread Shen Yinjie (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shen Yinjie updated HDFS-14271:
---
Attachment: HDFS-14271_1.patch

> [SBN read] StandbyException is logged if Observer is the first NameNode
> ---
>
> Key: HDFS-14271
> URL: https://issues.apache.org/jira/browse/HDFS-14271
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.3.0
>Reporter: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-14271_1.patch
>
>
> If I transition the first NameNode into Observer state and then create a 
> file from the command line, it prints the following StandbyException log 
> message as if the command failed, even though it actually completed successfully:
> {noformat}
> [root@weichiu-sbsr-1 ~]# hdfs dfs -touchz /tmp/abf
> 19/02/12 16:35:17 INFO retry.RetryInvocationHandler: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException):
>  Operation category WRITE is not supported in state observer. Visit 
> https://s.apache.org/sbnn-error
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:98)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1987)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1424)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:762)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:458)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:530)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:918)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:853)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2782)
> , while invoking $Proxy4.create over 
> [weichiu-sbsr-1.gce.cloudera.com/172.31.121.145:8020,weichiu-sbsr-2.gce.cloudera.com/172.31.121.140:8020].
>  Trying to failover immediately.
> {noformat}
> This is unlike the case when the first NameNode is the Standby, where this 
> StandbyException is suppressed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2034) Async RATIS pipeline creation and destroy through heartbeat commands

2019-10-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2034?focusedWorklogId=327286=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-327286
 ]

ASF GitHub Bot logged work on HDDS-2034:


Author: ASF GitHub Bot
Created on: 12/Oct/19 09:51
Start Date: 12/Oct/19 09:51
Worklog Time Spent: 10m 
  Work Description: ChenSammi commented on pull request #1650: HDDS-2034. 
Async RATIS pipeline creation and destroy through datanode…
URL: https://github.com/apache/hadoop/pull/1650
 
 
   … heartbeat commands.
   
   Old PR link, where all previous comments are hosted:
   https://github.com/apache/hadoop/pull/1469
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 327286)
Time Spent: 12h 20m  (was: 12h 10m)

> Async RATIS pipeline creation and destroy through heartbeat commands
> 
>
> Key: HDDS-2034
> URL: https://issues.apache.org/jira/browse/HDDS-2034
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 12h 20m
>  Remaining Estimate: 0h
>
> Currently, pipeline creation and destruction are synchronous operations: SCM 
> directly connects to each datanode of the pipeline through a gRPC channel to 
> create or destroy the pipeline.
> This task is to remove the gRPC channel and instead send pipeline create and 
> destroy actions to each datanode through heartbeat commands.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2034) Async RATIS pipeline creation and destroy through heartbeat commands

2019-10-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2034?focusedWorklogId=327285=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-327285
 ]

ASF GitHub Bot logged work on HDDS-2034:


Author: ASF GitHub Bot
Created on: 12/Oct/19 09:48
Start Date: 12/Oct/19 09:48
Worklog Time Spent: 10m 
  Work Description: ChenSammi commented on pull request #1469: HDDS-2034. 
Async RATIS pipeline creation and destroy through heartbea…
URL: https://github.com/apache/hadoop/pull/1469
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 327285)
Time Spent: 12h 10m  (was: 12h)

> Async RATIS pipeline creation and destroy through heartbeat commands
> 
>
> Key: HDDS-2034
> URL: https://issues.apache.org/jira/browse/HDDS-2034
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 12h 10m
>  Remaining Estimate: 0h
>
> Currently, pipeline creation and destruction are synchronous operations: SCM 
> directly connects to each datanode of the pipeline through a gRPC channel to 
> create or destroy the pipeline.
> This task is to remove the gRPC channel and instead send pipeline create and 
> destroy actions to each datanode through heartbeat commands.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14646) Standby NameNode should not upload fsimage to an inappropriate NameNode.

2019-10-12 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949984#comment-16949984
 ] 

Hadoop QA commented on HDFS-14646:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 75 unchanged - 12 fixed = 75 total (was 87) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 59s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 23s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}175m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.3 Server=19.03.3 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-14646 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12982832/HDFS-14646.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a0bfb1dc2c9d 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c561a70 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28074/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-14899) Use Relative URLS in Hadoop HDFS RBF

2019-10-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949960#comment-16949960
 ] 

Hudson commented on HDFS-14899:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17528 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17528/])
HDFS-14899. Use Relative URLS in Hadoop HDFS RBF. Contributed by David 
(ayushsaxena: rev 6e5cd5273f1107635867ee863cb0e17ef7cc4afa)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.html
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.js


> Use Relative URLS in Hadoop HDFS RBF
> 
>
> Key: HDFS-14899
> URL: https://issues.apache.org/jira/browse/HDFS-14899
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14899.1.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14899) Use Relative URLS in Hadoop HDFS RBF

2019-10-12 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14899:

Fix Version/s: 3.3.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Use Relative URLS in Hadoop HDFS RBF
> 
>
> Key: HDFS-14899
> URL: https://issues.apache.org/jira/browse/HDFS-14899
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14899.1.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14899) Use Relative URLS in Hadoop HDFS RBF

2019-10-12 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949957#comment-16949957
 ] 

Ayush Saxena commented on HDFS-14899:
-

Committed to trunk.

Thanx [~belugabehr] for the contribution and [~elgoiri] for the review!!!

> Use Relative URLS in Hadoop HDFS RBF
> 
>
> Key: HDFS-14899
> URL: https://issues.apache.org/jira/browse/HDFS-14899
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HDFS-14899.1.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1737) Add Volume check in KeyManager and File Operations

2019-10-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1737?focusedWorklogId=327248=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-327248
 ]

ASF GitHub Bot logged work on HDDS-1737:


Author: ASF GitHub Bot
Created on: 12/Oct/19 07:21
Start Date: 12/Oct/19 07:21
Worklog Time Spent: 10m 
  Work Description: cxorm commented on issue #1559: HDDS-1737. Add Volume 
check in KeyManager and File Operations.
URL: https://github.com/apache/hadoop/pull/1559#issuecomment-541295619
 
 
   Thanks @bharatviswa504.
   The unit tests pass on my machine,
   so I want to run the tests on this PR again to check the issue.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 327248)
Time Spent: 3h 50m  (was: 3h 40m)

> Add Volume check in KeyManager and File Operations
> --
>
> Key: HDDS-1737
> URL: https://issues.apache.org/jira/browse/HDDS-1737
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> This is to address a TODO: add a volume existence check when performing 
> Key/File operations.
>  
> // TODO: Not checking volume exist here, once we have full cache we can
> // add volume exist check also.
>  
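
A minimal sketch of the guard that TODO asks for, assuming the check sits in the OM key/file path and consults the (eventually fully cached) volume table; all names below are hypothetical:

{code:java}
// Hedged sketch only: hypothetical names, not the actual HDDS-1737 change.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class VolumeCheckSketch {

  /** Local stand-in for Ozone Manager's OMException. */
  static class OMException extends Exception {
    OMException(String msg) { super(msg); }
  }

  // Stand-in for the OM volume table / full volume cache.
  private final Map<String, Object> volumeTable = new ConcurrentHashMap<>();

  private void checkVolumeExists(String volumeName) throws OMException {
    if (!volumeTable.containsKey(volumeName)) {
      throw new OMException("VOLUME_NOT_FOUND: " + volumeName);
    }
  }

  void openKey(String volume, String bucket, String key) throws OMException {
    checkVolumeExists(volume);  // the new guard addressing the TODO
    // ... existing bucket existence check and key handling would follow ...
  }
}
{code}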



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1737) Add Volume check in KeyManager and File Operations

2019-10-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1737?focusedWorklogId=327249=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-327249
 ]

ASF GitHub Bot logged work on HDDS-1737:


Author: ASF GitHub Bot
Created on: 12/Oct/19 07:21
Start Date: 12/Oct/19 07:21
Worklog Time Spent: 10m 
  Work Description: cxorm commented on issue #1559: HDDS-1737. Add Volume 
check in KeyManager and File Operations.
URL: https://github.com/apache/hadoop/pull/1559#issuecomment-541295628
 
 
   /test
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 327249)
Time Spent: 4h  (was: 3h 50m)

> Add Volume check in KeyManager and File Operations
> --
>
> Key: HDDS-1737
> URL: https://issues.apache.org/jira/browse/HDDS-1737
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> This is to address a TODO: add a volume existence check when performing 
> Key/File operations.
>  
> // TODO: Not checking volume exist here, once we have full cache we can
> // add volume exist check also.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14886) In NameNode Web UI's Startup Progress page, Loading edits always shows 0 sec

2019-10-12 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949944#comment-16949944
 ] 

Hadoop QA commented on HDFS-14886:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} HDFS-14886 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-14886 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28075/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> In NameNode Web UI's Startup Progress page, Loading edits always shows 0 sec
> 
>
> Key: HDFS-14886
> URL: https://issues.apache.org/jira/browse/HDFS-14886
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14886.001.patch, HDFS-14886.002.patch, 
> HDFS-14886_After.png, HDFS-14886_before.png
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14886) In NameNode Web UI's Startup Progress page, Loading edits always shows 0 sec

2019-10-12 Thread Surendra Singh Lilhore (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949943#comment-16949943
 ] 

Surendra Singh Lilhore commented on HDFS-14886:
---

+1 LGTM

Triggered the build again.

> In NameNode Web UI's Startup Progress page, Loading edits always shows 0 sec
> 
>
> Key: HDFS-14886
> URL: https://issues.apache.org/jira/browse/HDFS-14886
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14886.001.patch, HDFS-14886.002.patch, 
> HDFS-14886_After.png, HDFS-14886_before.png
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14384) When lastLocatedBlock token expire, it will take 1~3s second to refetch it.

2019-10-12 Thread Surendra Singh Lilhore (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949940#comment-16949940
 ] 

Surendra Singh Lilhore commented on HDFS-14384:
---

Attached the v2 patch; it assigns newBlocks directly to locatedBlocks.

> When lastLocatedBlock token expire, it will take 1~3s second to refetch it.
> ---
>
> Key: HDFS-14384
> URL: https://issues.apache.org/jira/browse/HDFS-14384
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.2
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-14384.001.patch, HDFS-14384.002.patch
>
>
> Scenario:
>  1. Write a file with one block that is still in progress.
>   2. Open an input stream and close the output stream.
>   3. Wait for block token expiration and read the data.
>   4. The last block takes 1~3 seconds to read.
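
For illustration, a hedged sketch of the "assign newBlocks directly" idea from the comment above; LocatedBlocks here is a local stand-in, not the real org.apache.hadoop.hdfs.protocol.LocatedBlocks, and the method names are hypothetical:

{code:java}
// Hedged sketch only: hypothetical names, not the HDFS-14384 patch itself.
import java.util.Arrays;
import java.util.List;

public class TokenRefreshSketch {

  static class LocatedBlocks {
    final List<String> blocks;
    LocatedBlocks(List<String> blocks) { this.blocks = blocks; }
  }

  private LocatedBlocks locatedBlocks =
      new LocatedBlocks(Arrays.asList("blk_1_expiredToken"));

  /** Pretend NameNode RPC that returns fresh locations with fresh tokens. */
  private LocatedBlocks fetchLocatedBlocks() {
    return new LocatedBlocks(Arrays.asList("blk_1_freshToken"));
  }

  void refreshOnInvalidToken() {
    LocatedBlocks newBlocks = fetchLocatedBlocks();
    // Assign the fresh result directly instead of merging block by block,
    // so the reader skips the 1~3s retry window described in the scenario.
    locatedBlocks = newBlocks;
  }

  public static void main(String[] args) {
    TokenRefreshSketch s = new TokenRefreshSketch();
    s.refreshOnInvalidToken();
    System.out.println(s.locatedBlocks.blocks);  // prints [blk_1_freshToken]
  }
}
{code}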



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14384) When lastLocatedBlock token expire, it will take 1~3s second to refetch it.

2019-10-12 Thread Surendra Singh Lilhore (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-14384:
--
Attachment: HDFS-14384.002.patch

> When lastLocatedBlock token expire, it will take 1~3s second to refetch it.
> ---
>
> Key: HDFS-14384
> URL: https://issues.apache.org/jira/browse/HDFS-14384
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.2
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-14384.001.patch, HDFS-14384.002.patch
>
>
> Scenario:
>  1. Write a file with one block that is still in progress.
>   2. Open an input stream and close the output stream.
>   3. Wait for block token expiration and read the data.
>   4. The last block takes 1~3 seconds to read.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-2186) Fix tests using MiniOzoneCluster for its memory related exceptions

2019-10-12 Thread Li Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-2186 started by Li Cheng.
--
> Fix tests using MiniOzoneCluster for its memory related exceptions
> --
>
> Key: HDDS-2186
> URL: https://issues.apache.org/jira/browse/HDDS-2186
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: HDDS-1564
>Reporter: Li Cheng
>Assignee: Li Cheng
>Priority: Major
>  Labels: flaky-test
> Fix For: HDDS-1564
>
>
> After enabling multi-raft usage, MiniOzoneCluster appears flaky and reports a 
> number of 'out of memory' exceptions in Ratis. Sample stack traces are attached.
>  
> 2019-09-26 15:12:22,824 
> [2e1e11ca-833a-4fbc-b948-3d93fc8e7288@group-218F3868CEA9-SegmentedRaftLogWorker]
>  ERROR segmented.SegmentedRaftLogWorker 
> (SegmentedRaftLogWorker.java:run(323)) - 
> 2e1e11ca-833a-4fbc-b948-3d93fc8e7288@group-218F3868CEA9-SegmentedRaftLogWorker
>  hit exception2019-09-26 15:12:22,824 
> [2e1e11ca-833a-4fbc-b948-3d93fc8e7288@group-218F3868CEA9-SegmentedRaftLogWorker]
>  ERROR segmented.SegmentedRaftLogWorker 
> (SegmentedRaftLogWorker.java:run(323)) - 
> 2e1e11ca-833a-4fbc-b948-3d93fc8e7288@group-218F3868CEA9-SegmentedRaftLogWorker
>  hit exceptionjava.lang.OutOfMemoryError: Direct buffer memory at 
> java.nio.Bits.reserveMemory(Bits.java:694) at 
> java.nio.DirectByteBuffer.(DirectByteBuffer.java:123) at 
> java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311) at 
> org.apache.ratis.server.raftlog.segmented.BufferedWriteChannel.(BufferedWriteChannel.java:41)
>  at 
> org.apache.ratis.server.raftlog.segmented.SegmentedRaftLogOutputStream.(SegmentedRaftLogOutputStream.java:72)
>  at 
> org.apache.ratis.server.raftlog.segmented.SegmentedRaftLogWorker$StartLogSegment.execute(SegmentedRaftLogWorker.java:566)
>  at 
> org.apache.ratis.server.raftlog.segmented.SegmentedRaftLogWorker.run(SegmentedRaftLogWorker.java:289)
>  at java.lang.Thread.run(Thread.java:748)
>  
> which leads to:
> 2019-09-26 15:12:23,029 [RATISCREATEPIPELINE1] ERROR 
> pipeline.RatisPipelineProvider 
> (RatisPipelineProvider.java:lambda$null$2(181)) - Failed invoke Ratis rpc 
> org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider$$Lambda$297/1222454951@55d1e990
>  for c1f4d375-683b-42fe-983b-428a63aa88032019-09-26 15:12:23,029 
> [RATISCREATEPIPELINE1] ERROR pipeline.RatisPipelineProvider 
> (RatisPipelineProvider.java:lambda$null$2(181)) - Failed invoke Ratis rpc 
> org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider$$Lambda$297/1222454951@55d1e990
>  for 
> c1f4d375-683b-42fe-983b-428a63aa8803org.apache.ratis.protocol.TimeoutIOException:
>  deadline exceeded after 2999881264ns at 
> org.apache.ratis.grpc.GrpcUtil.tryUnwrapException(GrpcUtil.java:82) at 
> org.apache.ratis.grpc.GrpcUtil.unwrapException(GrpcUtil.java:75) at 
> org.apache.ratis.grpc.client.GrpcClientProtocolClient.blockingCall(GrpcClientProtocolClient.java:178)
>  at 
> org.apache.ratis.grpc.client.GrpcClientProtocolClient.groupAdd(GrpcClientProtocolClient.java:147)
>  at 
> org.apache.ratis.grpc.client.GrpcClientRpc.sendRequest(GrpcClientRpc.java:94) 
> at 
> org.apache.ratis.client.impl.RaftClientImpl.sendRequest(RaftClientImpl.java:278)
>  at 
> org.apache.ratis.client.impl.RaftClientImpl.groupAdd(RaftClientImpl.java:205) 
> at 
> org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider.lambda$initializePipeline$1(RatisPipelineProvider.java:142)
>  at 
> org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider.lambda$null$2(RatisPipelineProvider.java:177)
>  at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184) 
> at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
>  at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) at 
> java.util.stream.ForEachOps$ForEachTask.compute(ForEachOps.java:291) at 
> java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731) at 
> java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) at 
> java.util.concurrent.ForkJoinTask.doInvoke(ForkJoinTask.java:401) at 
> java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:734) at 
> java.util.stream.ForEachOps$ForEachOp.evaluateParallel(ForEachOps.java:160) 
> at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateParallel(ForEachOps.java:174)
>  at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233) at 
> java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418) at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:583) 
> at 
> org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider.lambda$callRatisRpc$3(RatisPipelineProvider.java:171)
>  at 
> 

[jira] [Updated] (HDDS-2287) Move ozone source code to apache/hadoop-ozone from apache/hadoop

2019-10-12 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek updated HDDS-2287:
--
Description: 
*This issue is created so that the assigned number can be used in any technical 
commits, making it easy to follow the root reason of each commit...*

 

As discussed and voted on the mailing lists, the Apache Hadoop Ozone source code 
will be removed from the Hadoop trunk and stored in a separate repository.

 

Original discussion is here:

[https://lists.apache.org/thread.html/ef01b7def94ba58f746875999e419e10645437423ab9af19b32821e7@%3Chdfs-dev.hadoop.apache.org%3E]

(It started as a discussion, but as everybody began to vote it finished with a 
call for a lazy consensus vote.)

 

Technical proposal is shared on the wiki: 
[https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+Ozone+source+tree+split]

 

Discussed at the community meeting: 
[https://cwiki.apache.org/confluence/display/HADOOP/2019-09-30+Meeting+notes]

 

This was shared on the mailing list to get more feedback: 
[https://lists.apache.org/thread.html/ed608c708ea302675ae5e39636ed73613f47a93c2ddfbd3c9e24dbae@%3Chdfs-dev.hadoop.apache.org%3E]

 

  was:
As discussed and voted on the mailing lists, the Apache Hadoop Ozone source code 
will be removed from the Hadoop trunk and stored in a separate repository.

 

Original discussion is here:

[https://lists.apache.org/thread.html/ef01b7def94ba58f746875999e419e10645437423ab9af19b32821e7@%3Chdfs-dev.hadoop.apache.org%3E]

(It started as a discussion, but as everybody began to vote it finished with a 
call for a lazy consensus vote.)

 

Technical proposal is shared on the wiki: 
[https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+Ozone+source+tree+split]

 

Discussed at the community meeting: 
[https://cwiki.apache.org/confluence/display/HADOOP/2019-09-30+Meeting+notes]

 

This was shared on the mailing list to get more feedback: 
[https://lists.apache.org/thread.html/ed608c708ea302675ae5e39636ed73613f47a93c2ddfbd3c9e24dbae@%3Chdfs-dev.hadoop.apache.org%3E]

 

This issue is created so that the assigned number can be used in any technical 
commits, making it easy to follow the root reason of each commit...


> Move ozone source code to apache/hadoop-ozone from apache/hadoop
> 
>
> Key: HDDS-2287
> URL: https://issues.apache.org/jira/browse/HDDS-2287
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>
> *This issue is created so that the assigned number can be used in any technical 
> commits, making it easy to follow the root reason of each commit...*
>  
> As discussed and voted on the mailing lists, the Apache Hadoop Ozone source code 
> will be removed from the Hadoop trunk and stored in a separate repository.
>  
> Original discussion is here:
> [https://lists.apache.org/thread.html/ef01b7def94ba58f746875999e419e10645437423ab9af19b32821e7@%3Chdfs-dev.hadoop.apache.org%3E]
> (It started as a discussion, but as everybody began to vote it finished with a 
> call for a lazy consensus vote.)
>  
> Technical proposal is shared on the wiki: 
> [https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+Ozone+source+tree+split]
>  
> Discussed at the community meeting: 
> [https://cwiki.apache.org/confluence/display/HADOOP/2019-09-30+Meeting+notes]
>  
> This was shared on the mailing list to get more feedback: 
> [https://lists.apache.org/thread.html/ed608c708ea302675ae5e39636ed73613f47a93c2ddfbd3c9e24dbae@%3Chdfs-dev.hadoop.apache.org%3E]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2287) Move ozone source code to apache/hadoop-ozone from apache/hadoop

2019-10-12 Thread Marton Elek (Jira)
Marton Elek created HDDS-2287:
-

 Summary: Move ozone source code to apache/hadoop-ozone from 
apache/hadoop
 Key: HDDS-2287
 URL: https://issues.apache.org/jira/browse/HDDS-2287
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Marton Elek
Assignee: Marton Elek


As discussed and voted on the mailing lists, the Apache Hadoop Ozone source code 
will be removed from the Hadoop trunk and stored in a separate repository.

 

Original discussion is here:

[https://lists.apache.org/thread.html/ef01b7def94ba58f746875999e419e10645437423ab9af19b32821e7@%3Chdfs-dev.hadoop.apache.org%3E]

(It started as a discussion, but as everybody began to vote it finished with a 
call for a lazy consensus vote.)

 

Technical proposal is shared on the wiki: 
[https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+Ozone+source+tree+split]

 

Discussed at the community meeting: 
[https://cwiki.apache.org/confluence/display/HADOOP/2019-09-30+Meeting+notes]

 

This was shared on the mailing list to get more feedback: 
[https://lists.apache.org/thread.html/ed608c708ea302675ae5e39636ed73613f47a93c2ddfbd3c9e24dbae@%3Chdfs-dev.hadoop.apache.org%3E]

 

This issue is created so that the assigned number can be used in any technical 
commits, making it easy to follow the root reason of each commit...



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2186) Fix tests using MiniOzoneCluster for its memory related exceptions

2019-10-12 Thread Li Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949933#comment-16949933
 ] 

Li Cheng commented on HDDS-2186:


[https://github.com/apache/hadoop/pull/1431]

> Fix tests using MiniOzoneCluster for its memory related exceptions
> --
>
> Key: HDDS-2186
> URL: https://issues.apache.org/jira/browse/HDDS-2186
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: HDDS-1564
>Reporter: Li Cheng
>Assignee: Li Cheng
>Priority: Major
>  Labels: flaky-test
> Fix For: HDDS-1564
>
>
> After enabling multi-raft usage, MiniOzoneCluster appears flaky and reports a 
> number of 'out of memory' exceptions in Ratis. Sample stack traces are attached.
>  
> 2019-09-26 15:12:22,824 
> [2e1e11ca-833a-4fbc-b948-3d93fc8e7288@group-218F3868CEA9-SegmentedRaftLogWorker]
>  ERROR segmented.SegmentedRaftLogWorker 
> (SegmentedRaftLogWorker.java:run(323)) - 
> 2e1e11ca-833a-4fbc-b948-3d93fc8e7288@group-218F3868CEA9-SegmentedRaftLogWorker
>  hit exception
> java.lang.OutOfMemoryError: Direct buffer memory
>  at java.nio.Bits.reserveMemory(Bits.java:694)
>  at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
>  at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
>  at org.apache.ratis.server.raftlog.segmented.BufferedWriteChannel.<init>(BufferedWriteChannel.java:41)
>  at org.apache.ratis.server.raftlog.segmented.SegmentedRaftLogOutputStream.<init>(SegmentedRaftLogOutputStream.java:72)
>  at org.apache.ratis.server.raftlog.segmented.SegmentedRaftLogWorker$StartLogSegment.execute(SegmentedRaftLogWorker.java:566)
>  at org.apache.ratis.server.raftlog.segmented.SegmentedRaftLogWorker.run(SegmentedRaftLogWorker.java:289)
>  at java.lang.Thread.run(Thread.java:748)
>  
> which leads to:
> 2019-09-26 15:12:23,029 [RATISCREATEPIPELINE1] ERROR 
> pipeline.RatisPipelineProvider 
> (RatisPipelineProvider.java:lambda$null$2(181)) - Failed invoke Ratis rpc 
> org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider$$Lambda$297/1222454951@55d1e990
>  for c1f4d375-683b-42fe-983b-428a63aa8803
> org.apache.ratis.protocol.TimeoutIOException: deadline exceeded after 2999881264ns
>  at org.apache.ratis.grpc.GrpcUtil.tryUnwrapException(GrpcUtil.java:82)
>  at org.apache.ratis.grpc.GrpcUtil.unwrapException(GrpcUtil.java:75)
>  at org.apache.ratis.grpc.client.GrpcClientProtocolClient.blockingCall(GrpcClientProtocolClient.java:178)
>  at org.apache.ratis.grpc.client.GrpcClientProtocolClient.groupAdd(GrpcClientProtocolClient.java:147)
>  at org.apache.ratis.grpc.client.GrpcClientRpc.sendRequest(GrpcClientRpc.java:94)
>  at org.apache.ratis.client.impl.RaftClientImpl.sendRequest(RaftClientImpl.java:278)
>  at org.apache.ratis.client.impl.RaftClientImpl.groupAdd(RaftClientImpl.java:205)
>  at org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider.lambda$initializePipeline$1(RatisPipelineProvider.java:142)
>  at org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider.lambda$null$2(RatisPipelineProvider.java:177)
>  at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
>  at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
>  at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
>  at java.util.stream.ForEachOps$ForEachTask.compute(ForEachOps.java:291)
>  at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731)
>  at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
>  at java.util.concurrent.ForkJoinTask.doInvoke(ForkJoinTask.java:401)
>  at java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:734)
>  at java.util.stream.ForEachOps$ForEachOp.evaluateParallel(ForEachOps.java:160)
>  at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateParallel(ForEachOps.java:174)
>  at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
>  at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
>  at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:583)
>  at org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider.lambda$callRatisRpc$3(RatisPipelineProvider.java:171)

[jira] [Updated] (HDDS-2186) Fix tests using MiniOzoneCluster for its memory related exceptions

2019-10-12 Thread Li Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Cheng updated HDDS-2186:
---
Fix Version/s: HDDS-1564

> Fix tests using MiniOzoneCluster for its memory related exceptions
> --
>
> Key: HDDS-2186
> URL: https://issues.apache.org/jira/browse/HDDS-2186
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Affects Versions: HDDS-1564
>Reporter: Li Cheng
>Assignee: Li Cheng
>Priority: Major
>  Labels: flaky-test
> Fix For: HDDS-1564
>
>
> After enabling multi-raft, MiniOzoneCluster seems to be flaky and reports a 
> bunch of 'out of memory' exceptions in Ratis. Sample stack traces are attached.
>  
> 2019-09-26 15:12:22,824 
> [2e1e11ca-833a-4fbc-b948-3d93fc8e7288@group-218F3868CEA9-SegmentedRaftLogWorker]
>  ERROR segmented.SegmentedRaftLogWorker 
> (SegmentedRaftLogWorker.java:run(323)) - 
> 2e1e11ca-833a-4fbc-b948-3d93fc8e7288@group-218F3868CEA9-SegmentedRaftLogWorker
>  hit exception
> java.lang.OutOfMemoryError: Direct buffer memory
>  at java.nio.Bits.reserveMemory(Bits.java:694)
>  at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
>  at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
>  at org.apache.ratis.server.raftlog.segmented.BufferedWriteChannel.<init>(BufferedWriteChannel.java:41)
>  at org.apache.ratis.server.raftlog.segmented.SegmentedRaftLogOutputStream.<init>(SegmentedRaftLogOutputStream.java:72)
>  at org.apache.ratis.server.raftlog.segmented.SegmentedRaftLogWorker$StartLogSegment.execute(SegmentedRaftLogWorker.java:566)
>  at org.apache.ratis.server.raftlog.segmented.SegmentedRaftLogWorker.run(SegmentedRaftLogWorker.java:289)
>  at java.lang.Thread.run(Thread.java:748)
>  
> which leads to:
> 2019-09-26 15:12:23,029 [RATISCREATEPIPELINE1] ERROR 
> pipeline.RatisPipelineProvider 
> (RatisPipelineProvider.java:lambda$null$2(181)) - Failed invoke Ratis rpc 
> org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider$$Lambda$297/1222454951@55d1e990
>  for c1f4d375-683b-42fe-983b-428a63aa8803
> org.apache.ratis.protocol.TimeoutIOException: deadline exceeded after 2999881264ns
>  at org.apache.ratis.grpc.GrpcUtil.tryUnwrapException(GrpcUtil.java:82)
>  at org.apache.ratis.grpc.GrpcUtil.unwrapException(GrpcUtil.java:75)
>  at org.apache.ratis.grpc.client.GrpcClientProtocolClient.blockingCall(GrpcClientProtocolClient.java:178)
>  at org.apache.ratis.grpc.client.GrpcClientProtocolClient.groupAdd(GrpcClientProtocolClient.java:147)
>  at org.apache.ratis.grpc.client.GrpcClientRpc.sendRequest(GrpcClientRpc.java:94)
>  at org.apache.ratis.client.impl.RaftClientImpl.sendRequest(RaftClientImpl.java:278)
>  at org.apache.ratis.client.impl.RaftClientImpl.groupAdd(RaftClientImpl.java:205)
>  at org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider.lambda$initializePipeline$1(RatisPipelineProvider.java:142)
>  at org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider.lambda$null$2(RatisPipelineProvider.java:177)
>  at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
>  at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
>  at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
>  at java.util.stream.ForEachOps$ForEachTask.compute(ForEachOps.java:291)
>  at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731)
>  at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
>  at java.util.concurrent.ForkJoinTask.doInvoke(ForkJoinTask.java:401)
>  at java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:734)
>  at java.util.stream.ForEachOps$ForEachOp.evaluateParallel(ForEachOps.java:160)
>  at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateParallel(ForEachOps.java:174)
>  at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
>  at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
>  at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:583)
>  at org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider.lambda$callRatisRpc$3(RatisPipelineProvider.java:171)
>  at 
> 

[jira] [Commented] (HDFS-14739) RBF: LS command for mount point shows wrong owner and permission information.

2019-10-12 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949931#comment-16949931
 ] 

Hadoop QA commented on HDFS-14739:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 25s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.fs.contract.router.web.TestRouterWebHDFSContractSeek |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.3 Server=19.03.3 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-14739 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12982828/HDFS-14739-trunk-010.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a21864aeb3f3 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c561a70 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28072/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28072/testReport/ |
| Max. process+thread count | 2463 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28072/console |
| Powered by | Apache Yetus 0.8.0   

[jira] [Updated] (HDDS-2186) Fix tests using MiniOzoneCluster for its memory related exceptions

2019-10-12 Thread Li Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Cheng updated HDDS-2186:
---
Component/s: test

> Fix tests using MiniOzoneCluster for its memory related exceptions
> --
>
> Key: HDDS-2186
> URL: https://issues.apache.org/jira/browse/HDDS-2186
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: HDDS-1564
>Reporter: Li Cheng
>Assignee: Li Cheng
>Priority: Major
>  Labels: flaky-test
> Fix For: HDDS-1564
>
>
> After enabling multi-raft, MiniOzoneCluster seems to be flaky and reports a 
> bunch of 'out of memory' exceptions in Ratis. Sample stack traces are attached.
>  
> 2019-09-26 15:12:22,824 
> [2e1e11ca-833a-4fbc-b948-3d93fc8e7288@group-218F3868CEA9-SegmentedRaftLogWorker]
>  ERROR segmented.SegmentedRaftLogWorker 
> (SegmentedRaftLogWorker.java:run(323)) - 
> 2e1e11ca-833a-4fbc-b948-3d93fc8e7288@group-218F3868CEA9-SegmentedRaftLogWorker
>  hit exception
> java.lang.OutOfMemoryError: Direct buffer memory
>  at java.nio.Bits.reserveMemory(Bits.java:694)
>  at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
>  at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
>  at org.apache.ratis.server.raftlog.segmented.BufferedWriteChannel.<init>(BufferedWriteChannel.java:41)
>  at org.apache.ratis.server.raftlog.segmented.SegmentedRaftLogOutputStream.<init>(SegmentedRaftLogOutputStream.java:72)
>  at org.apache.ratis.server.raftlog.segmented.SegmentedRaftLogWorker$StartLogSegment.execute(SegmentedRaftLogWorker.java:566)
>  at org.apache.ratis.server.raftlog.segmented.SegmentedRaftLogWorker.run(SegmentedRaftLogWorker.java:289)
>  at java.lang.Thread.run(Thread.java:748)
>  
> which leads to:
> 2019-09-26 15:12:23,029 [RATISCREATEPIPELINE1] ERROR 
> pipeline.RatisPipelineProvider 
> (RatisPipelineProvider.java:lambda$null$2(181)) - Failed invoke Ratis rpc 
> org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider$$Lambda$297/1222454951@55d1e990
>  for c1f4d375-683b-42fe-983b-428a63aa8803
> org.apache.ratis.protocol.TimeoutIOException: deadline exceeded after 2999881264ns
>  at org.apache.ratis.grpc.GrpcUtil.tryUnwrapException(GrpcUtil.java:82)
>  at org.apache.ratis.grpc.GrpcUtil.unwrapException(GrpcUtil.java:75)
>  at org.apache.ratis.grpc.client.GrpcClientProtocolClient.blockingCall(GrpcClientProtocolClient.java:178)
>  at org.apache.ratis.grpc.client.GrpcClientProtocolClient.groupAdd(GrpcClientProtocolClient.java:147)
>  at org.apache.ratis.grpc.client.GrpcClientRpc.sendRequest(GrpcClientRpc.java:94)
>  at org.apache.ratis.client.impl.RaftClientImpl.sendRequest(RaftClientImpl.java:278)
>  at org.apache.ratis.client.impl.RaftClientImpl.groupAdd(RaftClientImpl.java:205)
>  at org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider.lambda$initializePipeline$1(RatisPipelineProvider.java:142)
>  at org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider.lambda$null$2(RatisPipelineProvider.java:177)
>  at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
>  at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
>  at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
>  at java.util.stream.ForEachOps$ForEachTask.compute(ForEachOps.java:291)
>  at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731)
>  at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
>  at java.util.concurrent.ForkJoinTask.doInvoke(ForkJoinTask.java:401)
>  at java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:734)
>  at java.util.stream.ForEachOps$ForEachOp.evaluateParallel(ForEachOps.java:160)
>  at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateParallel(ForEachOps.java:174)
>  at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
>  at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
>  at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:583)
>  at org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider.lambda$callRatisRpc$3(RatisPipelineProvider.java:171)
>  at 
> 

[jira] [Commented] (HDDS-2186) Fix tests using MiniOzoneCluster for its memory related exceptions

2019-10-12 Thread Li Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949930#comment-16949930
 ] 

Li Cheng commented on HDDS-2186:


It turns out the MiniOzoneCluster running out of memory is triggered by 
endless pipeline creation. Logic to restrict the endless pipeline creation has 
been added in [https://github.com/apache/hadoop/pull/1431]. 
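
For illustration only, a minimal sketch of that kind of restriction. The class 
and parameter names (PipelineCreationGuard, maxPipelinesPerDatanode) are 
hypothetical and not taken from the PR, which may structure the check 
differently:

{code:java}
import java.util.List;
import java.util.Map;

/** Hypothetical guard: stop creating new Ratis pipelines once the healthy
 *  datanodes are saturated, instead of retrying (and allocating new raft-log
 *  direct buffers) endlessly. */
public final class PipelineCreationGuard {

  private final int maxPipelinesPerDatanode;

  public PipelineCreationGuard(int maxPipelinesPerDatanode) {
    this.maxPipelinesPerDatanode = maxPipelinesPerDatanode;
  }

  /** Returns true only if enough under-loaded datanodes remain to host one
   *  more pipeline of the given replication factor. */
  public boolean canCreatePipeline(List<String> healthyDatanodes,
      Map<String, Integer> pipelineCountPerDatanode, int replicationFactor) {
    long eligible = healthyDatanodes.stream()
        .filter(dn -> pipelineCountPerDatanode.getOrDefault(dn, 0)
            < maxPipelinesPerDatanode)
        .count();
    return eligible >= replicationFactor;
  }
}
{code}

If a check like this gates the create path, a MiniOzoneCluster test stops 
spawning new SegmentedRaftLogWorker instances (and their direct buffers) once 
the nodes are full, which matches the OutOfMemoryError pattern in the stacks 
quoted below.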

> Fix tests using MiniOzoneCluster for its memory related exceptions
> --
>
> Key: HDDS-2186
> URL: https://issues.apache.org/jira/browse/HDDS-2186
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Affects Versions: HDDS-1564
>Reporter: Li Cheng
>Assignee: Li Cheng
>Priority: Major
>  Labels: flaky-test
>
> After enabling multi-raft, MiniOzoneCluster seems to be flaky and reports a 
> bunch of 'out of memory' exceptions in Ratis. Sample stack traces are attached.
>  
> 2019-09-26 15:12:22,824 
> [2e1e11ca-833a-4fbc-b948-3d93fc8e7288@group-218F3868CEA9-SegmentedRaftLogWorker]
>  ERROR segmented.SegmentedRaftLogWorker 
> (SegmentedRaftLogWorker.java:run(323)) - 
> 2e1e11ca-833a-4fbc-b948-3d93fc8e7288@group-218F3868CEA9-SegmentedRaftLogWorker
>  hit exception
> java.lang.OutOfMemoryError: Direct buffer memory
>  at java.nio.Bits.reserveMemory(Bits.java:694)
>  at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
>  at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
>  at org.apache.ratis.server.raftlog.segmented.BufferedWriteChannel.<init>(BufferedWriteChannel.java:41)
>  at org.apache.ratis.server.raftlog.segmented.SegmentedRaftLogOutputStream.<init>(SegmentedRaftLogOutputStream.java:72)
>  at org.apache.ratis.server.raftlog.segmented.SegmentedRaftLogWorker$StartLogSegment.execute(SegmentedRaftLogWorker.java:566)
>  at org.apache.ratis.server.raftlog.segmented.SegmentedRaftLogWorker.run(SegmentedRaftLogWorker.java:289)
>  at java.lang.Thread.run(Thread.java:748)
>  
> which leads to:
> 2019-09-26 15:12:23,029 [RATISCREATEPIPELINE1] ERROR 
> pipeline.RatisPipelineProvider 
> (RatisPipelineProvider.java:lambda$null$2(181)) - Failed invoke Ratis rpc 
> org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider$$Lambda$297/1222454951@55d1e990
>  for c1f4d375-683b-42fe-983b-428a63aa8803
> org.apache.ratis.protocol.TimeoutIOException: deadline exceeded after 2999881264ns
>  at org.apache.ratis.grpc.GrpcUtil.tryUnwrapException(GrpcUtil.java:82)
>  at org.apache.ratis.grpc.GrpcUtil.unwrapException(GrpcUtil.java:75)
>  at org.apache.ratis.grpc.client.GrpcClientProtocolClient.blockingCall(GrpcClientProtocolClient.java:178)
>  at org.apache.ratis.grpc.client.GrpcClientProtocolClient.groupAdd(GrpcClientProtocolClient.java:147)
>  at org.apache.ratis.grpc.client.GrpcClientRpc.sendRequest(GrpcClientRpc.java:94)
>  at org.apache.ratis.client.impl.RaftClientImpl.sendRequest(RaftClientImpl.java:278)
>  at org.apache.ratis.client.impl.RaftClientImpl.groupAdd(RaftClientImpl.java:205)
>  at org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider.lambda$initializePipeline$1(RatisPipelineProvider.java:142)
>  at org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider.lambda$null$2(RatisPipelineProvider.java:177)
>  at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
>  at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
>  at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
>  at java.util.stream.ForEachOps$ForEachTask.compute(ForEachOps.java:291)
>  at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731)
>  at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
>  at java.util.concurrent.ForkJoinTask.doInvoke(ForkJoinTask.java:401)
>  at java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:734)
>  at java.util.stream.ForEachOps$ForEachOp.evaluateParallel(ForEachOps.java:160)
>  at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateParallel(ForEachOps.java:174)
>  at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
>  at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
>  at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:583)
>  at org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider.lambda$callRatisRpc$3(RatisPipelineProvider.java:171)
>  at 
> 

[jira] [Updated] (HDFS-14646) Standby NameNode should not upload fsimage to an inappropriate NameNode.

2019-10-12 Thread Xudong Cao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xudong Cao updated HDFS-14646:
--
Attachment: HDFS-14646.001.patch

> Standby NameNode should not upload fsimage to an inappropriate NameNode.
> 
>
> Key: HDFS-14646
> URL: https://issues.apache.org/jira/browse/HDFS-14646
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Major
>  Labels: multi-sbnn
> Attachments: HDFS-14646.000.patch, HDFS-14646.001.patch
>
>
> *Problem Description:*
>  In the multi-NameNode scenario, when an SNN uploads an FsImage, it will put 
> the image to all other NNs (whether the peer NN is an ANN or not), and even 
> if the peer NN immediately replies with an error (such as 
> TransferResult.NOT_ACTIVE_NAMENODE_FAILURE, 
> TransferResult.OLD_TRANSACTION_ID_FAILURE, etc.), the local SNN will not 
> terminate the put process immediately, but will put the FsImage completely to 
> the peer NN, and will not read the peer NN's reply until the put is completed.
> Depending on the version of Jetty, this behavior can lead to different 
> consequences: 
> *1. Under Hadoop 2.7.2 (with Jetty 6.1.26)*
>  After the peer NN calls HttpServletResponse.sendError(), the underlying TCP 
> connection will still be established, and the data the SNN sends will be read 
> by the Jetty framework itself on the peer NN side, so the SNN will 
> pointlessly keep sending the FsImage to the peer NN, wasting time and 
> bandwidth. In a relatively large HDFS cluster, the FsImage can often 
> reach about 30 GB, which is indeed a big waste.
> *2. Under the newest release-3.2.0-RC1 (with Jetty 9.3.24) and trunk (with 
> Jetty 9.3.27)*
>  After the peer NN calls HttpServletResponse.sendError(), the underlying TCP 
> connection will be auto-closed, and the SNN will directly get an "Error 
> writing request body to server" exception, as below; note this test needs a 
> relatively big FsImage (e.g. at the 10 MB level):
> {code:java}
> 2019-08-17 03:59:25,413 INFO namenode.TransferFsImage: Sending fileName: 
> /tmp/hadoop-root/dfs/name/current/fsimage_3364240, fileSize: 
> 9864721. Sent total: 524288 bytes. Size of last segment intended to send: 
> 4096 bytes.
>  java.io.IOException: Error writing request body to server
>  at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3587)
>  at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3570)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.copyFileToStream(TransferFsImage.java:396)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.writeFileToPutRequest(TransferFsImage.java:340)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.uploadImage(TransferFsImage.java:314)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.uploadImageFromStorage(TransferFsImage.java:249)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$1.call(StandbyCheckpointer.java:277)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$1.call(StandbyCheckpointer.java:272)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
>  2019-08-17 03:59:25,422 INFO namenode.TransferFsImage: Sending fileName: 
> /tmp/hadoop-root/dfs/name/current/fsimage_3364240, fileSize: 
> 9864721. Sent total: 851968 bytes. Size of last segment intended to send: 
> 4096 bytes.
>  java.io.IOException: Error writing request body to server
>  at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3587)
>  at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3570)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.copyFileToStream(TransferFsImage.java:396)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.writeFileToPutRequest(TransferFsImage.java:340)
>   {code}
>                   
> *Solution:*
>  A standby NameNode should not upload an fsimage to an inappropriate 
> NameNode: when it plans to put an FsImage to a peer NN, it needs to check 
> whether it really needs to do so at this time.
> In detail, the local SNN should establish an HTTP connection with the peer 
> NN, send the put request, and then immediately read the response (this is the 
> key point). If the peer NN does not reply with an HTTP_OK, the local SNN 
> should not put 
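
For illustration, a minimal sketch of the "read the reply before streaming the 
body" idea described above, using plain java.net.HttpURLConnection with an 
Expect: 100-continue header. This is an assumed shape, not the actual 
HDFS-14646 patch; the class and method names are hypothetical:

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.ProtocolException;
import java.net.URL;

public final class ImageUploadSketch {
  /** Stream the fsimage body only after the peer NN has accepted the PUT. */
  static void uploadIfAccepted(URL putUrl, InputStream image, long imageSize)
      throws IOException {
    HttpURLConnection conn = (HttpURLConnection) putUrl.openConnection();
    conn.setRequestMethod("PUT");
    conn.setDoOutput(true);
    conn.setFixedLengthStreamingMode(imageSize);
    // Ask the server to vet the request headers before we send the body.
    conn.setRequestProperty("Expect", "100-continue");
    OutputStream body;
    try {
      // In streaming mode with Expect: 100-continue, the JDK client fails
      // here if the peer replies with an error instead of "100 Continue".
      body = conn.getOutputStream();
    } catch (ProtocolException rejected) {
      conn.disconnect();
      return; // peer NN refused the upload: skip the (possibly huge) transfer
    }
    byte[] buf = new byte[64 * 1024];
    int n;
    while ((n = image.read(buf)) != -1) {
      body.write(buf, 0, n); // only reached once the peer has accepted
    }
    body.close();
    if (conn.getResponseCode() != HttpURLConnection.HTTP_OK) {
      throw new IOException("Image upload failed: " + conn.getResponseMessage());
    }
  }
}
{code}

The essential design point is the same as in the description: no image bytes 
leave the SNN until the peer has had a chance to reject the request.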

[jira] [Updated] (HDFS-14646) Standby NameNode should not upload fsimage to an inappropriate NameNode.

2019-10-12 Thread Xudong Cao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xudong Cao updated HDFS-14646:
--
Attachment: (was: HDFS-14646.001.patch)

> Standby NameNode should not upload fsimage to an inappropriate NameNode.
> 
>
> Key: HDFS-14646
> URL: https://issues.apache.org/jira/browse/HDFS-14646
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Major
>  Labels: multi-sbnn
> Attachments: HDFS-14646.000.patch, HDFS-14646.001.patch
>
>
> *Problem Description:*
>  In the multi-NameNode scenario, when an SNN uploads an FsImage, it will put 
> the image to all other NNs (whether the peer NN is an ANN or not), and even 
> if the peer NN immediately replies with an error (such as 
> TransferResult.NOT_ACTIVE_NAMENODE_FAILURE, 
> TransferResult.OLD_TRANSACTION_ID_FAILURE, etc.), the local SNN will not 
> terminate the put process immediately, but will put the FsImage completely to 
> the peer NN, and will not read the peer NN's reply until the put is completed.
> Depending on the version of Jetty, this behavior can lead to different 
> consequences: 
> *1. Under Hadoop 2.7.2 (with Jetty 6.1.26)*
>  After the peer NN calls HttpServletResponse.sendError(), the underlying TCP 
> connection will still be established, and the data the SNN sends will be read 
> by the Jetty framework itself on the peer NN side, so the SNN will 
> pointlessly keep sending the FsImage to the peer NN, wasting time and 
> bandwidth. In a relatively large HDFS cluster, the FsImage can often 
> reach about 30 GB, which is indeed a big waste.
> *2. Under the newest release-3.2.0-RC1 (with Jetty 9.3.24) and trunk (with 
> Jetty 9.3.27)*
>  After the peer NN calls HttpServletResponse.sendError(), the underlying TCP 
> connection will be auto-closed, and the SNN will directly get an "Error 
> writing request body to server" exception, as below; note this test needs a 
> relatively big FsImage (e.g. at the 10 MB level):
> {code:java}
> 2019-08-17 03:59:25,413 INFO namenode.TransferFsImage: Sending fileName: 
> /tmp/hadoop-root/dfs/name/current/fsimage_3364240, fileSize: 
> 9864721. Sent total: 524288 bytes. Size of last segment intended to send: 
> 4096 bytes.
>  java.io.IOException: Error writing request body to server
>  at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3587)
>  at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3570)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.copyFileToStream(TransferFsImage.java:396)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.writeFileToPutRequest(TransferFsImage.java:340)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.uploadImage(TransferFsImage.java:314)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.uploadImageFromStorage(TransferFsImage.java:249)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$1.call(StandbyCheckpointer.java:277)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$1.call(StandbyCheckpointer.java:272)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
>  2019-08-17 03:59:25,422 INFO namenode.TransferFsImage: Sending fileName: 
> /tmp/hadoop-root/dfs/name/current/fsimage_3364240, fileSize: 
> 9864721. Sent total: 851968 bytes. Size of last segment intended to send: 
> 4096 bytes.
>  java.io.IOException: Error writing request body to server
>  at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3587)
>  at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3570)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.copyFileToStream(TransferFsImage.java:396)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.writeFileToPutRequest(TransferFsImage.java:340)
>   {code}
>                   
> *Solution:*
>  A standby NameNode should not upload an fsimage to an inappropriate 
> NameNode: when it plans to put an FsImage to a peer NN, it needs to check 
> whether it really needs to do so at this time.
> In detail, the local SNN should establish an HTTP connection with the peer 
> NN, send the put request, and then immediately read the response (this is the 
> key point). If the peer NN does not reply with an HTTP_OK, the local SNN 
> should 

[jira] [Assigned] (HDDS-2219) Move all the ozone dist scripts/configs to one location

2019-10-12 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YiSheng Lien reassigned HDDS-2219:
--

Assignee: YiSheng Lien

> Move all the ozone dist scripts/configs to one location
> ---
>
> Key: HDDS-2219
> URL: https://issues.apache.org/jira/browse/HDDS-2219
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: build
>Reporter: Marton Elek
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie
>
> The Hadoop distribution tar file contains jar files, scripts, and default 
> configuration files.
> The scripts and configuration files are stored in multiple separate projects 
> for no particular reason:
> {code:java}
> ls hadoop-hdds/common/src/main/bin/
> hadoop-config.cmd  hadoop-config.sh  hadoop-daemons.sh  hadoop-functions.sh  
> workers.sh
> ls hadoop-ozone/common/src/main/bin 
> ozone  ozone-config.sh  start-ozone.sh  stop-ozone.sh
> ls hadoop-ozone/common/src/main/shellprofile.d 
> hadoop-ozone.sh
> ls hadoop-ozone/dist/src/main/conf 
> dn-audit-log4j2.properties  log4j.properties  om-audit-log4j2.properties  
> ozone-shell-log4j.properties  ozone-site.xml  scm-audit-log4j2.properties
>  {code}
> All of these scripts can be moved to hadoop-ozone/dist/src/shell.
> hadoop-ozone/dist/dev-support/bin/dist-layout-stitching should also be 
> updated to copy all of them to the right place in the tar.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14905) Backport HDFS persistent memory read cache support to branch-3.2

2019-10-12 Thread Feilong He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feilong He updated HDFS-14905:
--
Attachment: HDFS-14905-branch-3.2-000.patch
Status: Patch Available  (was: Open)

> Backport HDFS persistent memory read cache support to branch-3.2
> 
>
> Key: HDFS-14905
> URL: https://issues.apache.org/jira/browse/HDFS-14905
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14905-branch-3.2-000.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14646) Standby NameNode should not upload fsimage to an inappropriate NameNode.

2019-10-12 Thread Xudong Cao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xudong Cao updated HDFS-14646:
--
Status: Open  (was: Patch Available)

> Standby NameNode should not upload fsimage to an inappropriate NameNode.
> 
>
> Key: HDFS-14646
> URL: https://issues.apache.org/jira/browse/HDFS-14646
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Major
>  Labels: multi-sbnn
> Attachments: HDFS-14646.000.patch, HDFS-14646.001.patch
>
>
> *Problem Description:*
>  In the multi-NameNode scenario, when an SNN uploads an FsImage, it will put 
> the image to all other NNs (whether the peer NN is an ANN or not), and even 
> if the peer NN immediately replies with an error (such as 
> TransferResult.NOT_ACTIVE_NAMENODE_FAILURE, 
> TransferResult.OLD_TRANSACTION_ID_FAILURE, etc.), the local SNN will not 
> terminate the put process immediately, but will put the FsImage completely to 
> the peer NN, and will not read the peer NN's reply until the put is completed.
> Depending on the version of Jetty, this behavior can lead to different 
> consequences: 
> *1. Under Hadoop 2.7.2 (with Jetty 6.1.26)*
>  After the peer NN calls HttpServletResponse.sendError(), the underlying TCP 
> connection will still be established, and the data the SNN sends will be read 
> by the Jetty framework itself on the peer NN side, so the SNN will 
> pointlessly keep sending the FsImage to the peer NN, wasting time and 
> bandwidth. In a relatively large HDFS cluster, the FsImage can often 
> reach about 30 GB, which is indeed a big waste.
> *2. Under the newest release-3.2.0-RC1 (with Jetty 9.3.24) and trunk (with 
> Jetty 9.3.27)*
>  After the peer NN calls HttpServletResponse.sendError(), the underlying TCP 
> connection will be auto-closed, and the SNN will directly get an "Error 
> writing request body to server" exception, as below; note this test needs a 
> relatively big FsImage (e.g. at the 10 MB level):
> {code:java}
> 2019-08-17 03:59:25,413 INFO namenode.TransferFsImage: Sending fileName: 
> /tmp/hadoop-root/dfs/name/current/fsimage_3364240, fileSize: 
> 9864721. Sent total: 524288 bytes. Size of last segment intended to send: 
> 4096 bytes.
>  java.io.IOException: Error writing request body to server
>  at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3587)
>  at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3570)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.copyFileToStream(TransferFsImage.java:396)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.writeFileToPutRequest(TransferFsImage.java:340)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.uploadImage(TransferFsImage.java:314)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.uploadImageFromStorage(TransferFsImage.java:249)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$1.call(StandbyCheckpointer.java:277)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$1.call(StandbyCheckpointer.java:272)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
>  2019-08-17 03:59:25,422 INFO namenode.TransferFsImage: Sending fileName: 
> /tmp/hadoop-root/dfs/name/current/fsimage_3364240, fileSize: 
> 9864721. Sent total: 851968 bytes. Size of last segment intended to send: 
> 4096 bytes.
>  java.io.IOException: Error writing request body to server
>  at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3587)
>  at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3570)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.copyFileToStream(TransferFsImage.java:396)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.writeFileToPutRequest(TransferFsImage.java:340)
>   {code}
>                   
> *Solution:*
>  A standby NameNode should not upload an fsimage to an inappropriate 
> NameNode: when it plans to put an FsImage to a peer NN, it needs to check 
> whether it really needs to do so at this time.
> In detail, the local SNN should establish an HTTP connection with the peer 
> NN, send the put request, and then immediately read the response (this is the 
> key point). If the peer NN does not reply with an HTTP_OK, the local SNN 
> should not put 

[jira] [Updated] (HDFS-14646) Standby NameNode should not upload fsimage to an inappropriate NameNode.

2019-10-12 Thread Xudong Cao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xudong Cao updated HDFS-14646:
--
Status: Patch Available  (was: Open)

> Standby NameNode should not upload fsimage to an inappropriate NameNode.
> 
>
> Key: HDFS-14646
> URL: https://issues.apache.org/jira/browse/HDFS-14646
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Major
>  Labels: multi-sbnn
> Attachments: HDFS-14646.000.patch, HDFS-14646.001.patch
>
>
> *Problem Description:*
>  In the multi-NameNode scenario, when an SNN uploads an FsImage, it will put 
> the image to all other NNs (whether the peer NN is an ANN or not), and even 
> if the peer NN immediately replies with an error (such as 
> TransferResult.NOT_ACTIVE_NAMENODE_FAILURE, 
> TransferResult.OLD_TRANSACTION_ID_FAILURE, etc.), the local SNN will not 
> terminate the put process immediately, but will put the FsImage completely to 
> the peer NN, and will not read the peer NN's reply until the put is completed.
> Depending on the version of Jetty, this behavior can lead to different 
> consequences: 
> *1. Under Hadoop 2.7.2 (with Jetty 6.1.26)*
>  After the peer NN calls HttpServletResponse.sendError(), the underlying TCP 
> connection will still be established, and the data the SNN sends will be read 
> by the Jetty framework itself on the peer NN side, so the SNN will 
> pointlessly keep sending the FsImage to the peer NN, wasting time and 
> bandwidth. In a relatively large HDFS cluster, the FsImage can often 
> reach about 30 GB, which is indeed a big waste.
> *2. Under the newest release-3.2.0-RC1 (with Jetty 9.3.24) and trunk (with 
> Jetty 9.3.27)*
>  After the peer NN calls HttpServletResponse.sendError(), the underlying TCP 
> connection will be auto-closed, and the SNN will directly get an "Error 
> writing request body to server" exception, as below; note this test needs a 
> relatively big FsImage (e.g. at the 10 MB level):
> {code:java}
> 2019-08-17 03:59:25,413 INFO namenode.TransferFsImage: Sending fileName: 
> /tmp/hadoop-root/dfs/name/current/fsimage_3364240, fileSize: 
> 9864721. Sent total: 524288 bytes. Size of last segment intended to send: 
> 4096 bytes.
>  java.io.IOException: Error writing request body to server
>  at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3587)
>  at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3570)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.copyFileToStream(TransferFsImage.java:396)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.writeFileToPutRequest(TransferFsImage.java:340)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.uploadImage(TransferFsImage.java:314)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.uploadImageFromStorage(TransferFsImage.java:249)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$1.call(StandbyCheckpointer.java:277)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$1.call(StandbyCheckpointer.java:272)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
>  2019-08-17 03:59:25,422 INFO namenode.TransferFsImage: Sending fileName: 
> /tmp/hadoop-root/dfs/name/current/fsimage_3364240, fileSize: 
> 9864721. Sent total: 851968 bytes. Size of last segment intended to send: 
> 4096 bytes.
>  java.io.IOException: Error writing request body to server
>  at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3587)
>  at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3570)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.copyFileToStream(TransferFsImage.java:396)
>  at 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage.writeFileToPutRequest(TransferFsImage.java:340)
>   {code}
>                   
> *Solution:*
>  A standby NameNode should not upload an fsimage to an inappropriate 
> NameNode: when it plans to put an FsImage to a peer NN, it needs to check 
> whether it really needs to do so at this time.
> In detail, the local SNN should establish an HTTP connection with the peer 
> NN, send the put request, and then immediately read the response (this is the 
> key point). If the peer NN does not reply with an HTTP_OK, the local SNN 
> should not put 

[jira] [Created] (HDFS-14905) Backport HDFS persistent memory read cache support to branch-3.2

2019-10-12 Thread Feilong He (Jira)
Feilong He created HDFS-14905:
-

 Summary: Backport HDFS persistent memory read cache support to 
branch-3.2
 Key: HDFS-14905
 URL: https://issues.apache.org/jira/browse/HDFS-14905
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: caching, datanode
Reporter: Feilong He
Assignee: Feilong He
 Fix For: 3.3.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2276) Allow users to pass hostnames or IP when decommissioning nodes

2019-10-12 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YiSheng Lien reassigned HDDS-2276:
--

Assignee: YiSheng Lien

> Allow users to pass hostnames or IP when decommissioning nodes
> --
>
> Key: HDDS-2276
> URL: https://issues.apache.org/jira/browse/HDDS-2276
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Affects Versions: 0.5.0
>Reporter: Stephen O'Donnell
>Assignee: YiSheng Lien
>Priority: Major
>
> In the initial implementation, the user must pass either a hostname or an IP 
> when decommissioning a host, depending on the setting
> dfs.datanode.use.datanode.hostname.
> It would be better if the user could pass either a hostname or an IP.
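
For illustration, a minimal sketch of accepting either form. The helper 
(DecommissionAddress) is hypothetical and not from the issue; a real 
implementation would likely also match against SCM's registered node list:

{code:java}
import java.net.InetAddress;
import java.net.UnknownHostException;

/** Hypothetical helper: accept a hostname or an IP literal from the CLI and
 *  expose both forms, so a node lookup can match either registration key. */
public final class DecommissionAddress {
  private final String hostname;
  private final String ip;

  private DecommissionAddress(String hostname, String ip) {
    this.hostname = hostname;
    this.ip = ip;
  }

  public static DecommissionAddress of(String hostOrIp)
      throws UnknownHostException {
    // getByName() accepts either a hostname or an IPv4/IPv6 literal.
    InetAddress addr = InetAddress.getByName(hostOrIp);
    return new DecommissionAddress(addr.getCanonicalHostName(),
        addr.getHostAddress());
  }

  public String hostname() { return hostname; }
  public String ip() { return ip; }
}
{code}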



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14739) RBF: LS command for mount point shows wrong owner and permission information.

2019-10-12 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949923#comment-16949923
 ] 

Ayush Saxena commented on HDFS-14739:
-

{quote}can we just make the whole locations part to go within the try/catch so 
we don't need to have a separate definition of List<RemoteLocation> locations?
{quote}
I think this is not addressed.

IIUC this meant something like this:
{code:java}
  private List<RemoteResult<RemoteLocation, DirectoryListing>> getListingInt(
      String src, byte[] startAfter, boolean needLocation) throws IOException {
    try {
      List<RemoteLocation> locations =
          rpcServer.getLocationsForPath(src, false, false);
      // Locate the dir and fetch the listing.
      RemoteMethod method = new RemoteMethod("getListing",
          new Class<?>[] {String.class, startAfter.getClass(), boolean.class},
          new RemoteParam(), startAfter, needLocation);
      List<RemoteResult<RemoteLocation, DirectoryListing>> listings =
          rpcClient.invokeConcurrent(locations, method, false, -1,
              DirectoryListing.class);
      return listings;
    } catch (RouterResolveException e) {
      LOG.debug("Cannot get locations for {}, {}.", src, e.getMessage());
      return new ArrayList<>();
    }
  }
{code}
[~elgoiri] can you confirm once?

{{TestRouterWebHDFSContractSeek}} seems unrelated; it looks like it was broken 
by HADOOP-15870. I need to check; we should track that separately too.

> RBF: LS command for mount point shows wrong owner and permission information.
> -
>
> Key: HDFS-14739
> URL: https://issues.apache.org/jira/browse/HDFS-14739
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: xuzq
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-14739-trunk-001.patch, HDFS-14739-trunk-002.patch, 
> HDFS-14739-trunk-003.patch, HDFS-14739-trunk-004.patch, 
> HDFS-14739-trunk-005.patch, HDFS-14739-trunk-006.patch, 
> HDFS-14739-trunk-007.patch, HDFS-14739-trunk-008.patch, 
> HDFS-14739-trunk-009.patch, HDFS-14739-trunk-010.patch, 
> image-2019-08-16-17-15-50-614.png, image-2019-08-16-17-16-00-863.png, 
> image-2019-08-16-17-16-34-325.png
>
>
> ||source||target namespace||destination||owner||group||permission||
> |/mnt|ns0|/mnt|mnt|mnt_group|755|
> |/mnt/test1|ns1|/mnt/test1|mnt_test1|mnt_test1_group|755|
> |/test1|ns1|/test1|test1|test1_group|755|
> When doing getListing("/mnt"), the owner of */mnt/test1* should be 
> *mnt_test1* instead of *test1* in the result.
>  
> And with the mount table as below, we should support getListing("/mnt") 
> instead of throwing an IOException when 
> dfs.federation.router.default.nameservice.enable is false.
> ||source||target namespace||destination||owner||group||permission||
> |/mnt/test1|ns0|/mnt/test1|test1|test1|755|
> |/mnt/test2|ns1|/mnt/test2|test2|test2|755|
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1737) Add Volume check in KeyManager and File Operations

2019-10-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1737?focusedWorklogId=327219=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-327219
 ]

ASF GitHub Bot logged work on HDDS-1737:


Author: ASF GitHub Bot
Created on: 12/Oct/19 06:09
Start Date: 12/Oct/19 06:09
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1559: HDDS-1737. Add 
Volume check in KeyManager and File Operations.
URL: https://github.com/apache/hadoop/pull/1559#issuecomment-541289964
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 47 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 40 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 44 | hadoop-ozone in trunk failed. |
   | -1 | compile | 21 | hadoop-hdds in trunk failed. |
   | -1 | compile | 16 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 66 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 962 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 22 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 20 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1068 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 39 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 20 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 35 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 40 | hadoop-ozone in the patch failed. |
   | -1 | compile | 25 | hadoop-hdds in the patch failed. |
   | -1 | compile | 20 | hadoop-ozone in the patch failed. |
   | -1 | javac | 25 | hadoop-hdds in the patch failed. |
   | -1 | javac | 20 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 64 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 828 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 21 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 19 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 33 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 21 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 30 | hadoop-hdds in the patch failed. |
   | -1 | unit | 26 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 2607 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.3 Server=19.03.3 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1559/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1559 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux c11d68035896 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c561a70 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1559/5/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1559/5/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1559/5/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1559/5/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1559/5/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1559/5/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1559/5/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1559/5/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1559/5/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1559/5/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile |