[jira] [Assigned] (HDDS-1571) Create an interface for pipeline placement policy to support network topologies

2019-09-02 Thread Li Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Cheng reassigned HDDS-1571:
--

Assignee: Li Cheng  (was: Sammi Chen)

> Create an interface for pipeline placement policy to support network 
> topologies
> ---
>
> Key: HDDS-1571
> URL: https://issues.apache.org/jira/browse/HDDS-1571
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Siddharth Wagle
>Assignee: Li Cheng
>Priority: Major
>
> Leverage the work done in HDDS-700 for pipeline creation for open containers.
> Create an interface that can provide different policy implementations for 
> pipeline creation. The default implementation should handle the case where no 
> network topology information is configured.
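
For illustration only, a minimal sketch of what such a pluggable policy interface could look like; the interface name, method signature, and parameters below are assumptions, not the API committed for this issue (DatanodeDetails and SCMException are existing HDDS types):

{code}
// Hypothetical sketch of a pluggable pipeline placement policy; names and
// signature are illustrative assumptions, not the committed interface.
import java.util.List;

import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.hdds.scm.exceptions.SCMException;

public interface PipelinePlacementPolicy {

  /**
   * Choose datanodes for a new pipeline.
   *
   * @param excludedNodes datanodes that must not be used
   * @param nodesRequired number of datanodes needed (e.g. 3 for a Ratis-3 pipeline)
   * @return the chosen datanodes
   */
  List<DatanodeDetails> chooseDatanodes(List<DatanodeDetails> excludedNodes,
      int nodesRequired) throws SCMException;
}
{code}

A topology-aware implementation could spread pipeline members across racks, while the default (see HDDS-1577 below) could simply pick healthy nodes at random.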






[jira] [Created] (HDDS-2076) Read fails because the block cannot be located in the container

2019-09-02 Thread Mukul Kumar Singh (Jira)
Mukul Kumar Singh created HDDS-2076:
---

 Summary: Read fails because the block cannot be located in the 
container
 Key: HDDS-2076
 URL: https://issues.apache.org/jira/browse/HDDS-2076
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client, Ozone Datanode
Affects Versions: 0.4.0
Reporter: Mukul Kumar Singh
 Attachments: log.zip

Read fails as the client is not able to read the block from the container.

{code}
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: 
Unable to find the block with bcsID 2515 .Container 7 bcsId is 0.
at 
org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.validateContainerResponse(ContainerProtocolCalls.java:536)
at 
org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.lambda$getValidatorList$0(ContainerProtocolCalls.java:569)
2019-08-30 12:51:20,081 | INFO  | SCMAudit | user=msingh | ip=192.168.0.103
{code}


The client eventually exits here
{code}
2019-08-30 12:51:20,081 [pool-224-thread-6] ERROR ozone.MiniOzoneLoadGenerator 
(MiniOzoneLoadGenerator.java:readData(176)) - LOADGEN: Read 
key:pool-224-thread-6_330651 failed with ex
ception
ERROR ozone.MiniOzoneLoadGenerator (MiniOzoneLoadGenerator.java:load(121)) - 
LOADGEN: Exiting due to exception
{code}
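
The first stack trace suggests the block's commit sequence id (bcsID 2515) is ahead of the bcsId persisted by container 7 (0). Purely as an illustration of that kind of consistency check, and not the actual Ozone code (class and method names are hypothetical):

{code}
// Illustrative sketch only: a block written with commit sequence id N can only
// be served by a container replica whose persisted bcsId has caught up to N.
import java.io.IOException;

final class BcsIdCheckSketch {
  static void checkBlockCommitSequenceId(long requestedBcsId, long containerBcsId,
      long containerId) throws IOException {
    if (containerBcsId < requestedBcsId) {
      throw new IOException("Unable to find the block with bcsID " + requestedBcsId
          + ". Container " + containerId + " bcsId is " + containerBcsId + ".");
    }
  }
}
{code}

If that is the failure mode here, the replica serving the read has not yet applied the transactions that committed the block.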






[jira] [Commented] (HDFS-14630) Configuration.getTimeDurationHelper() should not log time unit warning in info log.

2019-09-02 Thread Surendra Singh Lilhore (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921192#comment-16921192
 ] 

Surendra Singh Lilhore commented on HDFS-14630:
---

+1 LGTM

> Configuration.getTimeDurationHelper() should not log time unit warning in 
> info log.
> ---
>
> Key: HDFS-14630
> URL: https://issues.apache.org/jira/browse/HDFS-14630
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: hemanthboyina
>Priority: Minor
> Attachments: HDFS-14630.001.patch, HDFS-14630.patch
>
>
> To solve the [HDFS-12920|https://issues.apache.org/jira/browse/HDFS-12920] issue, 
> we configured "dfs.client.datanode-restart.timeout" without a time unit. Now the 
> log file is full of
> {noformat}
> 2019-06-22 20:13:14,605 | INFO  | pool-12-thread-1 | No unit for 
> dfs.client.datanode-restart.timeout(30) assuming SECONDS 
> org.apache.hadoop.conf.Configuration.logDeprecation(Configuration.java:1409){noformat}
> There is no need to log this; just describe the behavior in the property description.
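
As a side note, a small sketch (property name from this issue, values illustrative) showing that the message is only emitted when the configured value has no unit suffix, so an explicit suffix also avoids it:

{code}
// Sketch: Configuration.getTimeDuration() logs "No unit for ... assuming ..."
// only when the value lacks a unit suffix; "30s" below is an illustrative value.
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.conf.Configuration;

public class TimeoutUnitExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Equivalent to setting the property in hdfs-site.xml with a unit suffix.
    conf.set("dfs.client.datanode-restart.timeout", "30s");
    long timeoutMs = conf.getTimeDuration(
        "dfs.client.datanode-restart.timeout", 30_000L, TimeUnit.MILLISECONDS);
    System.out.println("timeout = " + timeoutMs + " ms");
  }
}
{code}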






[jira] [Commented] (HDFS-14807) SetTimes updates all negative values apart from -1

2019-09-02 Thread Vinayakumar B (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921184#comment-16921184
 ] 

Vinayakumar B commented on HDFS-14807:
--

+1

> SetTimes updates all negative values apart from -1
> --
>
> Key: HDFS-14807
> URL: https://issues.apache.org/jira/browse/HDFS-14807
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Harshakiran Reddy
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14807-01.patch, HDFS-14807-02.patch
>
>
> The setTimes API updates the time to the negative value for all negative inputs apart from -1.
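
For illustration, a minimal sketch of the kind of argument check this implies (hypothetical helper, not the actual patch); -1 is the documented sentinel for "do not change this time":

{code}
// Hypothetical validation sketch, not the HDFS-14807 patch itself: -1 means
// "leave the time unchanged", so every other negative value should be rejected
// instead of being stored as a negative timestamp.
final class SetTimesValidationSketch {
  static void validateTimes(long mtime, long atime) {
    if (mtime < -1 || atime < -1) {
      throw new IllegalArgumentException("mtime and atime must be >= 0, or -1 to"
          + " leave the value unchanged (got mtime=" + mtime
          + ", atime=" + atime + ")");
    }
  }
}
{code}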






[jira] [Commented] (HDFS-14762) "Path(Path/String parent, String child)" will fail when "child" contains ":"

2019-09-02 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921182#comment-16921182
 ] 

hemanthboyina commented on HDFS-14762:
--

Updated the patch; no test failures now.

> "Path(Path/String parent, String child)" will fail when "child" contains ":"
> 
>
> Key: HDFS-14762
> URL: https://issues.apache.org/jira/browse/HDFS-14762
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Shixiong Zhu
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14762.001.patch, HDFS-14762.002.patch, 
> HDFS-14762.003.patch
>
>
> When the "child" parameter contains ":", "Path(Path/String parent, String 
> child)" will throw the following exception:
> {code}
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
> path in absolute URI: ...
> {code}
> Not sure if this is a legitimate bug, but the following places will hit this error 
> when given a Path with a file name containing ":":
> https://github.com/apache/hadoop/blob/f9029c4070e8eb046b403f5cb6d0a132c5d58448/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java#L101
> https://github.com/apache/hadoop/blob/f9029c4070e8eb046b403f5cb6d0a132c5d58448/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Globber.java#L270
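
For illustration, a minimal reproduction sketch (the child name below is made up):

{code}
// Reproduction sketch (made-up child name): the colon makes the child look like
// a URI with a scheme, so the two-argument Path constructor throws
// IllegalArgumentException caused by URISyntaxException
// ("Relative path in absolute URI").
import org.apache.hadoop.fs.Path;

public class PathColonRepro {
  public static void main(String[] args) {
    Path parent = new Path("/tmp/data");
    Path child = new Path(parent, "part-00000:2019-09-02"); // throws here
    System.out.println(child);
  }
}
{code}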






[jira] [Commented] (HDFS-14762) "Path(Path/String parent, String child)" will fail when "child" contains ":"

2019-09-02 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921179#comment-16921179
 ] 

Hadoop QA commented on HDFS-14762:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
9s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14762 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12979147/HDFS-14762.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5705af42c75e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 915cbc9 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27765/artifact/out/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27765/testReport/ |
| Max. process+thread count | 1656 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27765/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |



[jira] [Commented] (HDFS-12212) Options.Rename.To_TRASH is considered even when Options.Rename.NONE is specified

2019-09-02 Thread Vinayakumar B (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-12212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921176#comment-16921176
 ] 

Vinayakumar B commented on HDFS-12212:
--

Thanks [~ayushtkn] for the review and commit.

Thanks [~hanishakoneru] and [~jojochuang] for the reviews.

> Options.Rename.To_TRASH is considered even when Options.Rename.NONE is 
> specified
> 
>
> Key: HDFS-12212
> URL: https://issues.apache.org/jira/browse/HDFS-12212
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.9.0, 2.7.4, 3.0.0-alpha1, 2.8.2
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.4
>
> Attachments: HDFS-12212-01.patch
>
>
> HDFS-8312 introduced {{Options.Rename.TO_TRASH}} to differentiate the 
> movement to trash and other renames for permission checks.
> Even when Options.Rename.NONE is passed, TO_TRASH is considered for the rename, 
> so the wrong permissions are checked.
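
For illustration, a sketch of the distinction at stake (hypothetical helper, not the NameNode rename code): the trash-specific permission path should only be taken when the caller actually passed TO_TRASH.

{code}
// Illustrative helper, not the actual FSDirRenameOp logic: the trash-rename
// permission check should apply only when Options.Rename.TO_TRASH was
// explicitly passed, not when the caller passed Options.Rename.NONE.
import java.util.Arrays;

import org.apache.hadoop.fs.Options;

final class RenameOptionSketch {
  static boolean isMoveToTrash(Options.Rename... options) {
    return Arrays.asList(options).contains(Options.Rename.TO_TRASH);
  }
}
{code}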






[jira] [Commented] (HDFS-14350) dfs.datanode.ec.reconstruction.threads not take effect

2019-09-02 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921173#comment-16921173
 ] 

Hadoop QA commented on HDFS-14350:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
52s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
49s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 87m 59s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}155m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation |
|   | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles |
|   | hadoop.hdfs.TestMultipleNNPortQOP |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-582/10/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/582 |
| JIRA Issue | HDFS-14350 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux b4895b1fe5a4 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |

[jira] [Assigned] (HDDS-1493) Download and Import Container replicator fails.

2019-09-02 Thread Mukul Kumar Singh (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh reassigned HDDS-1493:
---

Assignee: Nanda kumar  (was: Hrishikesh Gadre)

> Download and Import Container replicator fails.
> ---
>
> Key: HDDS-1493
> URL: https://issues.apache.org/jira/browse/HDDS-1493
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Aravindan Vijayan
>Assignee: Nanda kumar
>Priority: Blocker
> Attachments: ozone.log
>
>
> While running batch jobs (16 threads writing a lot of 10MB+ files), the 
> following error is seen in the SCM logs.
> {code}
> ERROR  - Can't import the downloaded container data id=317
> {code}
> It is unclear from the logs why this happens. Needs more investigation to 
> find the root cause.






[jira] [Assigned] (HDDS-1899) DeleteBlocksCommandHandler is unable to find the container in SCM

2019-09-02 Thread Mukul Kumar Singh (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh reassigned HDDS-1899:
---

Assignee: Nanda kumar  (was: Mukul Kumar Singh)

> DeleteBlocksCommandHandler is unable to find the container in SCM
> -
>
> Key: HDDS-1899
> URL: https://issues.apache.org/jira/browse/HDDS-1899
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Nanda kumar
>Priority: Major
>  Labels: MiniOzoneChaosCluster
>
> DeleteBlocksCommandHandler is unable to find a container in SCM.
> {code}
> 2019-08-02 14:04:56,735 WARN  commandhandler.DeleteBlocksCommandHandler 
> (DeleteBlocksCommandHandler.java:lambda$handle$0(140)) - Failed to delete 
> blocks for container=33, TXID=184
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  Unable to find the container 33
> at 
> org.apache.hadoop.ozone.container.common.statemachine.commandhandler.DeleteBlocksCommandHandler.lambda$handle$0(DeleteBlocksCommandHandler.java:122)
> at java.util.ArrayList.forEach(ArrayList.java:1257)
> at 
> java.util.Collections$UnmodifiableCollection.forEach(Collections.java:1080)
> at 
> org.apache.hadoop.ozone.container.common.statemachine.commandhandler.DeleteBlocksCommandHandler.handle(DeleteBlocksCommandHandler.java:114)
> at 
> org.apache.hadoop.ozone.container.common.statemachine.commandhandler.CommandDispatcher.handle(CommandDispatcher.java:93)
> at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$initCommandHandlerThread$1(DatanodeStateMachine.java:432)
> at java.lang.Thread.run(Thread.java:748)
> {code}






[jira] [Assigned] (HDDS-1899) DeleteBlocksCommandHandler is unable to find the container in SCM

2019-09-02 Thread Mukul Kumar Singh (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh reassigned HDDS-1899:
---

Assignee: Mukul Kumar Singh

> DeleteBlocksCommandHandler is unable to find the container in SCM
> -
>
> Key: HDDS-1899
> URL: https://issues.apache.org/jira/browse/HDDS-1899
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: MiniOzoneChaosCluster
>
> DeleteBlocksCommandHandler is unable to find a container in SCM.
> {code}
> 2019-08-02 14:04:56,735 WARN  commandhandler.DeleteBlocksCommandHandler 
> (DeleteBlocksCommandHandler.java:lambda$handle$0(140)) - Failed to delete 
> blocks for container=33, TXID=184
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  Unable to find the container 33
> at 
> org.apache.hadoop.ozone.container.common.statemachine.commandhandler.DeleteBlocksCommandHandler.lambda$handle$0(DeleteBlocksCommandHandler.java:122)
> at java.util.ArrayList.forEach(ArrayList.java:1257)
> at 
> java.util.Collections$UnmodifiableCollection.forEach(Collections.java:1080)
> at 
> org.apache.hadoop.ozone.container.common.statemachine.commandhandler.DeleteBlocksCommandHandler.handle(DeleteBlocksCommandHandler.java:114)
> at 
> org.apache.hadoop.ozone.container.common.statemachine.commandhandler.CommandDispatcher.handle(CommandDispatcher.java:93)
> at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$initCommandHandlerThread$1(DatanodeStateMachine.java:432)
> at java.lang.Thread.run(Thread.java:748)
> {code}






[jira] [Commented] (HDFS-14318) dn cannot be recognized and must be restarted to recognize the Repaired disk

2019-09-02 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921156#comment-16921156
 ] 

Hadoop QA commented on HDFS-14318:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
54s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 44s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 155 unchanged - 0 fixed = 156 total (was 155) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m  
3s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 31s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}174m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Possible doublecheck on 
org.apache.hadoop.hdfs.server.datanode.DataNode.checkDiskThread in 
org.apache.hadoop.hdfs.server.datanode.DataNode.startCheckDiskThread()  At 
DataNode.java:org.apache.hadoop.hdfs.server.datanode.DataNode.startCheckDiskThread()
  At DataNode.java:[lines 2211-2213] |
|  |  Null pointer dereference of DataNode.errorDisk in 
org.apache.hadoop.hdfs.server.datanode.DataNode.checkDiskError()  Dereferenced 
at DataNode.java:in 
org.apache.hadoop.hdfs.server.datanode.DataNode.checkDiskError()  Dereferenced 
at DataNode.java:[line 3486] |
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeHot

[jira] [Commented] (HDFS-14564) Add libhdfs APIs for readFully; add readFully to ByteBufferPositionedReadable

2019-09-02 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921153#comment-16921153
 ] 

Hadoop QA commented on HDFS-14564:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
56s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
32s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
57s{color} | {color:red} hadoop-hdfs in trunk failed. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
32s{color} | {color:blue} branch/hadoop-hdfs-project/hadoop-hdfs-native-client 
no findbugs output file (findbugsXml.xml) {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 18m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
47s{color} | {color:green} root: The patch generated 0 new + 110 unchanged - 1 
fixed = 110 total (was 111) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} hadoop-hdfs-project_hadoop-hdfs generated 0 new + 0 
unchanged - 38 fixed = 0 total (was 38) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-hdfs-native-client in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
3s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
26s{color} | {color:blue} hadoop-hdfs-project/hadoop-hdfs-native-client has no 
data from findbugs {color} |
|| || || || {color:brown} Other 

[jira] [Work logged] (HDDS-1577) Add default pipeline placement policy implementation

2019-09-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1577?focusedWorklogId=305389&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305389
 ]

ASF GitHub Bot logged work on HDDS-1577:


Author: ASF GitHub Bot
Created on: 03/Sep/19 03:13
Start Date: 03/Sep/19 03:13
Worklog Time Spent: 10m 
  Work Description: timmylicheng commented on pull request #1366: 
HDDS-1577. Add default pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305389)
Time Spent: 1.5h  (was: 1h 20m)

> Add default pipeline placement policy implementation
> 
>
> Key: HDDS-1577
> URL: https://issues.apache.org/jira/browse/HDDS-1577
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Siddharth Wagle
>Assignee: Li Cheng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> This is a simpler implementation of the PipelinePlacementPolicy that can be 
> utilized if no network topology is defined for the cluster. We try to form 
> pipelines from existing HEALTHY datanodes randomly, as long as they satisfy 
> PipelinePlacementCriteria.
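
A rough sketch of that fallback strategy follows; the types and the criteria check are simplified placeholders, not the code in the pull request:

{code}
// Simplified sketch of the "random HEALTHY nodes" fallback described above.
// Node identities are plain strings here; the real policy works on datanode
// descriptors and applies PipelinePlacementCriteria rather than a simple filter.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

final class RandomPipelinePlacementSketch {
  /** Pick nodesRequired healthy nodes at random, skipping excluded ones. */
  static List<String> choose(List<String> healthyNodes, List<String> excludedNodes,
      int nodesRequired) {
    List<String> candidates = new ArrayList<>(healthyNodes);
    candidates.removeAll(excludedNodes);
    if (candidates.size() < nodesRequired) {
      throw new IllegalStateException("Not enough healthy datanodes for a pipeline");
    }
    Collections.shuffle(candidates);
    return candidates.subList(0, nodesRequired);
  }
}
{code}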






[jira] [Work logged] (HDDS-1577) Add default pipeline placement policy implementation

2019-09-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1577?focusedWorklogId=305390&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305390
 ]

ASF GitHub Bot logged work on HDDS-1577:


Author: ASF GitHub Bot
Created on: 03/Sep/19 03:13
Start Date: 03/Sep/19 03:13
Worklog Time Spent: 10m 
  Work Description: timmylicheng commented on pull request #1366: 
HDDS-1577. Add default pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366
 
 
   #HDDS-1577
   Add pipeline placement policy
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305390)
Time Spent: 1h 40m  (was: 1.5h)

> Add default pipeline placement policy implementation
> 
>
> Key: HDDS-1577
> URL: https://issues.apache.org/jira/browse/HDDS-1577
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Siddharth Wagle
>Assignee: Li Cheng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> This is a simpler implementation of the PipelinePlacementPolicy that can be 
> utilized if no network topology is defined for the cluster. We try to form 
> pipelines from existing HEALTHY datanodes randomly, as long as they satisfy 
> PipelinePlacementCriteria.






[jira] [Assigned] (HDFS-13736) BlockPlacementPolicyDefault can not choose favored nodes when 'dfs.namenode.block-placement-policy.default.prefer-local-node' set to false

2019-09-02 Thread hu xiaodong (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hu xiaodong reassigned HDFS-13736:
--

Assignee: (was: hu xiaodong)

> BlockPlacementPolicyDefault can not choose favored nodes when 
> 'dfs.namenode.block-placement-policy.default.prefer-local-node' set to false
> --
>
> Key: HDFS-13736
> URL: https://issues.apache.org/jira/browse/HDFS-13736
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: hu xiaodong
>Priority: Major
> Attachments: HDFS-13736.001.patch
>
>
> BlockPlacementPolicyDefault can not choose favored nodes when 
> 'dfs.namenode.block-placement-policy.default.prefer-local-node' set to false. 
>  






[jira] [Assigned] (HDFS-13736) BlockPlacementPolicyDefault can not choose favored nodes when 'dfs.namenode.block-placement-policy.default.prefer-local-node' set to false

2019-09-02 Thread hu xiaodong (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hu xiaodong reassigned HDFS-13736:
--

Assignee: hu xiaodong

> BlockPlacementPolicyDefault can not choose favored nodes when 
> 'dfs.namenode.block-placement-policy.default.prefer-local-node' set to false
> --
>
> Key: HDFS-13736
> URL: https://issues.apache.org/jira/browse/HDFS-13736
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: hu xiaodong
>Assignee: hu xiaodong
>Priority: Major
> Attachments: HDFS-13736.001.patch
>
>
> BlockPlacementPolicyDefault can not choose favored nodes when 
> 'dfs.namenode.block-placement-policy.default.prefer-local-node' set to false. 
>  






[jira] [Commented] (HDFS-14478) Add libhdfs APIs for openFile

2019-09-02 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921129#comment-16921129
 ] 

Hadoop QA commented on HDFS-14478:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
57s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
35m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m 37s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed CTEST tests | test_test_libhdfs_ops_hdfs_static |
|   | test_test_libhdfs_threaded_hdfs_static |
|   | test_test_libhdfs_zerocopy_hdfs_static |
|   | test_test_native_mini_dfs |
|   | test_libhdfs_threaded_hdfspp_test_shim_static |
|   | test_hdfspp_mini_dfs_smoke_hdfspp_test_shim_static |
|   | libhdfs_mini_stress_valgrind_hdfspp_test_static |
|   | memcheck_libhdfs_mini_stress_valgrind_hdfspp_test_static |
|   | test_libhdfs_mini_stress_hdfspp_test_shim_static |
|   | test_hdfs_ext_hdfspp_test_shim_static |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-955/13/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/955 |
| JIRA Issue | HDFS-14478 |
| Optional Tests | dupname asflicense compile cc mvnsite javac unit |
| uname | Linux 92493e46c7c5 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 915cbc9 |
| Default Java | 1.8.0_222 |
| CTEST | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-955/13/artifact/out/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-ctest.txt
 |
| unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-955/13/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-955/13/testReport/ |
| Max. process+thread count | 414 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
|

[jira] [Work logged] (HDDS-1879) Support multiple excluded scopes when choosing datanodes in NetworkTopology

2019-09-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1879?focusedWorklogId=305381&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305381
 ]

ASF GitHub Bot logged work on HDDS-1879:


Author: ASF GitHub Bot
Created on: 03/Sep/19 02:52
Start Date: 03/Sep/19 02:52
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1194: HDDS-1879.  
Support multiple excluded scopes when choosing datanodes in NetworkTopology
URL: https://github.com/apache/hadoop/pull/1194#issuecomment-527284109
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 43 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 16 | Maven dependency ordering for branch |
   | +1 | mvninstall | 604 | trunk passed |
   | +1 | compile | 400 | trunk passed |
   | +1 | checkstyle | 80 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 866 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 189 | trunk passed |
   | 0 | spotbugs | 483 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 694 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | +1 | mvninstall | 573 | the patch passed |
   | +1 | compile | 397 | the patch passed |
   | +1 | javac | 397 | the patch passed |
   | -0 | checkstyle | 41 | hadoop-hdds: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 695 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 172 | the patch passed |
   | +1 | findbugs | 671 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 282 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1819 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 53 | The patch does not generate ASF License warnings. |
   | | | 7841 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.om.TestSecureOzoneManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1194/13/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1194 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux e8d26a2c59c0 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 915cbc9 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1194/13/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1194/13/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1194/13/testReport/ |
   | Max. process+thread count | 5264 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/server-scm U: hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1194/13/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305381)
Time Spent: 2.5h  (was: 2h 20m)

> Support multiple excluded scopes when choosing datanodes in NetworkTopology
> ---
>
> Key: HDDS-1879
> URL: https://issues.apache.org/jira/browse/HDDS-1879
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  

[jira] [Work logged] (HDDS-1810) SCM command to Activate and Deactivate pipelines

2019-09-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1810?focusedWorklogId=305380&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305380
 ]

ASF GitHub Bot logged work on HDDS-1810:


Author: ASF GitHub Bot
Created on: 03/Sep/19 02:49
Start Date: 03/Sep/19 02:49
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1224: HDDS-1810. SCM 
command to Activate and Deactivate pipelines.
URL: https://github.com/apache/hadoop/pull/1224#issuecomment-527283700
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 38 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 59 | Maven dependency ordering for branch |
   | +1 | mvninstall | 663 | trunk passed |
   | +1 | compile | 434 | trunk passed |
   | +1 | checkstyle | 85 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1023 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 212 | trunk passed |
   | 0 | spotbugs | 503 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 730 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 38 | Maven dependency ordering for patch |
   | +1 | mvninstall | 635 | the patch passed |
   | +1 | compile | 448 | the patch passed |
   | +1 | cc | 448 | the patch passed |
   | +1 | javac | 448 | the patch passed |
   | +1 | checkstyle | 88 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 822 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 192 | the patch passed |
   | +1 | findbugs | 667 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 305 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1705 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 50 | The patch does not generate ASF License warnings. |
   | | | 8367 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.om.TestSecureOzoneManager |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1224/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1224 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux fa4556860f0c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 915cbc9 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1224/9/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1224/9/testReport/ |
   | Max. process+thread count | 5324 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/client hadoop-hdds/common hadoop-hdds/server-scm 
hadoop-hdds/tools hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1224/9/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305380)
Time Spent: 3h  (was: 2h 50m)

> SCM command to Activate and Deactivate pipelines
> 
>
> Key: HDDS-1810
> URL: https://issues.apache.org/jira/browse/HDDS-1810
> Project: Hadoop Distributed Data Store
>  Issue Type: 

[jira] [Work logged] (HDDS-1909) Use new HA code for Non-HA in OM

2019-09-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1909?focusedWorklogId=305377&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305377
 ]

ASF GitHub Bot logged work on HDDS-1909:


Author: ASF GitHub Bot
Created on: 03/Sep/19 02:41
Start Date: 03/Sep/19 02:41
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1225: HDDS-1909. Use 
new HA code for Non-HA in OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-527282139
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 34 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 2 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 25 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 67 | Maven dependency ordering for branch |
   | +1 | mvninstall | 595 | trunk passed |
   | +1 | compile | 405 | trunk passed |
   | +1 | checkstyle | 81 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 849 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 177 | trunk passed |
   | 0 | spotbugs | 440 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 643 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 41 | Maven dependency ordering for patch |
   | +1 | mvninstall | 549 | the patch passed |
   | +1 | compile | 386 | the patch passed |
   | +1 | javac | 386 | the patch passed |
   | +1 | checkstyle | 87 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 1 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 740 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 204 | the patch passed |
   | +1 | findbugs | 726 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 298 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1792 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 47 | The patch does not generate ASF License warnings. |
   | | | 7992 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   |   | hadoop.ozone.om.TestSecureOzoneManager |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/26/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1225 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle shellcheck shelldocs |
   | uname | Linux 78dae72e3ceb 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 915cbc9 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/26/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/26/testReport/ |
   | Max. process+thread count | 5295 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/dist 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager 
hadoop-ozone/ozone-recon U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/26/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305377)
Time Spent: 16h  (was: 15h 50m)

> Use new HA code for Non-HA in OM
> 
>
> Key: HDDS-1909
> URL: https://issues.apache.org/jira/browse/HDDS-1909
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Vi

[jira] [Work logged] (HDDS-1054) List Multipart uploads in a bucket

2019-09-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?focusedWorklogId=305369&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305369
 ]

ASF GitHub Bot logged work on HDDS-1054:


Author: ASF GitHub Bot
Created on: 03/Sep/19 02:06
Start Date: 03/Sep/19 02:06
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1277: HDDS-1054. List 
Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#issuecomment-527276378
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 14 | Maven dependency ordering for branch |
   | +1 | mvninstall | 614 | trunk passed |
   | +1 | compile | 388 | trunk passed |
   | +1 | checkstyle | 99 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 922 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 178 | trunk passed |
   | 0 | spotbugs | 431 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 639 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | +1 | mvninstall | 561 | the patch passed |
   | +1 | compile | 423 | the patch passed |
   | +1 | cc | 423 | the patch passed |
   | +1 | javac | 423 | the patch passed |
   | +1 | checkstyle | 83 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 778 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 192 | the patch passed |
   | +1 | findbugs | 770 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 310 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2359 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 56 | The patch does not generate ASF License warnings. |
   | | | 8614 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.om.TestSecureOzoneManager |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1277 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 371835c48863 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 915cbc9 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/10/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/10/testReport/ |
   | Max. process+thread count | 4430 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/client 
hadoop-ozone/ozone-manager hadoop-ozone/s3gateway hadoop-ozone/dist U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/10/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305369)
Time Spent: 3h 40m  (was: 3.5h)

> List Multipart uploads in a bucket
> --
>
> Key: HDDS-1054
> URL: https://issues.apache.org/jira/browse/HDDS-1054
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Blocker
>

[jira] [Commented] (HDFS-14801) PrometheusMetricsSink: Better support for NNTop

2019-09-02 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921106#comment-16921106
 ] 

Akira Ajisaka commented on HDFS-14801:
--

Attached a screenshot.
After the patch, we can query NNTopMetrics by windowMs, op, or user.

This change affects only PrometheusMetricsSink. Since PrometheusMetricsSink has 
not been released yet, we can still break compatibility.

TODO:
Old metrics are still exposed via the '/prom' endpoint. We have to clear metrics 
if there are no updates within the windowMs.
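
As a purely illustrative sketch (plain Java, with a hypothetical metric name, 
label names and sample value, not the actual PrometheusMetricsSink code), the 
label-based form could be derived from the flattened nntop counter name like this:

{code:java}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class NNTopLabelSketch {
  // Matches flattened names shaped like:
  //   dfs.NNTopUserOpCounts.windowMs=<ms>.op=<op>.user=<user>.count
  private static final Pattern FLAT = Pattern.compile(
      "dfs\\.NNTopUserOpCounts\\.windowMs=(\\d+)\\.op=([^.]+)\\.user=(.+)\\.count");

  // Rewrites one flattened counter into a Prometheus sample that carries
  // windowMs, op and user as labels instead of parts of the metric name.
  static String toPrometheus(String flatName, long value) {
    Matcher m = FLAT.matcher(flatName);
    if (!m.matches()) {
      return null; // not an nntop counter
    }
    return String.format(
        "nn_top_user_op_counts{window_ms=\"%s\",op=\"%s\",user=\"%s\"} %d",
        m.group(1), m.group(2), m.group(3), value);
  }

  public static void main(String[] args) {
    // Hypothetical input and value; the real names come from the NameNode.
    System.out.println(toPrometheus(
        "dfs.NNTopUserOpCounts.windowMs=60000.op=getfileinfo.user=alice.count", 42));
  }
}
{code}

With labels in place, Prometheus can aggregate or filter by window, operation 
or user directly, instead of matching on substrings of the metric name.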

> PrometheusMetricsSink: Better support for NNTop
> ---
>
> Key: HDFS-14801
> URL: https://issues.apache.org/jira/browse/HDFS-14801
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: Screen Shot 2019-09-03 at 10.28.46.png
>
>
> Now nntop metrics are flattened as 
> dfs.NNTopUserOpCounts.windowMs=.op=.user=.count.
> I'd like to make windowMs, op, and user labels instead of parts of the name, 
> for more Prometheus-friendly metrics.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14801) PrometheusMetricsSink: Better support for NNTop

2019-09-02 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-14801:
-
Attachment: Screen Shot 2019-09-03 at 10.28.46.png

> PrometheusMetricsSink: Better support for NNTop
> ---
>
> Key: HDFS-14801
> URL: https://issues.apache.org/jira/browse/HDFS-14801
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: Screen Shot 2019-09-03 at 10.28.46.png
>
>
> Now nntop metrics are flattened as 
> dfs.NNTopUserOpCounts.windowMs=.op=.user=.count.
> I'd like to make windowMs, op, and user labels instead of parts of the name, 
> for more Prometheus-friendly metrics.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2022) Add additional freon tests

2019-09-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2022?focusedWorklogId=305362&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305362
 ]

ASF GitHub Bot logged work on HDDS-2022:


Author: ASF GitHub Bot
Created on: 03/Sep/19 01:01
Start Date: 03/Sep/19 01:01
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1341: HDDS-2022. Add 
additional freon tests
URL: https://github.com/apache/hadoop/pull/1341#issuecomment-527266540
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 38 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 637 | trunk passed |
   | +1 | compile | 421 | trunk passed |
   | +1 | checkstyle | 70 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 969 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 181 | trunk passed |
   | 0 | spotbugs | 474 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 701 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 589 | the patch passed |
   | +1 | compile | 372 | the patch passed |
   | +1 | javac | 372 | the patch passed |
   | +1 | checkstyle | 100 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 732 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 185 | the patch passed |
   | -1 | findbugs | 452 | hadoop-ozone generated 6 new + 0 unchanged - 0 fixed 
= 6 total (was 0) |
   ||| _ Other Tests _ |
   | +1 | unit | 267 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1620 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
   | | | 7785 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-ozone |
   |  |  Possible null pointer dereference of volume in 
org.apache.hadoop.ozone.freon.BaseFreonGenerator.ensureVolumeAndBucketExist(OzoneConfiguration,
 String, String) on exception path  Dereferenced at 
BaseFreonGenerator.java:volume in 
org.apache.hadoop.ozone.freon.BaseFreonGenerator.ensureVolumeAndBucketExist(OzoneConfiguration,
 String, String) on exception path  Dereferenced at 
BaseFreonGenerator.java:[line 263] |
   |  |  Unused field:OzoneClientKeyValidator.java |
   |  |  Unused field:OzoneClientKeyValidator.java |
   |  |  Dead store to configuration in 
org.apache.hadoop.ozone.freon.S3KeyGenerator.call()  At 
S3KeyGenerator.java:org.apache.hadoop.ozone.freon.S3KeyGenerator.call()  At 
S3KeyGenerator.java:[line 78] |
   |  |  Unused field:SameKeyReader.java |
   |  |  Unused field:SameKeyReader.java |
   | Failed junit tests | hadoop.hdds.scm.pipeline.TestSCMPipelineManager |
   |   | hadoop.ozone.om.TestSecureOzoneManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1341 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux fbcbc6db13ff 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 915cbc9 |
   | Default Java | 1.8.0_222 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/9/artifact/out/new-findbugs-hadoop-ozone.html
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/9/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/9/testReport/ |
   | Max. process+thread count | 4967 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/tools U: hadoop-ozone/tools |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/9/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 
---

[jira] [Updated] (HDFS-14802) The feature of protect directories should be used in RenameOp

2019-09-02 Thread Fei Hui (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-14802:
---
Status: Open  (was: Patch Available)

> The feature of protect directories should be used in RenameOp
> -
>
> Key: HDFS-14802
> URL: https://issues.apache.org/jira/browse/HDFS-14802
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.4, 3.3.0, 3.2.1, 3.1.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-14802.001.patch, HDFS-14802.002.patch, 
> HDFS-14802.003.patch
>
>
> Now we can set fs.protected.directories to prevent users from deleting 
> important directories. But users can still delete such directories by working 
> around the limitation:
> 1. Rename the directories and then delete them.
> 2. Move the directories to trash, and the namenode will delete them.
> So I think we should also apply the protected-directories feature in RenameOp.
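
For illustration, here is a minimal, self-contained sketch (a hypothetical 
helper with simplified path handling, not the actual patch and not the real 
FSDirectory code) of the check a rename would need: reject the operation when 
the source path is a protected directory or has a protected directory 
underneath it.

{code:java}
import java.util.Arrays;
import java.util.Collection;
import java.util.List;

public class ProtectedRenameSketch {
  // Returns true if renaming (and later deleting) 'src' would affect a
  // protected directory: either src itself is protected, or some protected
  // directory lives underneath src.
  static boolean renameTouchesProtectedDir(String src,
      Collection<String> protectedDirs) {
    String prefix = src.endsWith("/") ? src : src + "/";
    for (String dir : protectedDirs) {
      if (dir.equals(src) || dir.startsWith(prefix)) {
        return true;
      }
    }
    return false;
  }

  public static void main(String[] args) {
    // Hypothetical value of fs.protected.directories.
    List<String> protectedDirs = Arrays.asList("/warehouse", "/data/critical");
    System.out.println(renameTouchesProtectedDir("/warehouse", protectedDirs));   // true
    System.out.println(renameTouchesProtectedDir("/data", protectedDirs));        // true
    System.out.println(renameTouchesProtectedDir("/tmp/scratch", protectedDirs)); // false
  }
}
{code}

The real feature has more details (for example, how empty directories are 
treated), so this only shows where the rename path would hook in.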



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14802) The feature of protect directories should be used in RenameOp

2019-09-02 Thread Fei Hui (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-14802:
---
Status: Patch Available  (was: Open)

> The feature of protect directories should be used in RenameOp
> -
>
> Key: HDFS-14802
> URL: https://issues.apache.org/jira/browse/HDFS-14802
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.4, 3.3.0, 3.2.1, 3.1.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-14802.001.patch, HDFS-14802.002.patch, 
> HDFS-14802.003.patch
>
>
> Now we can set fs.protected.directories to prevent users from deleting 
> important directories. But users can still delete such directories by working 
> around the limitation:
> 1. Rename the directories and then delete them.
> 2. Move the directories to trash, and the namenode will delete them.
> So I think we should also apply the protected-directories feature in RenameOp.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2021) Upgrade Guava library to v26 in hdds project

2019-09-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2021?focusedWorklogId=305361&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305361
 ]

ASF GitHub Bot logged work on HDDS-2021:


Author: ASF GitHub Bot
Created on: 03/Sep/19 00:40
Start Date: 03/Sep/19 00:40
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1340: HDDS-2021. 
Upgrade Guava library to v26 in hdds project
URL: https://github.com/apache/hadoop/pull/1340#issuecomment-527264020
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 73 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 714 | trunk passed |
   | +1 | compile | 406 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 2044 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 180 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 63 | hadoop-hdds in the patch failed. |
   | -1 | compile | 83 | hadoop-hdds in the patch failed. |
   | -1 | javac | 83 | hadoop-hdds in the patch failed. |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 672 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 199 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 145 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1900 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 6357 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestOzoneManagerHA |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.om.TestSecureOzoneManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1340/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1340 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux 772874d84e0d 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 915cbc9 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1340/2/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1340/2/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1340/2/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1340/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1340/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1340/2/testReport/ |
   | Max. process+thread count | 5324 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds U: hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1340/2/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305361)
Time Spent: 1h 20m  (was: 1h 10m)

> Upgrade Guava library to v26 in hdds project
> 
>
> Key: HDDS-2021
> URL: https://issues.apache.org/jira/browse/HDDS-2021
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Repor

[jira] [Work logged] (HDDS-1843) Undetectable corruption after restart of a datanode

2019-09-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1843?focusedWorklogId=305360&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305360
 ]

ASF GitHub Bot logged work on HDDS-1843:


Author: ASF GitHub Bot
Created on: 03/Sep/19 00:34
Start Date: 03/Sep/19 00:34
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1364: HDDS-1843. 
Undetectable corruption after restart of a datanode.
URL: https://github.com/apache/hadoop/pull/1364#issuecomment-527263446
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 36 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for branch |
   | +1 | mvninstall | 580 | trunk passed |
   | +1 | compile | 406 | trunk passed |
   | +1 | checkstyle | 72 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 908 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 175 | trunk passed |
   | 0 | spotbugs | 473 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 693 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 37 | Maven dependency ordering for patch |
   | +1 | mvninstall | 609 | the patch passed |
   | +1 | compile | 399 | the patch passed |
   | +1 | cc | 399 | the patch passed |
   | +1 | javac | 399 | the patch passed |
   | -0 | checkstyle | 37 | hadoop-hdds: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | -0 | checkstyle | 41 | hadoop-ozone: The patch generated 4 new + 0 
unchanged - 0 fixed = 4 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | whitespace | 0 | The patch has 25 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 643 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 177 | the patch passed |
   | -1 | findbugs | 234 | hadoop-hdds generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0) |
   ||| _ Other Tests _ |
   | +1 | unit | 275 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2571 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 54 | The patch does not generate ASF License warnings. |
   | | | 8591 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-hdds |
   |  |  
org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatchRequest(ContainerProtos$ContainerCommandRequestProto,
 DispatcherContext) invokes inefficient new Long(long) constructor; use 
Long.valueOf(long) instead  At HddsDispatcher.java:Long(long) constructor; use 
Long.valueOf(long) instead  At HddsDispatcher.java:[line 241] |
   | Failed junit tests | hadoop.ozone.TestContainerOperations |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.scm.TestGetCommittedBlockLengthAndPutKey |
   |   | hadoop.ozone.scm.TestXceiverClientManager |
   |   | hadoop.ozone.TestContainerStateMachineIdempotency |
   |   | hadoop.ozone.om.TestSecureOzoneManager |
   |   | hadoop.ozone.container.metrics.TestContainerMetrics |
   |   | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.scm.TestXceiverClientMetrics |
   |   | hadoop.ozone.scm.TestContainerSmallFile |
   |   | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   |   | hadoop.ozone.container.TestContainerReplication |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1364/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1364 |
   | Optional Tests | dupname asflicense compile cc mvnsite javac unit javadoc 
mvninstall shadedclient findbugs checkstyle |
   | uname | Linux 23e0b2fdc078 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 915cbc9 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1364/4/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1364/4/artifact/out/diff-checkstyle-hadoop-ozone.txt
 

[jira] [Work logged] (HDDS-1949) Missing or error-prone test cleanup

2019-09-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1949?focusedWorklogId=305359&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305359
 ]

ASF GitHub Bot logged work on HDDS-1949:


Author: ASF GitHub Bot
Created on: 03/Sep/19 00:34
Start Date: 03/Sep/19 00:34
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1365: HDDS-1949. 
Missing or error-prone test cleanup
URL: https://github.com/apache/hadoop/pull/1365#issuecomment-527263400
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 9 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 609 | trunk passed |
   | +1 | compile | 396 | trunk passed |
   | +1 | checkstyle | 86 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 867 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 180 | trunk passed |
   | 0 | spotbugs | 482 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 691 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 592 | the patch passed |
   | +1 | compile | 453 | the patch passed |
   | +1 | javac | 453 | the patch passed |
   | +1 | checkstyle | 92 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 779 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 191 | the patch passed |
   | +1 | findbugs | 745 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 272 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2417 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 51 | The patch does not generate ASF License warnings. |
   | | | 8633 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachine |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.om.TestSecureOzoneManager |
   |   | hadoop.ozone.om.TestOMRatisSnapshots |
   |   | hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1365/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1365 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 620107b136b4 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 915cbc9 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1365/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1365/5/testReport/ |
   | Max. process+thread count | 4018 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/integration-test U: 
hadoop-ozone/integration-test |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1365/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305359)
Time Spent: 1h 10m  (was: 1h)

> Missing or error-prone test cleanup
> ---
>
> Key: HDDS-1949
> URL: https://issues.apache.org/jira/browse/HDDS-1949
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Some integration tests do not

[jira] [Work logged] (HDDS-1783) Latency metric for applyTransaction in ContainerStateMachine

2019-09-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1783?focusedWorklogId=305358&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305358
 ]

ASF GitHub Bot logged work on HDDS-1783:


Author: ASF GitHub Bot
Created on: 03/Sep/19 00:27
Start Date: 03/Sep/19 00:27
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1363: HDDS-1783 : 
Latency metric for applyTransaction in ContainerStateMach…
URL: https://github.com/apache/hadoop/pull/1363#issuecomment-527262612
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 36 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for branch |
   | +1 | mvninstall | 645 | trunk passed |
   | +1 | compile | 429 | trunk passed |
   | +1 | checkstyle | 92 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 915 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 181 | trunk passed |
   | 0 | spotbugs | 436 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 640 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 41 | Maven dependency ordering for patch |
   | +1 | mvninstall | 553 | the patch passed |
   | +1 | compile | 386 | the patch passed |
   | +1 | javac | 386 | the patch passed |
   | +1 | checkstyle | 87 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 731 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 175 | the patch passed |
   | +1 | findbugs | 658 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 290 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1716 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 51 | The patch does not generate ASF License warnings. |
   | | | 7924 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.om.TestSecureOzoneManager |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1363/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1363 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle shellcheck shelldocs |
   | uname | Linux d862b077c625 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 915cbc9 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1363/6/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1363/6/testReport/ |
   | Max. process+thread count | 5404 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service hadoop-ozone/dist 
hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1363/6/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305358)
Time Spent: 3h 50m  (was: 3h 40m)

> Latency metric for applyTransaction in ContainerStateMachine
> 
>
> Key: HDDS-1783
> URL: https://issues.apache.org/jira/browse/HDDS-1783
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Aravindan Vijayan

[jira] [Work logged] (HDDS-1553) Add metrics in rack aware container placement policy

2019-09-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1553?focusedWorklogId=305357&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305357
 ]

ASF GitHub Bot logged work on HDDS-1553:


Author: ASF GitHub Bot
Created on: 03/Sep/19 00:25
Start Date: 03/Sep/19 00:25
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1361: HDDS-1553. Add 
metrics in rack aware container placement policy.
URL: https://github.com/apache/hadoop/pull/1361#issuecomment-527262396
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 6 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 64 | Maven dependency ordering for branch |
   | +1 | mvninstall | 582 | trunk passed |
   | +1 | compile | 379 | trunk passed |
   | +1 | checkstyle | 82 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 879 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 178 | trunk passed |
   | 0 | spotbugs | 415 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 613 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 40 | Maven dependency ordering for patch |
   | +1 | mvninstall | 541 | the patch passed |
   | +1 | compile | 388 | the patch passed |
   | +1 | javac | 388 | the patch passed |
   | +1 | checkstyle | 86 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 686 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 174 | the patch passed |
   | +1 | findbugs | 631 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 269 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1886 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 54 | The patch does not generate ASF License warnings. |
   | | | 7747 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.om.TestSecureOzoneManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1361/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1361 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux c5dc564ae7bd 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 915cbc9 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1361/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1361/3/testReport/ |
   | Max. process+thread count | 5395 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1361/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305357)
Time Spent: 1h 40m  (was: 1.5h)

> Add metrics in rack aware container placement policy
> 
>
> Key: HDDS-1553
> URL: https://issues.apache.org/jira/browse/HDDS-1553
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> To collect the following statistics: 
> 1. total requested datanode count (A)
> 2. successfully allocated datanode count without constraint compromise (B)
> 

[jira] [Work logged] (HDDS-2030) Generate simplifed reports by the dev-support/checks/*.sh scripts

2019-09-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2030?focusedWorklogId=305353&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305353
 ]

ASF GitHub Bot logged work on HDDS-2030:


Author: ASF GitHub Bot
Created on: 02/Sep/19 23:41
Start Date: 02/Sep/19 23:41
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1348: HDDS-2030. 
Generate simplifed reports by the dev-support/checks/*.sh scripts
URL: https://github.com/apache/hadoop/pull/1348#issuecomment-527258280
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | 0 | @author | 0 | Skipping @author checks as author.sh has been patched. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 603 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 823 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 544 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | shellcheck | 0 | The patch generated 1 new + 0 unchanged - 0 fixed = 
1 total (was 0) |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 701 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | unit | 113 | hadoop-hdds in the patch passed. |
   | +1 | unit | 306 | hadoop-ozone in the patch passed. |
   | +1 | asflicense | 50 | The patch does not generate ASF License warnings. |
   | | | 3409 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/21/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1348 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs |
   | uname | Linux e1e198b89db2 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 915cbc9 |
   | shellcheck | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/21/artifact/out/diff-patch-shellcheck.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/21/testReport/ |
   | Max. process+thread count | 412 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/21/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305353)
Time Spent: 15h 50m  (was: 15h 40m)

> Generate simplifed reports by the dev-support/checks/*.sh scripts
> -
>
> Key: HDDS-2030
> URL: https://issues.apache.org/jira/browse/HDDS-2030
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 15h 50m
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains shell scripts to execute 
> different types of code checks (findbugs, checkstyle, etc.).
> Currently the contract is very simple: every shell script executes one (and 
> only one) check, and the shell exit code is set according to the result 
> (non-zero if the check failed).
> To have better reporting in the GitHub PR build, it would be great to improve 
> the scripts to generate simple summary files and save the relevant files for 
> archiving.
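
Purely as an illustration of the kind of summary the description asks for (the 
actual checks are bash scripts, and REPORT_DIR/summary.txt are only assumed 
here to mirror the review discussion below), a rough Java sketch that scans 
surefire TEST-*.xml reports and writes the failing ones into a summary file:

{code:java}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.stream.Stream;

public class UnitSummarySketch {
  public static void main(String[] args) throws IOException {
    Path reportDir = Paths.get(
        System.getenv().getOrDefault("REPORT_DIR", "."));
    List<String> failed = new ArrayList<>();

    // Collect every surefire report that contains a failure or an error.
    try (Stream<Path> files = Files.walk(Paths.get("."))) {
      files.filter(p -> p.getFileName().toString().matches("TEST-.*\\.xml"))
           .forEach(p -> {
             try {
               String xml = new String(Files.readAllBytes(p),
                   StandardCharsets.UTF_8);
               if (xml.contains("<failure") || xml.contains("<error")) {
                 failed.add(p.toString());
               }
             } catch (IOException ignored) {
               // unreadable report: skip it in this sketch
             }
           });
    }

    // Write a simple summary file and keep the existing contract:
    // non-zero exit code when something failed.
    Files.createDirectories(reportDir);
    Files.write(reportDir.resolve("summary.txt"),
        failed.isEmpty()
            ? Collections.singletonList("All unit tests passed.")
            : failed);
    System.exit(failed.isEmpty() ? 0 : 1);
  }
}
{code}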



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apach

[jira] [Work logged] (HDDS-2030) Generate simplifed reports by the dev-support/checks/*.sh scripts

2019-09-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2030?focusedWorklogId=305352&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305352
 ]

ASF GitHub Bot logged work on HDDS-2030:


Author: ASF GitHub Bot
Created on: 02/Sep/19 23:41
Start Date: 02/Sep/19 23:41
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1348: 
HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts
URL: https://github.com/apache/hadoop/pull/1348#discussion_r320055906
 
 

 ##
 File path: hadoop-ozone/dev-support/checks/_mvn_unit_report.sh
 ##
 @@ -0,0 +1,66 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+REPORT_DIR=${REPORT_DIR:-$PWD}
+
+## generate summary txt file
+find "." -name 'TEST*.xml' -print0 \
+| xargs -n1 -0 "grep" -l -E "

> Generate simplifed reports by the dev-support/checks/*.sh scripts
> -
>
> Key: HDDS-2030
> URL: https://issues.apache.org/jira/browse/HDDS-2030
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 15h 40m
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains shell scripts to execute 
> different types of code checks (findbugs, checkstyle, etc.).
> Currently the contract is very simple: every shell script executes one (and 
> only one) check, and the shell exit code is set according to the result 
> (non-zero if the check failed).
> To have better reporting in the GitHub PR build, it would be great to improve 
> the scripts to generate simple summary files and save the relevant files for 
> archiving.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2018) Handle Set DtService of token for OM HA

2019-09-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2018?focusedWorklogId=305350&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305350
 ]

ASF GitHub Bot logged work on HDDS-2018:


Author: ASF GitHub Bot
Created on: 02/Sep/19 23:36
Start Date: 02/Sep/19 23:36
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1371: HDDS-2018. 
Handle Set DtService of token for OM HA.
URL: https://github.com/apache/hadoop/pull/1371#issuecomment-527257838
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 35 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 609 | trunk passed |
   | +1 | compile | 365 | trunk passed |
   | +1 | checkstyle | 74 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 922 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 166 | trunk passed |
   | 0 | spotbugs | 429 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 628 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 539 | the patch passed |
   | +1 | compile | 372 | the patch passed |
   | +1 | javac | 372 | the patch passed |
   | -0 | checkstyle | 40 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 732 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 168 | the patch passed |
   | +1 | findbugs | 697 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 283 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1623 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
   | | | 7489 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestSecureOzoneManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1371/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1371 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux eae18d9bf983 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 915cbc9 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1371/3/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1371/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1371/3/testReport/ |
   | Max. process+thread count | 4993 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common U: hadoop-ozone/common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1371/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305350)
Time Spent: 0.5h  (was: 20m)

> Handle Set DtService of token for OM HA
> ---
>
> Key: HDDS-2018
> URL: https://issues.apache.org/jira/browse/HDDS-2018
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When OM HA is enabled and tokens are generated, the service name should be 
> set with the addresses of all OMs.
>  
> Currently, without HA, it is set to the OM RpcAddress string. This Jira is to 
> handle:
>  #

[jira] [Work logged] (HDDS-1577) Add default pipeline placement policy implementation

2019-09-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1577?focusedWorklogId=305349&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305349
 ]

ASF GitHub Bot logged work on HDDS-1577:


Author: ASF GitHub Bot
Created on: 02/Sep/19 23:34
Start Date: 02/Sep/19 23:34
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1366: HDDS-1577. Add 
default pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#issuecomment-527257641
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 68 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ HDDS-1564 Compile Tests _ |
   | 0 | mvndep | 15 | Maven dependency ordering for branch |
   | +1 | mvninstall | 736 | HDDS-1564 passed |
   | +1 | compile | 445 | HDDS-1564 passed |
   | +1 | checkstyle | 87 | HDDS-1564 passed |
   | +1 | mvnsite | 0 | HDDS-1564 passed |
   | +1 | shadedclient | 1089 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 192 | HDDS-1564 passed |
   | 0 | spotbugs | 543 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 785 | HDDS-1564 passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for patch |
   | -1 | mvninstall | 89 | hadoop-ozone in the patch failed. |
   | -1 | compile | 57 | hadoop-ozone in the patch failed. |
   | -1 | javac | 57 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 85 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 754 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 101 | hadoop-ozone generated 129 new + 27 unchanged - 0 
fixed = 156 total (was 27) |
   | -1 | findbugs | 100 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 322 | hadoop-hdds in the patch passed. |
   | -1 | unit | 67 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 40 | The patch does not generate ASF License warnings. |
   | | | 5831 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1366/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1366 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 0acc47c5b844 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | HDDS-1564 / b1eee8b |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1366/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1366/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1366/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1366/1/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1366/1/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1366/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1366/1/testReport/ |
   | Max. process+thread count | 554 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/server-scm U: hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1366/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305349)
Time Spent: 1h 20m  (was: 1h 

[jira] [Work logged] (HDDS-2053) Fix TestOzoneManagerRatisServer failure

2019-09-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2053?focusedWorklogId=305346&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305346
 ]

ASF GitHub Bot logged work on HDDS-2053:


Author: ASF GitHub Bot
Created on: 02/Sep/19 23:13
Start Date: 02/Sep/19 23:13
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1373: HDDS-2053. Fix 
TestOzoneManagerRatisServer failure. Contributed by Xi…
URL: https://github.com/apache/hadoop/pull/1373#issuecomment-527255833
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 76 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 628 | trunk passed |
   | +1 | compile | 374 | trunk passed |
   | +1 | checkstyle | 72 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 939 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 172 | trunk passed |
   | 0 | spotbugs | 432 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 631 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 552 | the patch passed |
   | +1 | compile | 372 | the patch passed |
   | +1 | javac | 372 | the patch passed |
   | +1 | checkstyle | 76 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 743 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 167 | the patch passed |
   | +1 | findbugs | 649 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 283 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1705 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 7640 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestSecureOzoneManager |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientForAclAuditLog |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1373/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1373 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 8a011fbc359a 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 915cbc9 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1373/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1373/2/testReport/ |
   | Max. process+thread count | 5356 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1373/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305346)
Time Spent: 0.5h  (was: 20m)

> Fix TestOzoneManagerRatisServer failure
> ---
>
> Key: HDDS-2053
> URL: https://issues.apache.org/jira/browse/HDDS-2053
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mai

[jira] [Work logged] (HDDS-2048) State check during container state transition in datanode should be lock protected

2019-09-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2048?focusedWorklogId=305344&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305344
 ]

ASF GitHub Bot logged work on HDDS-2048:


Author: ASF GitHub Bot
Created on: 02/Sep/19 23:11
Start Date: 02/Sep/19 23:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1375: HDDS-2048: State 
check during container state transition in datanode should be lock protected
URL: https://github.com/apache/hadoop/pull/1375#issuecomment-527255682
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 36 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 580 | trunk passed |
   | +1 | compile | 378 | trunk passed |
   | +1 | checkstyle | 80 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 867 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 176 | trunk passed |
   | 0 | spotbugs | 420 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 613 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 533 | the patch passed |
   | +1 | compile | 404 | the patch passed |
   | +1 | javac | 404 | the patch passed |
   | +1 | checkstyle | 87 | the patch passed |
   | +1 | mvnsite | 1 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 698 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 180 | the patch passed |
   | +1 | findbugs | 675 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 276 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1794 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 49 | The patch does not generate ASF License warnings. |
   | | | 7595 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.om.snapshot.TestOzoneManagerSnapshotProvider |
   |   | hadoop.ozone.om.TestSecureOzoneManager |
   |   | hadoop.ozone.om.TestOzoneManagerRestart |
   |   | hadoop.ozone.om.TestOMDbCheckpointServlet |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1375/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1375 |
   | JIRA Issue | HDDS-2048 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux cc8fecdd31b3 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 915cbc9 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1375/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1375/3/testReport/ |
   | Max. process+thread count | 5057 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service U: 
hadoop-hdds/container-service |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1375/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305344)
Time Spent: 20m  (was: 10m)

> State check during container state transition in datanode should be lock 
> protected
> --
>
> Key: HDDS-2048
> URL: https://issues.apache.org/jira/browse/HDDS-2048
> Project: Hadoop Distributed Data Store
>  Issue Typ
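
The subject line names a classic check-then-act race: if the current container state is examined outside the lock that guards the transition, another thread can change the state between the check and the update. Below is a minimal Java sketch of the locking pattern being asked for; the class, enum and method names are illustrative only and are not the actual Ozone datanode container code.

{code}
// Hypothetical illustration of a lock-protected check-then-act transition.
// Names are made up; this is not the Ozone datanode container implementation.
import java.util.concurrent.locks.ReentrantLock;

public class ContainerStateHolder {
  public enum State { OPEN, CLOSING, CLOSED }

  private final ReentrantLock lock = new ReentrantLock();
  private State state = State.OPEN;

  // Unsafe: another thread may move the container out of OPEN between
  // the check and the assignment, so the precondition no longer holds.
  public void closeUnsafe() {
    if (state == State.OPEN) {
      state = State.CLOSED;
    }
  }

  // Safe: the state check and the transition happen under the same lock,
  // so the observed state cannot change before the transition is applied.
  public void close() {
    lock.lock();
    try {
      if (state == State.OPEN) {
        state = State.CLOSED;
      }
    } finally {
      lock.unlock();
    }
  }
}
{code}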

[jira] [Commented] (HDDS-1994) Compilation failure due to missing class ScmBlockLocationTestingClient

2019-09-02 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921078#comment-16921078
 ] 

Hadoop QA commented on HDDS-1994:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 10s{color} 
| {color:red} https://github.com/apache/hadoop/pull/1322 does not apply to 
trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| GITHUB PR | https://github.com/apache/hadoop/pull/1322 |
| JIRA Issue | HDDS-1994 |
| Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1322/3/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |


This message was automatically generated.



> Compilation failure due to missing class ScmBlockLocationTestingClient
> --
>
> Key: HDDS-1994
> URL: https://issues.apache.org/jira/browse/HDDS-1994
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The ozone build is failing due to following compilation error,
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-ozone-ozone-manager: Compilation 
> failure: Compilation failure:
> [ERROR] 
> /Users/hgadre/git-repo/upstream/hadoop/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestKeyDeletingService.java:[94,17]
>  cannot find symbol
> [ERROR]   symbol:   class ScmBlockLocationTestingClient
> [ERROR]   location: class org.apache.hadoop.ozone.om.TestKeyDeletingService
> [ERROR] 
> /Users/hgadre/git-repo/upstream/hadoop/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestKeyDeletingService.java:[116,17]
>  cannot find symbol
> [ERROR]   symbol:   class ScmBlockLocationTestingClient
> [ERROR]   location: class org.apache.hadoop.ozone.om.TestKeyDeletingService
> [ERROR] 
> /Users/hgadre/git-repo/upstream/hadoop/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestKeyDeletingService.java:[143,17]
>  cannot find symbol
> [ERROR]   symbol:   class ScmBlockLocationTestingClient
> [ERROR]   location: class org.apache.hadoop.ozone.om.TestKeyDeletingService
> [ERROR] -> [Help 1]



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1994) Compilation failure due to missing class ScmBlockLocationTestingClient

2019-09-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1994?focusedWorklogId=305345&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305345
 ]

ASF GitHub Bot logged work on HDDS-1994:


Author: ASF GitHub Bot
Created on: 02/Sep/19 23:11
Start Date: 02/Sep/19 23:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1322:  [HDDS-1994] 
Rename ScmBlockLocationTestIngClient.java to ScmBlockLocationTestingClient.java
URL: https://github.com/apache/hadoop/pull/1322#issuecomment-527255733
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 10 | https://github.com/apache/hadoop/pull/1322 does not 
apply to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/1322 |
   | JIRA Issue | HDDS-1994 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1322/3/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305345)
Time Spent: 40m  (was: 0.5h)

> Compilation failure due to missing class ScmBlockLocationTestingClient
> --
>
> Key: HDDS-1994
> URL: https://issues.apache.org/jira/browse/HDDS-1994
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The ozone build is failing due to following compilation error,
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-ozone-ozone-manager: Compilation 
> failure: Compilation failure:
> [ERROR] 
> /Users/hgadre/git-repo/upstream/hadoop/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestKeyDeletingService.java:[94,17]
>  cannot find symbol
> [ERROR]   symbol:   class ScmBlockLocationTestingClient
> [ERROR]   location: class org.apache.hadoop.ozone.om.TestKeyDeletingService
> [ERROR] 
> /Users/hgadre/git-repo/upstream/hadoop/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestKeyDeletingService.java:[116,17]
>  cannot find symbol
> [ERROR]   symbol:   class ScmBlockLocationTestingClient
> [ERROR]   location: class org.apache.hadoop.ozone.om.TestKeyDeletingService
> [ERROR] 
> /Users/hgadre/git-repo/upstream/hadoop/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestKeyDeletingService.java:[143,17]
>  cannot find symbol
> [ERROR]   symbol:   class ScmBlockLocationTestingClient
> [ERROR]   location: class org.apache.hadoop.ozone.om.TestKeyDeletingService
> [ERROR] -> [Help 1]



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2048) State check during container state transition in datanode should be lock protected

2019-09-02 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921077#comment-16921077
 ] 

Hadoop QA commented on HDDS-2048:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
56s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  7m  
0s{color} | {color:blue} Used deprecated FindBugs config; considering switching 
to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 10m 
13s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 11m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
36s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 29m 54s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
49s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}126m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
|   | hadoop.ozone.om.snapshot.TestOzoneManagerSnapshotProvider |
|   | hadoop.ozone.om.TestSecureOzoneManager |
|   | hadoop.ozone.om.TestOzoneManagerRestart |
|   | hadoop.ozone.om.TestOMDbCheckpointServlet |
|   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1375/3/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/1375 |
| JIRA Issue | HDDS-2048 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
uni

[jira] [Work logged] (HDDS-2020) Remove mTLS from Ozone GRPC

2019-09-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2020?focusedWorklogId=305343&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305343
 ]

ASF GitHub Bot logged work on HDDS-2020:


Author: ASF GitHub Bot
Created on: 02/Sep/19 22:52
Start Date: 02/Sep/19 22:52
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1369: HDDS-2020. 
Remove mTLS from Ozone GRPC. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/1369#issuecomment-527254153
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 57 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 2 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 12 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 28 | Maven dependency ordering for branch |
   | +1 | mvninstall | 739 | trunk passed |
   | -1 | compile | 54 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 101 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1078 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 241 | trunk passed |
   | 0 | spotbugs | 268 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 44 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 35 | Maven dependency ordering for patch |
   | -1 | mvninstall | 47 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 29 | hadoop-ozone in the patch failed. |
   | -1 | compile | 35 | hadoop-hdds in the patch failed. |
   | -1 | compile | 27 | hadoop-ozone in the patch failed. |
   | -1 | cc | 35 | hadoop-hdds in the patch failed. |
   | -1 | cc | 27 | hadoop-ozone in the patch failed. |
   | -1 | javac | 35 | hadoop-hdds in the patch failed. |
   | -1 | javac | 27 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 26 | The patch fails to run checkstyle in hadoop-hdds |
   | -0 | checkstyle | 24 | The patch fails to run checkstyle in hadoop-ozone |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 5 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 805 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 32 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 27 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 64 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 26 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 38 | hadoop-hdds in the patch failed. |
   | -1 | unit | 30 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 47 | The patch does not generate ASF License warnings. |
   | | | 4159 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1369 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml cc |
   | uname | Linux 40291458a36c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 915cbc9 |
   | Default Java | 1.8.0_222 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/4/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/4/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/4/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/4/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/4/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/4/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | cc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/4/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | cc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/4/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/4/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/4/artifact/out/

[jira] [Work logged] (HDDS-2057) Incorrect Default OM Port in Ozone FS URI Error Message

2019-09-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2057?focusedWorklogId=305342&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305342
 ]

ASF GitHub Bot logged work on HDDS-2057:


Author: ASF GitHub Bot
Created on: 02/Sep/19 22:44
Start Date: 02/Sep/19 22:44
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1377: HDDS-2057. 
Incorrect Default OM Port in Ozone FS URI Error Message. Contributed by 
Supratim Deka
URL: https://github.com/apache/hadoop/pull/1377#issuecomment-527253466
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 73 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 772 | trunk passed |
   | +1 | compile | 456 | trunk passed |
   | +1 | checkstyle | 76 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1095 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 223 | trunk passed |
   | 0 | spotbugs | 549 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 812 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 677 | the patch passed |
   | +1 | compile | 471 | the patch passed |
   | +1 | javac | 471 | the patch passed |
   | -0 | checkstyle | 48 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 851 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 207 | the patch passed |
   | -1 | findbugs | 228 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 53 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 42 | hadoop-hdds in the patch failed. |
   | -1 | unit | 26 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 63 | The patch does not generate ASF License warnings. |
   | | | 6409 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1377/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1377 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 4c03a41eabe6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 915cbc9 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1377/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1377/1/artifact/out/patch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1377/1/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1377/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1377/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1377/1/testReport/ |
   | Max. process+thread count | 412 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozonefs U: hadoop-ozone/ozonefs |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1377/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305342)
Time Spent: 40m  (was: 0.5h)

> Incorrect Default OM Port in Ozone FS URI Error Message
> ---
>
> Key: HDDS-2057
> URL: https:/

[jira] [Commented] (HDFS-13157) Do Not Remove Blocks Sequentially During Decommission

2019-09-02 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921065#comment-16921065
 ] 

Hadoop QA commented on HDFS-13157:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
39s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
37s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 43s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 162 unchanged - 1 fixed = 164 total (was 163) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 89m 49s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}149m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.TestGetBlocks |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1391/1/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/1391 |
| JIRA Issue | HDFS-13157 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 4b8395cfeab6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 915cbc9 |
| Default Java | 1.8.0_222 |
| checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1391/
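
The subject of this issue suggests spreading decommission work across a node's storages instead of draining one storage's block list from start to finish. As a purely hypothetical illustration of that general idea (not the change in the attached patch), a round-robin interleave over per-storage block lists could look like this:

{code}
// Hypothetical sketch: interleave per-storage block lists so no single
// storage is processed sequentially while the others wait. Not the actual
// HDFS BlockManager/DatanodeAdminManager change.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class InterleaveExample {
  static <T> List<T> interleave(List<List<T>> perStorageBlocks) {
    List<T> result = new ArrayList<>();
    int index = 0;
    boolean added = true;
    while (added) {
      added = false;
      for (List<T> storage : perStorageBlocks) {
        if (index < storage.size()) {
          result.add(storage.get(index));
          added = true;
        }
      }
      index++;
    }
    return result;
  }

  public static void main(String[] args) {
    List<List<String>> perStorage = Arrays.asList(
        Arrays.asList("blk_1", "blk_2", "blk_3"),
        Arrays.asList("blk_4", "blk_5"));
    // Prints [blk_1, blk_4, blk_2, blk_5, blk_3]
    System.out.println(interleave(perStorage));
  }
}
{code}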

[jira] [Work logged] (HDDS-2065) Implement OMNodeDetails#toString

2019-09-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2065?focusedWorklogId=305335&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305335
 ]

ASF GitHub Bot logged work on HDDS-2065:


Author: ASF GitHub Bot
Created on: 02/Sep/19 22:14
Start Date: 02/Sep/19 22:14
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1387: HDDS-2065. 
Implement OMNodeDetails#toString
URL: https://github.com/apache/hadoop/pull/1387#issuecomment-527250877
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 40 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 590 | trunk passed |
   | +1 | compile | 380 | trunk passed |
   | +1 | checkstyle | 83 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 879 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 178 | trunk passed |
   | 0 | spotbugs | 421 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 615 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 536 | the patch passed |
   | +1 | compile | 386 | the patch passed |
   | +1 | javac | 386 | the patch passed |
   | +1 | checkstyle | 89 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 675 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 176 | the patch passed |
   | +1 | findbugs | 625 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 266 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1860 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 56 | The patch does not generate ASF License warnings. |
   | | | 7611 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.om.TestSecureOzoneManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1387/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1387 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux f9e4f2b5182b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 915cbc9 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1387/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1387/1/testReport/ |
   | Max. process+thread count | 5404 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1387/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305335)
Time Spent: 40m  (was: 0.5h)

> Implement OMNodeDetails#toString
> 
>
> Key: HDDS-2065
> URL: https://issues.apache.org/jira/browse/HDDS-2065
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Wrote this snippet while debugging OM HA. Might be useful for oth
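
The snippet mentioned above is not included in this digest. As a rough sketch of what a one-line toString for an OM node-details holder might look like (field names here are hypothetical, not the actual OMNodeDetails members or the committed patch):

{code}
// Hypothetical sketch only; field names are illustrative, not the real
// OMNodeDetails fields.
public class OmNodeDetailsSketch {
  private final String omServiceId;
  private final String omNodeId;
  private final String rpcAddress;

  OmNodeDetailsSketch(String omServiceId, String omNodeId, String rpcAddress) {
    this.omServiceId = omServiceId;
    this.omNodeId = omNodeId;
    this.rpcAddress = rpcAddress;
  }

  @Override
  public String toString() {
    // A compact one-line form is convenient in OM HA debug logs.
    return "OMNodeDetails{serviceId=" + omServiceId
        + ", nodeId=" + omNodeId
        + ", rpcAddress=" + rpcAddress + "}";
  }
}
{code}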

[jira] [Work logged] (HDDS-2015) Encrypt/decrypt key using symmetric key while writing/reading

2019-09-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2015?focusedWorklogId=305333&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305333
 ]

ASF GitHub Bot logged work on HDDS-2015:


Author: ASF GitHub Bot
Created on: 02/Sep/19 21:55
Start Date: 02/Sep/19 21:55
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1386: HDDS-2015. 
Encrypt/decrypt key using symmetric key while writing/reading
URL: https://github.com/apache/hadoop/pull/1386#issuecomment-527248896
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 53 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 77 | Maven dependency ordering for branch |
   | +1 | mvninstall | 760 | trunk passed |
   | +1 | compile | 455 | trunk passed |
   | +1 | checkstyle | 86 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | -1 | shadedclient | 359 | branch has errors when building and testing our 
client artifacts. |
   | -1 | javadoc | 81 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 66 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 624 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 41 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 71 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 39 | Maven dependency ordering for patch |
   | +1 | mvninstall | 739 | the patch passed |
   | +1 | compile | 514 | the patch passed |
   | +1 | javac | 514 | the patch passed |
   | +1 | checkstyle | 103 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 953 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 108 | hadoop-hdds generated 13 new + 3 unchanged - 0 fixed 
= 16 total (was 3) |
   | -1 | javadoc | 129 | hadoop-ozone generated 26 new + 1 unchanged - 0 fixed 
= 27 total (was 1) |
   | -1 | findbugs | 517 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 336 | hadoop-hdds in the patch failed. |
   | -1 | unit | 76 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 6079 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.safemode.TestSCMSafeModeManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1386/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1386 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 01cb3d2ae667 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 915cbc9 |
   | Default Java | 1.8.0_222 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1386/1/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1386/1/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1386/1/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1386/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1386/1/artifact/out/diff-javadoc-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1386/1/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1386/1/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1386/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1386/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1386/1/testReport/ |
   | Max. process+thread count | 585 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/client hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://buil
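
The subject describes protecting a per-object key by encrypting it with a symmetric key on the write path and decrypting it on the read path. The following is only a generic JCE sketch of that technique (AES/GCM wrap and unwrap of a small secret); it says nothing about how the Ozone patch actually derives, manages or stores its keys.

{code}
// Generic illustration of symmetric-key encrypt/decrypt with the JCE.
// Not the Ozone implementation; key handling and algorithm choice here
// are assumptions made for the sake of a runnable example.
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class SymmetricKeySketch {
  public static void main(String[] args) throws Exception {
    KeyGenerator keyGen = KeyGenerator.getInstance("AES");
    keyGen.init(256);
    SecretKey symmetricKey = keyGen.generateKey();

    byte[] secret = "per-object-key-material".getBytes(StandardCharsets.UTF_8);

    byte[] iv = new byte[12];                  // 96-bit IV, standard for GCM
    new SecureRandom().nextBytes(iv);

    // Write path: encrypt the secret before it is persisted.
    Cipher enc = Cipher.getInstance("AES/GCM/NoPadding");
    enc.init(Cipher.ENCRYPT_MODE, symmetricKey, new GCMParameterSpec(128, iv));
    byte[] encrypted = enc.doFinal(secret);

    // Read path: decrypt with the same key and IV to recover the secret.
    Cipher dec = Cipher.getInstance("AES/GCM/NoPadding");
    dec.init(Cipher.DECRYPT_MODE, symmetricKey, new GCMParameterSpec(128, iv));
    byte[] decrypted = dec.doFinal(encrypted);

    System.out.println(new String(decrypted, StandardCharsets.UTF_8));
  }
}
{code}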

[jira] [Commented] (HDFS-14699) Erasure Coding: Can NOT trigger the reconstruction when have the dup internal blocks and missing one internal block

2019-09-02 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921048#comment-16921048
 ] 

Hadoop QA commented on HDFS-14699:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m 34s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}155m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.tools.TestDFSZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14699 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12979164/HDFS-14699.05.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux bb28173f6b18 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 915cbc9 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27764/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27764/testReport/ |
| Max. process+thread count | 2977 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27764/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org 

[jira] [Updated] (HDFS-14777) RBF: Set ReadOnly is failing for mount Table but actually readonly succed to set

2019-09-02 Thread Ranith Sardar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14777:
-
Attachment: HDFS-14777.004.patch

> RBF: Set ReadOnly is failing for mount Table but actually readonly succed to 
> set
> 
>
> Key: HDFS-14777
> URL: https://issues.apache.org/jira/browse/HDFS-14777
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14777.001.patch, HDFS-14777.002.patch, 
> HDFS-14777.003.patch, HDFS-14777.004.patch
>
>
> # hdfs dfsrouteradmin -update /test hacluster /test -readonly
> /opt/client # hdfs dfsrouteradmin -update /test hacluster /test -readonly
> update: /test is in a read only mount point
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): /test is in a read only mount point
>   at org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:1419)
>   at org.apache.hadoop.hdfs.server.federation.router.Quota.getQuotaRemoteLocations(Quota.java:217)
>   at org.apache.hadoop.hdfs.server.federation.router.Quota.setQuota(Quota.java:75)
>   at org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.synchronizeQuota(RouterAdminServer.java:288)
>   at org.apache.hadoop.hdfs.server.federation.router.RouterAdminServer.updateMountTableEntry(RouterAdminServer.java:267)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14519) NameQuota is not update after concat operation, so namequota is wrong

2019-09-02 Thread Ranith Sardar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14519:
-
Attachment: HDFS-14519.002.patch

> NameQuota is not update after concat operation, so namequota is wrong
> -
>
> Key: HDFS-14519
> URL: https://issues.apache.org/jira/browse/HDFS-14519
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14519.001.patch, HDFS-14519.002.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2069) Default values of property hdds.datanode.storage.utilization.critical.threshold and hdds.datanode.storage.utilization.warning.threshold are not reasonable

2019-09-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2069?focusedWorklogId=305305&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305305
 ]

ASF GitHub Bot logged work on HDDS-2069:


Author: ASF GitHub Bot
Created on: 02/Sep/19 18:40
Start Date: 02/Sep/19 18:40
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1393: HDDS-2069. 
Default values of property hdds.datanode.storage.utilizati…
URL: https://github.com/apache/hadoop/pull/1393#issuecomment-527221598
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 612 | trunk passed |
   | +1 | compile | 381 | trunk passed |
   | +1 | checkstyle | 78 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 854 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 175 | trunk passed |
   | 0 | spotbugs | 461 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 671 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 558 | the patch passed |
   | +1 | compile | 393 | the patch passed |
   | +1 | javac | 393 | the patch passed |
   | +1 | checkstyle | 83 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 697 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 180 | the patch passed |
   | +1 | findbugs | 710 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 323 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1700 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
   | | | 7694 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.client.rpc.TestDeleteWithSlowFollower |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.om.TestSecureOzoneManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1393/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1393 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 3c6889294b1a 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 040f6e9 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1393/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1393/1/testReport/ |
   | Max. process+thread count | 4180 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common U: hadoop-hdds/common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1393/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305305)
Time Spent: 20m  (was: 10m)

> Default values of property 
> hdds.datanode.storage.utilization.critical.threshold and 
> hdds.datanode.storage.utilization.warning.threshold are not reasonable
> --
>
> Key: HDDS-2069
> URL: https://issues.apache.org/jira/browse/HDDS-2069
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>

[jira] [Commented] (HDFS-14810) review FSNameSystem editlog sync

2019-09-02 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921004#comment-16921004
 ] 

Hadoop QA commented on HDFS-14810:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 32s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 82m 55s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}135m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy |
|   | hadoop.hdfs.server.balancer.TestBalancer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14810 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12979126/HDFS-14810.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4ece6c9c7e7d 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 040f6e9 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27763/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27763/testReport/ |
| Max. process+thread count | 5097 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hd

[jira] [Commented] (HDFS-14810) review FSNameSystem editlog sync

2019-09-02 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921000#comment-16921000
 ] 

Ayush Saxena commented on HDFS-14810:
-

Thanx [~hexiaoqiao] for confirming.
[~jojochuang] anything you would like to add?

> review FSNameSystem editlog sync
> 
>
> Key: HDFS-14810
> URL: https://issues.apache.org/jira/browse/HDFS-14810
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-14810.001.patch, HDFS-14810.002.patch, 
> HDFS-14810.003.patch
>
>
> refactor and unified type of edit log sync in FSNamesystem as HDFS-11246 
> mentioned.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14505) "touchz" command should check quota limit before deleting an already existing file

2019-09-02 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina reassigned HDFS-14505:


Assignee: hemanthboyina

> "touchz" command should check quota limit before deleting an already existing 
> file
> --
>
> Key: HDFS-14505
> URL: https://issues.apache.org/jira/browse/HDFS-14505
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: hemanthboyina
>Priority: Major
>
> {code:java}
> HW15685:bin sbanerjee$ ./hdfs dfs -ls /dir2
> 2019-05-21 15:14:01,080 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> Found 1 items
> -rw-r--r--   1 sbanerjee hadoop          0 2019-05-21 15:10 /dir2/file4
> HW15685:bin sbanerjee$ ./hdfs dfs -touchz /dir2/file4
> 2019-05-21 15:14:12,247 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> touchz: The NameSpace quota (directories and files) of directory /dir2 is 
> exceeded: quota=3 file count=5
> HW15685:bin sbanerjee$ ./hdfs dfs -ls /dir2
> 2019-05-21 15:14:20,607 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> {code}
> Here, the "touchz" command failed to create the file as the quota limit was 
> hit, but ended up deleting the original file which existed. It should do the 
> quota check before deleting the file so that after successful deletion, 
> creation should succeed.
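
A minimal sketch of the ordering the description asks for; the names below
(QuotaChecker, verifyNamespaceQuota, the in-memory namespace set) are illustrative
stand-ins, not the actual NameNode APIs:

{code:java}
import java.util.Set;

// Hypothetical sketch only: check the namespace quota of the parent before
// deleting the existing entry, so a failed check leaves the old file intact.
final class TouchzSketch {
  interface QuotaChecker {
    /** Throws if adding 'newEntries' names under 'parent' would exceed the quota. */
    void verifyNamespaceQuota(String parent, int newEntries) throws Exception;
  }

  static void touchz(String path, QuotaChecker quota, Set<String> namespace)
      throws Exception {
    int slash = path.lastIndexOf('/');
    String parent = slash <= 0 ? "/" : path.substring(0, slash);
    boolean exists = namespace.contains(path);
    // Replacing an existing file is net-zero on the namespace count, so run the
    // quota check as if the old entry were already gone, before touching it.
    quota.verifyNamespaceQuota(parent, exists ? 0 : 1);
    if (exists) {
      namespace.remove(path); // safe: the quota check has already passed
    }
    namespace.add(path);      // create the new zero-length entry
  }
}
{code}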



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12831) HDFS throws FileNotFoundException on getFileBlockLocations(path-to-directory)

2019-09-02 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-12831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920997#comment-16920997
 ] 

hemanthboyina commented on HDFS-12831:
--

[~hanishakoneru] can I take up this Jira?

> HDFS throws FileNotFoundException on getFileBlockLocations(path-to-directory)
> -
>
> Key: HDFS-12831
> URL: https://issues.apache.org/jira/browse/HDFS-12831
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Assignee: Hanisha Koneru
>Priority: Major
>
> The HDFS implementation of {{getFileBlockLocations(path, offset, len)}} 
> throws an exception if the path references a directory. 
> The base implementation (and all other filesystems) just returns an empty 
> array, something implemented in {{getFileBlockLocations(filestatus, offset, 
> len)}} and written up in filesystem.md as the correct behaviour. 
> # has been shown to break things: SPARK-14959
> # there are no contract tests for these APIs; shows up in HADOOP-15044. 
> # even if this is considered a wontfix, it should raise something like 
> {{PathIsDirectoryException}} rather than FNFE
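
For illustration, a small sketch of the contract described in filesystem.md, using
the public FileStatus/BlockLocation types; this is only a stand-in for the base
FileSystem behaviour, not the HDFS patch itself:

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;

final class BlockLocationsSketch {
  static BlockLocation[] getFileBlockLocations(FileStatus stat, long offset, long len)
      throws IOException {
    if (stat == null) {
      throw new IOException("null FileStatus");
    }
    if (stat.isDirectory()) {
      // Directories have no blocks: return an empty array (the documented
      // behaviour) instead of throwing FileNotFoundException.
      return new BlockLocation[0];
    }
    // ... for a regular file, resolve and return the real block locations ...
    return new BlockLocation[0];
  }
}
{code}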



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14810) review FSNameSystem editlog sync

2019-09-02 Thread He Xiaoqiao (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920981#comment-16920981
 ] 

He Xiaoqiao commented on HDFS-14810:


Checked the four failed unit tests; they are all related to OOM. Re-ran them 
locally and they all passed, so the failures do not seem related to this change. FYI.

> review FSNameSystem editlog sync
> 
>
> Key: HDFS-14810
> URL: https://issues.apache.org/jira/browse/HDFS-14810
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-14810.001.patch, HDFS-14810.002.patch, 
> HDFS-14810.003.patch
>
>
> refactor and unified type of edit log sync in FSNamesystem as HDFS-11246 
> mentioned.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-14706) Checksums are not checked if block meta file is less than 7 bytes

2019-09-02 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HDFS-14706.

Resolution: Fixed

Done. Reverted the commits and pushed 08 patch to trunk branch-3.2 and 
branch-3.1.

> Checksums are not checked if block meta file is less than 7 bytes
> -
>
> Key: HDFS-14706
> URL: https://issues.apache.org/jira/browse/HDFS-14706
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14706.001.patch, HDFS-14706.002.patch, 
> HDFS-14706.003.patch, HDFS-14706.004.patch, HDFS-14706.005.patch, 
> HDFS-14706.006.patch, HDFS-14706.007.patch, HDFS-14706.008.patch
>
>
> If a block and its meta file are corrupted in a certain way, the corruption 
> can go unnoticed by a client, and invalid data will be returned to the reader.
> The meta file is expected to always have a header of 7 bytes and then a 
> series of checksums depending on the length of the block.
> If the meta file gets corrupted in such a way that it is between zero and 
> 7 bytes in length, then the header is incomplete. In 
> BlockSender.java the logic checks whether the meta file length is at least the 
> size of the header; if it is not, it does not error out but instead returns a 
> NULL checksum type to the client.
> https://github.com/apache/hadoop/blob/b77761b0e37703beb2c033029e4c0d5ad1dce794/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java#L327-L357
> If the client receives a NULL checksum type, it will not validate checksums 
> at all, and even corrupted data will be returned to the reader. This means 
> the corruption will go unnoticed and HDFS will never repair it. Even the Volume 
> Scanner will not notice the corruption, as the checksums are silently ignored.
> Additionally, if the meta file does have enough bytes that it attempts to load 
> the header, and the header is corrupted such that it is not valid, it can 
> cause the datanode Volume Scanner to exit with an exception like the 
> following:
> {code}
> 2019-08-06 18:16:39,151 ERROR datanode.VolumeScanner: 
> VolumeScanner(/tmp/hadoop-sodonnell/dfs/data, 
> DS-7f103313-61ba-4d37-b63d-e8cf7d2ed5f7) exiting because of exception 
> java.lang.IllegalArgumentException: id=51 out of range [0, 5)
>   at 
> org.apache.hadoop.util.DataChecksum$Type.valueOf(DataChecksum.java:76)
>   at 
> org.apache.hadoop.util.DataChecksum.newDataChecksum(DataChecksum.java:167)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:173)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:139)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:153)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.loadLastPartialChunkChecksum(FsVolumeImpl.java:1140)
>   at 
> org.apache.hadoop.hdfs.server.datanode.FinalizedReplica.loadLastPartialChunkChecksum(FinalizedReplica.java:157)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.getPartialChunkChecksumForFinalized(BlockSender.java:451)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.(BlockSender.java:266)
>   at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.scanBlock(VolumeScanner.java:446)
>   at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.runLoop(VolumeScanner.java:558)
>   at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:633)
> 2019-08-06 18:16:39,152 INFO datanode.VolumeScanner: 
> VolumeScanner(/tmp/hadoop-sodonnell/dfs/data, 
> DS-7f103313-61ba-4d37-b63d-e8cf7d2ed5f7) exiting.
> {code}
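
A minimal, self-contained sketch of the guard the description argues for: if the
meta file is shorter than the fixed 7-byte header, fail fast instead of silently
falling back to a NULL checksum. The class and exception message here are
illustrative, not the committed patch.

{code:java}
import java.io.File;
import java.io.IOException;

final class MetaHeaderGuard {
  // The block meta header is a fixed 7 bytes (version + checksum descriptor).
  static final int HEADER_LEN = 7;

  static void checkMetaFile(File metaFile) throws IOException {
    long len = metaFile.length();
    if (len > 0 && len < HEADER_LEN) {
      // Previously this case fell through and produced a NULL checksum type,
      // disabling verification; failing fast lets the replica be reported
      // corrupt and re-replicated from a good copy.
      throw new IOException("Corrupt meta header: " + metaFile + " has length "
          + len + ", shorter than the " + HEADER_LEN + "-byte header");
    }
  }
}
{code}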



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-14706) Checksums are not checked if block meta file is less than 7 bytes

2019-09-02 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reopened HDFS-14706:


Reopen. For future reference, I committed the wrong patch.

I'm going to revert and re-apply the patch. Thanks [~pifta] and [~sodonnell] 
for figuring this out.

> Checksums are not checked if block meta file is less than 7 bytes
> -
>
> Key: HDFS-14706
> URL: https://issues.apache.org/jira/browse/HDFS-14706
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14706.001.patch, HDFS-14706.002.patch, 
> HDFS-14706.003.patch, HDFS-14706.004.patch, HDFS-14706.005.patch, 
> HDFS-14706.006.patch, HDFS-14706.007.patch, HDFS-14706.008.patch
>
>
> If a block and its meta file are corrupted in a certain way, the corruption 
> can go unnoticed by a client, and invalid data will be returned to the reader.
> The meta file is expected to always have a header of 7 bytes and then a 
> series of checksums depending on the length of the block.
> If the meta file gets corrupted in such a way that it is between zero and 
> 7 bytes in length, then the header is incomplete. In 
> BlockSender.java the logic checks whether the meta file length is at least the 
> size of the header; if it is not, it does not error out but instead returns a 
> NULL checksum type to the client.
> https://github.com/apache/hadoop/blob/b77761b0e37703beb2c033029e4c0d5ad1dce794/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java#L327-L357
> If the client receives a NULL checksum type, it will not validate checksums 
> at all, and even corrupted data will be returned to the reader. This means 
> the corruption will go unnoticed and HDFS will never repair it. Even the Volume 
> Scanner will not notice the corruption, as the checksums are silently ignored.
> Additionally, if the meta file does have enough bytes that it attempts to load 
> the header, and the header is corrupted such that it is not valid, it can 
> cause the datanode Volume Scanner to exit with an exception like the 
> following:
> {code}
> 2019-08-06 18:16:39,151 ERROR datanode.VolumeScanner: 
> VolumeScanner(/tmp/hadoop-sodonnell/dfs/data, 
> DS-7f103313-61ba-4d37-b63d-e8cf7d2ed5f7) exiting because of exception 
> java.lang.IllegalArgumentException: id=51 out of range [0, 5)
>   at 
> org.apache.hadoop.util.DataChecksum$Type.valueOf(DataChecksum.java:76)
>   at 
> org.apache.hadoop.util.DataChecksum.newDataChecksum(DataChecksum.java:167)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:173)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:139)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:153)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.loadLastPartialChunkChecksum(FsVolumeImpl.java:1140)
>   at 
> org.apache.hadoop.hdfs.server.datanode.FinalizedReplica.loadLastPartialChunkChecksum(FinalizedReplica.java:157)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.getPartialChunkChecksumForFinalized(BlockSender.java:451)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.(BlockSender.java:266)
>   at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.scanBlock(VolumeScanner.java:446)
>   at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.runLoop(VolumeScanner.java:558)
>   at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:633)
> 2019-08-06 18:16:39,152 INFO datanode.VolumeScanner: 
> VolumeScanner(/tmp/hadoop-sodonnell/dfs/data, 
> DS-7f103313-61ba-4d37-b63d-e8cf7d2ed5f7) exiting.
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14699) Erasure Coding: Can NOT trigger the reconstruction when have the dup internal blocks and missing one internal block

2019-09-02 Thread Zhao Yi Ming (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920940#comment-16920940
 ] 

Zhao Yi Ming edited comment on HDFS-14699 at 9/2/19 3:47 PM:
-

[~ayushtkn] Sorry for the misunderstanding, and thanks for your explanation!

I changed the code to avoid recalculating the block index. In the UT I had made a 
small mistake: it did not increment pending replications on the node that holds the 
duplicate EC internal block. The UT code is now updated as well. Could you help 
review again? Thanks!

 


was (Author: zhaoyim):
[~ayushtkn]   Sorry for the misunderstand! and Thanks for your explain!  

I changed the code to avoid recalculating the block index. For the UT I make a 
little mistake -  

NOT increment Pending Replications in the node which have the dup EC internal 
block. Now The UT code also updated. Could you help review again? Thanks!

 

> Erasure Coding: Can NOT trigger the reconstruction when have the dup internal 
> blocks and missing one internal block
> ---
>
> Key: HDFS-14699
> URL: https://issues.apache.org/jira/browse/HDFS-14699
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.2.0, 3.1.1, 3.3.0
>Reporter: Zhao Yi Ming
>Assignee: Zhao Yi Ming
>Priority: Critical
>  Labels: patch
> Attachments: HDFS-14699.00.patch, HDFS-14699.01.patch, 
> HDFS-14699.02.patch, HDFS-14699.03.patch, HDFS-14699.04.patch, 
> HDFS-14699.05.patch, image-2019-08-20-19-58-51-872.png, 
> image-2019-09-02-17-51-46-742.png
>
>
> We tried the EC function on an 80-node cluster with Hadoop 3.1.1 and hit the 
> same scenario as described in https://issues.apache.org/jira/browse/HDFS-8881. 
> Following are our testing steps; hope they are helpful. (The following DNs hold 
> the internal blocks under test.)
>  # we customized a new 10-2-1024k policy and used it on a path; now we have 12 
> internal blocks (12 live blocks)
>  # decommissioned one DN; after the decommission completed we have 13 
> internal blocks (12 live blocks and 1 decommissioned block)
>  # then shut down one DN that did not hold the same block id as the 
> decommissioned block; now we have 12 internal blocks (11 live blocks and 1 
> decommissioned block)
>  # after waiting about 600s (before the heartbeat arrives), recommissioned the 
> decommissioned DN; now we have 12 internal blocks (11 live blocks and 1 
> duplicate block)
>  # then EC does not reconstruct the missing block
> We think this is a critical issue for using the EC function in a production 
> env. Could you help? Thanks a lot!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14699) Erasure Coding: Can NOT trigger the reconstruction when have the dup internal blocks and missing one internal block

2019-09-02 Thread Zhao Yi Ming (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao Yi Ming updated HDFS-14699:

Attachment: HDFS-14699.05.patch
Status: Patch Available  (was: Open)

[~ayushtkn] Sorry for the misunderstanding, and thanks for your explanation!

I changed the code to avoid recalculating the block index. In the UT I had made a 
small mistake: it did not increment pending replications on the node that holds the 
duplicate EC internal block. The UT code is now updated as well. Could you help 
review again? Thanks!

 

> Erasure Coding: Can NOT trigger the reconstruction when have the dup internal 
> blocks and missing one internal block
> ---
>
> Key: HDFS-14699
> URL: https://issues.apache.org/jira/browse/HDFS-14699
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.1.1, 3.2.0, 3.3.0
>Reporter: Zhao Yi Ming
>Assignee: Zhao Yi Ming
>Priority: Critical
>  Labels: patch
> Attachments: HDFS-14699.00.patch, HDFS-14699.01.patch, 
> HDFS-14699.02.patch, HDFS-14699.03.patch, HDFS-14699.04.patch, 
> HDFS-14699.05.patch, image-2019-08-20-19-58-51-872.png, 
> image-2019-09-02-17-51-46-742.png
>
>
> We tried the EC function on an 80-node cluster with Hadoop 3.1.1 and hit the 
> same scenario as described in https://issues.apache.org/jira/browse/HDFS-8881. 
> Following are our testing steps; hope they are helpful. (The following DNs hold 
> the internal blocks under test.)
>  # we customized a new 10-2-1024k policy and used it on a path; now we have 12 
> internal blocks (12 live blocks)
>  # decommissioned one DN; after the decommission completed we have 13 
> internal blocks (12 live blocks and 1 decommissioned block)
>  # then shut down one DN that did not hold the same block id as the 
> decommissioned block; now we have 12 internal blocks (11 live blocks and 1 
> decommissioned block)
>  # after waiting about 600s (before the heartbeat arrives), recommissioned the 
> decommissioned DN; now we have 12 internal blocks (11 live blocks and 1 
> duplicate block)
>  # then EC does not reconstruct the missing block
> We think this is a critical issue for using the EC function in a production 
> env. Could you help? Thanks a lot!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HDFS-14699) Erasure Coding: Can NOT trigger the reconstruction when have the dup internal blocks and missing one internal block

2019-09-02 Thread Zhao Yi Ming (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao Yi Ming updated HDFS-14699:

Comment: was deleted

(was:  

 )

> Erasure Coding: Can NOT trigger the reconstruction when have the dup internal 
> blocks and missing one internal block
> ---
>
> Key: HDFS-14699
> URL: https://issues.apache.org/jira/browse/HDFS-14699
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.2.0, 3.1.1, 3.3.0
>Reporter: Zhao Yi Ming
>Assignee: Zhao Yi Ming
>Priority: Critical
>  Labels: patch
> Attachments: HDFS-14699.00.patch, HDFS-14699.01.patch, 
> HDFS-14699.02.patch, HDFS-14699.03.patch, HDFS-14699.04.patch, 
> image-2019-08-20-19-58-51-872.png, image-2019-09-02-17-51-46-742.png
>
>
> We tried the EC function on an 80-node cluster with Hadoop 3.1.1 and hit the 
> same scenario as described in https://issues.apache.org/jira/browse/HDFS-8881. 
> Following are our testing steps; hope they are helpful. (The following DNs hold 
> the internal blocks under test.)
>  # we customized a new 10-2-1024k policy and used it on a path; now we have 12 
> internal blocks (12 live blocks)
>  # decommissioned one DN; after the decommission completed we have 13 
> internal blocks (12 live blocks and 1 decommissioned block)
>  # then shut down one DN that did not hold the same block id as the 
> decommissioned block; now we have 12 internal blocks (11 live blocks and 1 
> decommissioned block)
>  # after waiting about 600s (before the heartbeat arrives), recommissioned the 
> decommissioned DN; now we have 12 internal blocks (11 live blocks and 1 
> duplicate block)
>  # then EC does not reconstruct the missing block
> We think this is a critical issue for using the EC function in a production 
> env. Could you help? Thanks a lot!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14699) Erasure Coding: Can NOT trigger the reconstruction when have the dup internal blocks and missing one internal block

2019-09-02 Thread Zhao Yi Ming (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao Yi Ming updated HDFS-14699:

Status: Open  (was: Patch Available)

 

 

> Erasure Coding: Can NOT trigger the reconstruction when have the dup internal 
> blocks and missing one internal block
> ---
>
> Key: HDFS-14699
> URL: https://issues.apache.org/jira/browse/HDFS-14699
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.1.1, 3.2.0, 3.3.0
>Reporter: Zhao Yi Ming
>Assignee: Zhao Yi Ming
>Priority: Critical
>  Labels: patch
> Attachments: HDFS-14699.00.patch, HDFS-14699.01.patch, 
> HDFS-14699.02.patch, HDFS-14699.03.patch, HDFS-14699.04.patch, 
> image-2019-08-20-19-58-51-872.png, image-2019-09-02-17-51-46-742.png
>
>
> We tried the EC function on an 80-node cluster with Hadoop 3.1.1 and hit the 
> same scenario as described in https://issues.apache.org/jira/browse/HDFS-8881. 
> Following are our testing steps; hope they are helpful. (The following DNs hold 
> the internal blocks under test.)
>  # we customized a new 10-2-1024k policy and used it on a path; now we have 12 
> internal blocks (12 live blocks)
>  # decommissioned one DN; after the decommission completed we have 13 
> internal blocks (12 live blocks and 1 decommissioned block)
>  # then shut down one DN that did not hold the same block id as the 
> decommissioned block; now we have 12 internal blocks (11 live blocks and 1 
> decommissioned block)
>  # after waiting about 600s (before the heartbeat arrives), recommissioned the 
> decommissioned DN; now we have 12 internal blocks (11 live blocks and 1 
> duplicate block)
>  # then EC does not reconstruct the missing block
> We think this is a critical issue for using the EC function in a production 
> env. Could you help? Thanks a lot!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6524) Choosing datanode retries times considering with block replica number

2019-09-02 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-6524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920925#comment-16920925
 ] 

Lisheng Sun commented on HDFS-6524:
---

hi [~jojochuang] [~ayushtkn], could you help review this issue? 
Thank you.

> Choosing datanode  retries times considering with block replica number
> --
>
> Key: HDFS-6524
> URL: https://issues.apache.org/jira/browse/HDFS-6524
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha1
>Reporter: Liang Xie
>Assignee: Lisheng Sun
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6524.001.patch, HDFS-6524.002.patch, HDFS-6524.txt
>
>
> Currently chooseDataNode() retries according to the setting 
> dfsClientConf.maxBlockAcquireFailures, which by default is 3 
> (DFS_CLIENT_MAX_BLOCK_ACQUIRE_FAILURES_DEFAULT = 3). It would be better to 
> have another option based on the block replication factor, for example on a 
> cluster that keeps only two replicas per block, or a Reed-Solomon encoding 
> solution with a single replica. That helps reduce long-tail latency.
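
A tiny sketch of the idea, assuming the client simply caps its retry budget by the
file's replication factor (names are illustrative, not the DFSClient change):

{code:java}
final class RetryPolicySketch {
  /** Illustrative: cap datanode-acquire retries by the block's replication factor. */
  static int maxBlockAcquireFailures(int configuredMax, int replication) {
    // With a 2-replica file (or a single-replica erasure-coded layout) the
    // default of 3 retries only adds tail latency: never retry more times
    // than there are replicas left to try.
    return Math.min(configuredMax, Math.max(1, replication));
  }
}
{code}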



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14798) Synchronize invalidateBlocks in DatanodeDescriptor

2019-09-02 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920924#comment-16920924
 ] 

Ayush Saxena commented on HDFS-14798:
-

Ideally I believe we should refrain from changes we don't have a concrete reason 
to make. A change should either fix a bug (an actual bug, not just a possible one, 
and properly covered with a test) or be an improvement, and this doesn't seem to 
be either.
Adding synchronization may look like a low-cost preventive measure, but it can have 
its own repercussions; "premature optimization is the root of all evil", as the 
saying goes. So better we hold it up for some time. I will try writing a UT to 
check whether it can actually cause trouble; that is the most I can assure. :)

> Synchronize invalidateBlocks in DatanodeDescriptor
> --
>
> Key: HDFS-14798
> URL: https://issues.apache.org/jira/browse/HDFS-14798
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: hemanthboyina
>Priority: Minor
>  Labels: n00b, newbie
> Attachments: HDFS-14798.001.patch
>
>
> {code:java|title=DatanodeDescriptor.java}
> public void resetBlocks() {
>   ...
>   this.invalidateBlocks.clear();
>   ...
> }
> public void clearBlockQueues() {
>   synchronized (invalidateBlocks) {
> this.invalidateBlocks.clear();
>   }
>   ...
> }
> {code}
> It may not be strictly necessary, but why risk it? The invalidateBlocks 
> should be protected in {{resetBlocks()}} just like it is in 
> {{clearBlockQueues()}}.
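
A minimal stand-in sketch of the proposed change, mirroring the guard that
clearBlockQueues() already uses (this is not the real DatanodeDescriptor, just the
locking pattern):

{code:java}
import java.util.HashSet;
import java.util.Set;

final class DatanodeDescriptorSketch {
  private final Set<Long> invalidateBlocks = new HashSet<>();

  public void resetBlocks() {
    // Same guard clearBlockQueues() already takes, so a concurrent reader of
    // invalidateBlocks never observes the clear() mid-flight.
    synchronized (invalidateBlocks) {
      invalidateBlocks.clear();
    }
  }

  public void clearBlockQueues() {
    synchronized (invalidateBlocks) {
      invalidateBlocks.clear();
    }
  }
}
{code}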



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14751) TestNameNodeMetadataConsistency#testGenerationStampInFuture fail in trunk

2019-09-02 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920915#comment-16920915
 ] 

Lisheng Sun commented on HDFS-14751:


hi [~seanlook], I agree with your idea.

We can synchronize on diffs to guarantee it cannot be modified by multiple threads 
during the scan.
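
A self-contained sketch of that idea: take a snapshot of diffs under the same lock
the writers use, so the reconcile loop never iterates a structure another thread is
mutating. Names are illustrative, not the actual DirectoryScanner fix.

{code:java}
import java.util.ArrayList;
import java.util.List;

final class ReconcileSketch {
  private final List<String> diffs = new ArrayList<>();

  void addDiff(String d) {
    synchronized (diffs) {
      diffs.add(d);
    }
  }

  void reconcile() {
    final List<String> snapshot;
    synchronized (diffs) {
      snapshot = new ArrayList<>(diffs); // copy while holding the lock
    }
    for (String d : snapshot) {          // safe: nothing mutates the copy
      System.out.println("reconciling diff: " + d);
    }
  }
}
{code}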

> TestNameNodeMetadataConsistency#testGenerationStampInFuture fail in trunk
> -
>
> Key: HDFS-14751
> URL: https://issues.apache.org/jira/browse/HDFS-14751
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-14751.001.patch
>
>
> {code:java}
> [ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 21.693 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency
> [ERROR] 
> testGenerationStampInFuture(org.apache.hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency)
>   Time elapsed: 7.572 s  <<< ERROR!
> java.util.ConcurrentModificationException
>   at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:909)
>   at java.util.ArrayList$Itr.next(ArrayList.java:859)
>   at 
> com.google.common.collect.AbstractMapBasedMultimap$Itr.next(AbstractMapBasedMultimap.java:1153)
>   at 
> java.util.Collections$UnmodifiableCollection$1.next(Collections.java:1044)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile(DirectoryScanner.java:433)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNodeTestUtils.runDirectoryScanner(DataNodeTestUtils.java:202)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency.testGenerationStampInFuture(TestNameNodeMetadataConsistency.java:92)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}
> Ref:[https://builds.apache.org/job/PreCommit-HDFS-Build/27567/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt]



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10663) Comparison of two System.nanoTime methods return values are against standard java recommendations.

2019-09-02 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-10663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-10663:
-
Attachment: HDFS-10663.001.patch
Status: Patch Available  (was: Open)

> Comparison of two System.nanoTime methods return values are against standard 
> java recommendations.
> --
>
> Key: HDFS-10663
> URL: https://issues.apache.org/jira/browse/HDFS-10663
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Rushabh S Shah
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-10663.001.patch
>
>
> I was chasing a bug where the namenode didn't declare a datanode dead even 
> when the last contact time was 2.5 hours before.
> Before I could debug, the datanode was re-imaged (all the logs were deleted) 
> and the namenode was restarted and upgraded to new software.
> While debugging, I came across this heartbeat check code, where the comparison 
> of two System.nanoTime() values goes against Java's recommended way.
> Here is the hadoop code:
> {code:title=DatanodeManager.java|borderStyle=solid}
>   /** Is the datanode dead? */
>   boolean isDatanodeDead(DatanodeDescriptor node) {
> return (node.getLastUpdateMonotonic() <
> (monotonicNow() - heartbeatExpireInterval));
>   }
> {code}
> The monotonicNow() is calculated as:
> {code:title=Time.java|borderStyle=solid}
>   public static long monotonicNow() {
>     final long NANOSECONDS_PER_MILLISECOND = 1000000;
>     return System.nanoTime() / NANOSECONDS_PER_MILLISECOND;
>   }
> {code}
> As per the javadoc of System.nanoTime(), it is clearly stated that we should 
> subtract the two nanoTime outputs to compare them:
> {noformat}
> To compare two nanoTime values
>  long t0 = System.nanoTime();
>  ...
>  long t1 = System.nanoTime();
> one should use t1 - t0 < 0, not t1 < t0, because of the possibility of 
> numerical overflow.
> {noformat}
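
A sketch of the comparison rewritten the way the javadoc recommends, i.e. comparing
the elapsed difference rather than two absolute readings; the surrounding names are
illustrative, not the DatanodeManager patch:

{code:java}
final class HeartbeatCheckSketch {
  /** Overflow-safe "has the last heartbeat expired?" check. */
  static boolean isDatanodeDead(long lastUpdateMonotonic, long now,
                                long heartbeatExpireInterval) {
    // Subtract first and compare the difference, instead of comparing two
    // absolute monotonic readings, so wrap-around still behaves correctly.
    return (now - lastUpdateMonotonic) > heartbeatExpireInterval;
  }
}
{code}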



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-10663) Comparison of two System.nanoTime methods return values are against standard java recommendations.

2019-09-02 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-10663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina reassigned HDFS-10663:


Assignee: hemanthboyina

> Comparison of two System.nanoTime methods return values are against standard 
> java recommendations.
> --
>
> Key: HDFS-10663
> URL: https://issues.apache.org/jira/browse/HDFS-10663
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Rushabh S Shah
>Assignee: hemanthboyina
>Priority: Major
>
> I was chasing a bug where the namenode didn't declare a datanode dead even 
> when the last contact time was 2.5 hours before.
> Before I could debug, the datanode was re-imaged (all the logs were deleted) 
> and the namenode was restarted and upgraded to new software.
> While debugging, I came across this heartbeat check code, where the comparison 
> of two System.nanoTime() values goes against Java's recommended way.
> Here is the hadoop code:
> {code:title=DatanodeManager.java|borderStyle=solid}
>   /** Is the datanode dead? */
>   boolean isDatanodeDead(DatanodeDescriptor node) {
> return (node.getLastUpdateMonotonic() <
> (monotonicNow() - heartbeatExpireInterval));
>   }
> {code}
> The monotonicNow() is calculated as:
> {code:title=Time.java|borderStyle=solid}
>   public static long monotonicNow() {
>     final long NANOSECONDS_PER_MILLISECOND = 1000000;
>     return System.nanoTime() / NANOSECONDS_PER_MILLISECOND;
>   }
> {code}
> As per the javadoc of System.nanoTime(), it is clearly stated that we should 
> subtract the two nanoTime outputs to compare them:
> {noformat}
> To compare two nanoTime values
>  long t0 = System.nanoTime();
>  ...
>  long t1 = System.nanoTime();
> one should use t1 - t0 < 0, not t1 < t0, because of the possibility of 
> numerical overflow.
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10663) Comparison of two System.nanoTime methods return values are against standard java recommendations.

2019-09-02 Thread Rushabh S Shah (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-10663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920898#comment-16920898
 ] 

Rushabh S Shah commented on HDFS-10663:
---

[~hemanthboyina] please go ahead and assign it to yourself. Thanks for the 
interest.

> Comparison of two System.nanoTime methods return values are against standard 
> java recommendations.
> --
>
> Key: HDFS-10663
> URL: https://issues.apache.org/jira/browse/HDFS-10663
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Rushabh S Shah
>Priority: Major
>
> I was chasing a bug where the namenode didn't declare a datanode dead even 
> when the last contact time was 2.5 hours before.
> Before I could debug, the datanode was re-imaged (all the logs were deleted) 
> and the namenode was restarted and upgraded to new software.
> While debugging, I came across this heartbeat check code, where the comparison 
> of two System.nanoTime() values goes against Java's recommended way.
> Here is the hadoop code:
> {code:title=DatanodeManager.java|borderStyle=solid}
>   /** Is the datanode dead? */
>   boolean isDatanodeDead(DatanodeDescriptor node) {
> return (node.getLastUpdateMonotonic() <
> (monotonicNow() - heartbeatExpireInterval));
>   }
> {code}
> The monotonicNow() is calculated as:
> {code:title=Time.java|borderStyle=solid}
>   public static long monotonicNow() {
>     final long NANOSECONDS_PER_MILLISECOND = 1000000;
>     return System.nanoTime() / NANOSECONDS_PER_MILLISECOND;
>   }
> {code}
> As per the javadoc of System.nanoTime(), it is clearly stated that we should 
> subtract the two nanoTime outputs to compare them:
> {noformat}
> To compare two nanoTime values
>  long t0 = System.nanoTime();
>  ...
>  long t1 = System.nanoTime();
> one should use t1 - t0 < 0, not t1 < t0, because of the possibility of 
> numerical overflow.
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-10663) Comparison of two System.nanoTime methods return values are against standard java recommendations.

2019-09-02 Thread Rushabh S Shah (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-10663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah reassigned HDFS-10663:
-

Assignee: (was: Rushabh S Shah)

> Comparison of two System.nanoTime methods return values are against standard 
> java recommendations.
> --
>
> Key: HDFS-10663
> URL: https://issues.apache.org/jira/browse/HDFS-10663
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Rushabh S Shah
>Priority: Major
>
> I was chasing a bug where the namenode didn't declare a datanode dead even 
> when the last contact time was 2.5 hours before.
> Before I could debug, the datanode was re-imaged (all the logs were deleted) 
> and the namenode was restarted and upgraded to new software.
> While debugging, I came across this heartbeat check code, where the comparison 
> of two System.nanoTime() values goes against Java's recommended way.
> Here is the hadoop code:
> {code:title=DatanodeManager.java|borderStyle=solid}
>   /** Is the datanode dead? */
>   boolean isDatanodeDead(DatanodeDescriptor node) {
> return (node.getLastUpdateMonotonic() <
> (monotonicNow() - heartbeatExpireInterval));
>   }
> {code}
> The monotonicNow() is calculated as:
> {code:title=Time.java|borderStyle=solid}
>   public static long monotonicNow() {
>     final long NANOSECONDS_PER_MILLISECOND = 1000000;
>     return System.nanoTime() / NANOSECONDS_PER_MILLISECOND;
>   }
> {code}
> As per the javadoc of System.nanoTime(), it is clearly stated that we should 
> subtract the two nanoTime outputs to compare them:
> {noformat}
> To compare two nanoTime values
>  long t0 = System.nanoTime();
>  ...
>  long t1 = System.nanoTime();
> one should use t1 - t0 < 0, not t1 < t0, because of the possibility of 
> numerical overflow.
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14812) RBF: MountTableRefresherService should load cache when refresh

2019-09-02 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920894#comment-16920894
 ] 

Ayush Saxena commented on HDFS-14812:
-

Thanx [~xuzq_zander] for the report. I need to check the code fully.

I think it was there but was removed following this comment: 
https://issues.apache.org/jira/browse/HDFS-13443?focusedCommentId=16471826&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16471826
What problem are you facing?
Well [~arshad.mohammad], can you give it a check once?

> RBF: MountTableRefresherService should load cache when refresh
> --
>
> Key: HDFS-14812
> URL: https://issues.apache.org/jira/browse/HDFS-14812
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-14812-trunk-001.patch
>
>
> MountTableRefresherService should load routerStore when refresh mount table.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10663) Comparison of two System.nanoTime methods return values are against standard java recommendations.

2019-09-02 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-10663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920888#comment-16920888
 ] 

hemanthboyina commented on HDFS-10663:
--

hi [~shahrs87], can I take up this Jira?

> Comparison of two System.nanoTime methods return values are against standard 
> java recommendations.
> --
>
> Key: HDFS-10663
> URL: https://issues.apache.org/jira/browse/HDFS-10663
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Major
>
> I was chasing a bug where the namenode didn't declare a datanode dead even 
> when the last contact time was 2.5 hours before.
> Before I could debug, the datanode was re-imaged (all the logs were deleted) 
> and the namenode was restarted and upgraded to new software.
> While debugging, I came across this heartbeat check code, where the comparison 
> of two System.nanoTime() values goes against Java's recommended way.
> Here is the hadoop code:
> {code:title=DatanodeManager.java|borderStyle=solid}
>   /** Is the datanode dead? */
>   boolean isDatanodeDead(DatanodeDescriptor node) {
> return (node.getLastUpdateMonotonic() <
> (monotonicNow() - heartbeatExpireInterval));
>   }
> {code}
> The monotonicNow() is calculated as:
> {code:title=Time.java|borderStyle=solid}
>   public static long monotonicNow() {
>     final long NANOSECONDS_PER_MILLISECOND = 1000000;
>     return System.nanoTime() / NANOSECONDS_PER_MILLISECOND;
>   }
> {code}
> As per the javadoc of System.nanoTime(), it is clearly stated that we should 
> subtract the two nanoTime outputs to compare them:
> {noformat}
> To compare two nanoTime values
>  long t0 = System.nanoTime();
>  ...
>  long t1 = System.nanoTime();
> one should use t1 - t0 < 0, not t1 < t0, because of the possibility of 
> numerical overflow.
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10348) Namenode report bad block method doesn't check whether the block belongs to datanode before adding it to corrupt replicas map.

2019-09-02 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-10348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920883#comment-16920883
 ] 

hemanthboyina commented on HDFS-10348:
--

Attached a patch, please review [~shv].

> Namenode report bad block method doesn't check whether the block belongs to 
> datanode before adding it to corrupt replicas map.
> --
>
> Key: HDFS-10348
> URL: https://issues.apache.org/jira/browse/HDFS-10348
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.0, 3.1.2
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Major
> Attachments: HDFS-10348-1.patch, HDFS-10348.003.patch, 
> HDFS-10348.patch
>
>
> Namenode (via the report bad block method) doesn't check whether the block 
> belongs to the datanode before adding it to the corrupt replicas map.
> In one of our clusters we found that there were 3 lingering corrupt blocks.
> It happened in the following order.
> 1. Two clients called getBlockLocations for a particular file.
> 2. Client C1 tried to open the file, encountered a checksum error from 
> node N3, and reported the bad block (blk1) to the namenode.
> 3. Namenode added node N3 and block blk1 to the corrupt replicas map and 
> asked one of the good nodes (one of the other 2 nodes) to replicate the block 
> to another node N4.
> 4. After receiving the block, N4 sent an IBR (with RECEIVED_BLOCK) to the 
> namenode.
> 5. Namenode removed the block and node N3 from the corrupt replicas map.
> It also removed N3's storage from the triplets and queued an invalidate 
> request for N3.
> 6. In the meantime, client C2 tried to open the file and the request went to 
> node N3.
> C2 also encountered the checksum exception and reported the bad block to the 
> namenode.
> 7. Namenode added the corrupt block blk1 and node N3 to the corrupt replicas 
> map without confirming whether node N3 has the block or not.
> After deleting the block, N3 sent an IBR (with DELETED) and the namenode 
> simply ignored the report since N3's storage was no longer in the 
> triplets (from step 5).
> We took the node out of rotation, but the block was still present only in the 
> corruptReplicasMap. 
> This is because, on removing a node, we only go through the blocks that are 
> present in the triplets for that datanode.
> [~kshukla]'s patch fixed this bug via 
> https://issues.apache.org/jira/browse/HDFS-9958.
> But I think the following check should be made in the 
> BlockManager#markBlockAsCorrupt instead of 
> BlockManager#findAndMarkBlockAsCorrupt.
> {noformat}
> if (storage == null) {
>   storage = storedBlock.findStorageInfo(node);
> }
> if (storage == null) {
>   blockLog.debug("BLOCK* findAndMarkBlockAsCorrupt: {} not found on {}",
>   blk, dn);
>   return;
> }
> {noformat}
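
A schematic sketch of where the description wants that guard to live; the nested
types are stand-ins, not the real BlockManager signature:

{code:java}
final class MarkCorruptSketch {
  interface Storage {}
  interface Datanode {}
  interface StoredBlock { Storage findStorageInfo(Datanode node); }

  void markBlockAsCorrupt(StoredBlock storedBlock, Datanode node, Storage storage) {
    if (storage == null) {
      storage = storedBlock.findStorageInfo(node);
    }
    if (storage == null) {
      // The reporting datanode holds no replica of this block: skip it rather
      // than inserting a phantom entry into corruptReplicasMap.
      return;
    }
    // ... mark the replica on 'storage' as corrupt and schedule re-replication ...
  }
}
{code}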



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10348) Namenode report bad block method doesn't check whether the block belongs to datanode before adding it to corrupt replicas map.

2019-09-02 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-10348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-10348:
-
   Attachment: HDFS-10348.003.patch
Affects Version/s: 3.1.2
   Status: Patch Available  (was: Open)

> Namenode report bad block method doesn't check whether the block belongs to 
> datanode before adding it to corrupt replicas map.
> --
>
> Key: HDFS-10348
> URL: https://issues.apache.org/jira/browse/HDFS-10348
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.2, 2.7.0
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Major
> Attachments: HDFS-10348-1.patch, HDFS-10348.003.patch, 
> HDFS-10348.patch
>
>
> Namenode (via report bad block nethod) doesn't check whether the block 
> belongs to the datanode before it adds to corrupt replicas map.
> In one of our cluster we found that there were 3 lingering corrupt blocks.
> It happened in the following order.
> 1. Two clients called getBlockLocations for a particular file.
> 2. Client C1 tried to open the file and encountered checksum error from   
> node N3 and it reported bad block (blk1) to the namenode.
> 3. Namenode added that node N3 and block blk1  to corrrupt replicas map   and 
> ask one of the good node (one of the 2 nodes) to replicate the block to 
> another node N4.
> 4. After receiving the block, N4 sends an IBR (with RECEIVED_BLOCK) to 
> namenode.
> 5. Namenode removed the block and node N3 from corrupt replicas map.
>It also removed N3's storage from triplets and queued an invalidate 
> request for N3.
> 6. In the mean time, Client C2 tries to open the file and the request went to 
> node N3.
>C2 also encountered the checksum exception and reported bad block to 
> namenode.
> 7. Namenode added the corrupt block blk1 and node N3 to the corrupt replicas 
> map without confirming whether node N3 has the block or not.
> After deleting the block, N3 sends an IBR (with DELETED) and the namenode 
> simply ignores the report since the N3's storage is no longer in the 
> triplets(from step 5)
> We took the node out of rotation, but still the block was present only in the 
> corruptReplicasMap.
> Since, on removing the node, we only go through the blocks which are present
> in the triplets for a given datanode.
> [~kshukla]'s patch fixed this bug via 
> https://issues.apache.org/jira/browse/HDFS-9958.
> But I think the following check should be made in the 
> BlockManager#markBlockAsCorrupt instead of 
> BlockManager#findAndMarkBlockAsCorrupt.
> {noformat}
> if (storage == null) {
>   storage = storedBlock.findStorageInfo(node);
> }
> if (storage == null) {
>   blockLog.debug("BLOCK* findAndMarkBlockAsCorrupt: {} not found on {}",
>   blk, dn);
>   return;
> }
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14812) RBF: MountTableRefresherService should load cache when refresh

2019-09-02 Thread xuzq (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuzq updated HDFS-14812:

Attachment: HDFS-14812-trunk-001.patch
  Assignee: xuzq
Status: Patch Available  (was: Open)

> RBF: MountTableRefresherService should load cache when refresh
> --
>
> Key: HDFS-14812
> URL: https://issues.apache.org/jira/browse/HDFS-14812
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-14812-trunk-001.patch
>
>
> MountTableRefresherService should load the routerStore when refreshing the mount table.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14812) RBF: MountTableRefresherService should load cache when refresh

2019-09-02 Thread xuzq (Jira)
xuzq created HDFS-14812:
---

 Summary: RBF: MountTableRefresherService should load cache when 
refresh
 Key: HDFS-14812
 URL: https://issues.apache.org/jira/browse/HDFS-14812
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: xuzq


MountTableRefresherService should load the routerStore when refreshing the mount table.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12904) Add DataTransferThrottler to the Datanode transfers

2019-09-02 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-12904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920879#comment-16920879
 ] 

Lisheng Sun commented on HDFS-12904:


hi [~elgoiri] Should we commit this patch to trunk? Thank you.

> Add DataTransferThrottler to the Datanode transfers
> ---
>
> Key: HDFS-12904
> URL: https://issues.apache.org/jira/browse/HDFS-12904
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode
>Reporter: Íñigo Goiri
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-12904.000.patch, HDFS-12904.001.patch, 
> HDFS-12904.002.patch, HDFS-12904.003.patch, HDFS-12904.005.patch, 
> HDFS-12904.006.patch
>
>
> The {{DataXceiverServer}} already uses throttling for the balancing. The 
> Datanode should also allow throttling the regular data transfers.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14699) Erasure Coding: Can NOT trigger the reconstruction when have the dup internal blocks and missing one internal block

2019-09-02 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920873#comment-16920873
 ] 

Ayush Saxena commented on HDFS-14699:
-

Let me try once again to explain:
Say I applied your patch, kept only the UT, and removed the fix you made; the UT
should fail, but it didn't.

In other words:
If, in your local build, you keep just this UT without your fix, it will pass,
whereas ideally it should fail.

> Erasure Coding: Can NOT trigger the reconstruction when have the dup internal 
> blocks and missing one internal block
> ---
>
> Key: HDFS-14699
> URL: https://issues.apache.org/jira/browse/HDFS-14699
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.2.0, 3.1.1, 3.3.0
>Reporter: Zhao Yi Ming
>Assignee: Zhao Yi Ming
>Priority: Critical
>  Labels: patch
> Attachments: HDFS-14699.00.patch, HDFS-14699.01.patch, 
> HDFS-14699.02.patch, HDFS-14699.03.patch, HDFS-14699.04.patch, 
> image-2019-08-20-19-58-51-872.png, image-2019-09-02-17-51-46-742.png
>
>
> We are tried the EC function on 80 node cluster with hadoop 3.1.1, we hit the 
> same scenario as you said https://issues.apache.org/jira/browse/HDFS-8881. 
> Following are our testing steps, hope it can helpful.(following DNs have the 
> testing internal blocks)
>  # we customized a new 10-2-1024k policy and use it on a path, now we have 12 
> internal block(12 live block)
>  # decommission one DN, after the decommission complete. now we have 13 
> internal block(12 live block and 1 decommission block)
>  # then shutdown one DN which did not have the same block id as 1 
> decommission block, now we have 12 internal block(11 live block and 1 
> decommission block)
>  # after wait for about 600s (before the heart beat come) commission the 
> decommissioned DN again, now we have 12 internal block(11 live block and 1 
> duplicate block)
>  # Then the EC is not reconstruct the missed block
> We think this is a critical issue for using the EC function in a production 
> env. Could you help? Thanks a lot!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11617) Datanode should delete the block from rbw directory when it finds duplicate in finalized directory.

2019-09-02 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-11617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920866#comment-16920866
 ] 

hemanthboyina commented on HDFS-11617:
--

[~jojochuang] [~brahmareddy] 

_if we have to choose between rbw replica and finalized replica (assuming size 
and genstamp are same), we should delete rbw replica, not finalized replica._

Should we go ahead with this?

> Datanode should delete the block from rbw directory when it finds duplicate 
> in finalized directory.
> ---
>
> Key: HDFS-11617
> URL: https://issues.apache.org/jira/browse/HDFS-11617
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.3
>Reporter: Rushabh S Shah
>Priority: Major
>
> Recently we had power failure event and we hit HDFS-5042.
> There were missing blocks but datanode had the copy of the block (and meta 
> file) in rbw directory.
> I manually copied the block and meta file to finalized directory and 
> restarted the datanode.
> But after restart, the block somehow got deleted from the finalized directory.
> So I think the datanode tried to resolve duplicate replicas and in process of 
> resolving it deleted the replica from finalized directory.
> In my opinion, if we have to choose between rbw replica and finalized replica 
> (assuming size and genstamp are same), we should delete rbw replica,  not 
> finalized replica.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14739) RBF: LS command for mount point shows wrong owner and permission information.

2019-09-02 Thread xuzq (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuzq updated HDFS-14739:

Attachment: HDFS-14739-trunk-003.patch

> RBF: LS command for mount point shows wrong owner and permission information.
> -
>
> Key: HDFS-14739
> URL: https://issues.apache.org/jira/browse/HDFS-14739
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-14739-trunk-001.patch, HDFS-14739-trunk-002.patch, 
> HDFS-14739-trunk-003.patch, image-2019-08-16-17-15-50-614.png, 
> image-2019-08-16-17-16-00-863.png, image-2019-08-16-17-16-34-325.png
>
>
> ||source||target namespace||destination||owner||group||permission||
> |/mnt|ns0|/mnt|mnt|mnt_group|755|
> |/mnt/test1|ns1|/mnt/test1|mnt_test1|mnt_test1_group|755|
> |/test1|ns1|/test1|test1|test1_group|755|
> When doing getListing("/mnt"), the owner of */mnt/test1* should be *mnt_test1*
> instead of *test1* in the result.
>  
> And if the mount table is as below, we should support getListing("/mnt") instead
> of throwing an IOException when dfs.federation.router.default.nameservice.enable is
> false.
> ||source||target namespace||destination||owner||group||permission||
> |/mnt/test1|ns0|/mnt/test1|test1|test1|755|
> |/mnt/test2|ns1|/mnt/test2|test2|test2|755|
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11616) Namenode doesn't mark the block as non-corrupt if the reason for corruption was INVALID_STATE

2019-09-02 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-11616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920863#comment-16920863
 ] 

hemanthboyina commented on HDFS-11616:
--

{code:java}
else { // COMPLETE block, same genstamp
  if (reportedState == ReplicaState.RBW) {
    .....
    LOG.info("Received an RBW replica for {} on {}: ignoring it, since "
        + "it is complete with the same genstamp", storedBlock, dn);
    return null;
  } else {
    return new BlockToMarkCorrupt(new Block(reported), storedBlock,
        "reported replica has invalid state " + reportedState,
        Reason.INVALID_STATE);
  }
}
{code}
We add the replica to the corrupt map with reason INVALID_STATE, but while
removing it from the corrupt map we only check for reason GENSTAMP_MISMATCH.

The bug still exists. Any suggestions, [~shahrs87] [~jojochuang]?
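
For illustration, a minimal sketch of the direction a fix could take. The helper
below is hypothetical and does not exist in BlockManager; identifier names follow
the snippet above, and only the idea matters: a FINALIZED replica with a matching
genstamp should be able to clear the corrupt flag whether the recorded reason was
GENSTAMP_MISMATCH or INVALID_STATE.
{code:java}
// Hedged sketch, not existing BlockManager code: a hypothetical helper the
// corrupt-map removal path could consult instead of testing only
// GENSTAMP_MISMATCH.
private static boolean canClearCorruptFlag(Reason reason,
    ReplicaState reportedState, boolean genstampMatches) {
  if (reportedState != ReplicaState.FINALIZED || !genstampMatches) {
    return false;
  }
  // A finalized replica with a matching genstamp is consistent again, whether
  // it was flagged for a genstamp mismatch or an invalid state.
  return reason == Reason.GENSTAMP_MISMATCH
      || reason == Reason.INVALID_STATE;
}
{code}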

> Namenode doesn't mark the block as non-corrupt if the reason for corruption 
> was INVALID_STATE
> -
>
> Key: HDFS-11616
> URL: https://issues.apache.org/jira/browse/HDFS-11616
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.3
>Reporter: Rushabh S Shah
>Priority: Major
>
> Due to power failure event, we hit HDFS-5042.
> We lost many racks across the cluster.
> There were couple of missing blocks.
> For a  given missing block, following is the output of fsck.
> {noformat}
> [hdfs@XXX rushabhs]$ hdfs fsck -blockId blk_8566436445
> Connecting to namenode via 
> http://nn1:50070/fsck?ugi=hdfs&blockId=blk_8566436445+&path=%2F
> FSCK started by hdfs (auth:KERBEROS_SSL) from XXX at Mon Apr 03 16:22:48 UTC 
> 2017
> Block Id: blk_8566436445
> Block belongs to: 
> No. of Expected Replica: 3
> No. of live Replica: 0
> No. of excess Replica: 0
> No. of stale Replica: 0
> No. of decommissioned Replica: 0
> No. of decommissioning Replica: 0
> No. of corrupted Replica: 3
> Block replica on datanode/rack: datanodeA is CORRUPT   ReasonCode: 
> INVALID_STATE
> Block replica on datanode/rack: datanodeB is CORRUPT   ReasonCode: 
> INVALID_STATE
> Block replica on datanode/rack: datanodeC is CORRUPT   ReasonCode: 
> INVALID_STATE
> {noformat}
> After the power event, when we restarted the datanode, the blocks were in rbw 
> directory.
> When full block report is sent to namenode, all the blocks from rbw directory 
> gets converted into RWR state and the namenode marked it as corrupt with 
> reason Reason.INVALID_STATE.
> After sometime (in this case after 31 hours) when I went to recover missing 
> blocks, I noticed the following things.
> All the datanodes has their copy of the block in rbw directory but the file 
> was complete according to namenode.
> All the replicas had the right size and correct genstamp and {{hdfs debug 
> verify}} command also succeeded.
> I went to dnA and moved the block from rbw directory to finalized directory.
> Restarted the datanode (making sure the replicas file was not present during 
> startup).
> I forced a FBR and made sure the datanode block reported to namenode.
> After waiting for sometime, still that block was missing.
> I expected the missing block to go away since the replica is in FINALIZED 
> directory.
> On investigating more, I found out that namenode will remove the replica from 
> corrupt map only if the reason for corruption was {{GENSTAMP_MISMATCH}}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14699) Erasure Coding: Can NOT trigger the reconstruction when have the dup internal blocks and missing one internal block

2019-09-02 Thread Zhao Yi Ming (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920846#comment-16920846
 ] 

Zhao Yi Ming edited comment on HDFS-14699 at 9/2/19 1:16 PM:
-

[~ayushtkn] Thanks for your review! On the UT part: the new test case
testChooseSrcDatanodesWithDupEC is the one that exercises my fix. If you do not
apply the patch, that test case is not there at all, so the UT run passes.

Good point on the block index, I agree we do NOT need to recalculate it; I will
try to fix that in the next patch. But we can NOT move
liveBlockIndices.add(blockIndex) above the following block, because then the EC
reconstruction work would no longer be controlled by the
replicationStreamsHardLimit configuration, and that would introduce high DN
resource usage (CPU and memory).
{code:java}
      if (node.getNumberOfBlocksToBeReplicated() >= replicationStreamsHardLimit) {
        continue;
      }
{code}
 


was (Author: zhaoyim):
[~ayushtkn] Thanks for your review! For The UT part, because I added the new 
test case  testChooseSrcDatanodesWithDupEC which is used to test my fix. If you 
do not apply the patch, the new test case is not added, so the UT passed. 

Good point for the block index, I agree we do NOT need to recalculate the block 
index, I will try to fix this in next patch. But we can NOT put the 
liveBlockIndices.add(blockIndex) before following block, the reason is the EC 
reconstruction work will not be controlled by the replicationStreamsHardLimit 
configuration, if we move liveBlockIndices.add(blockIndex) before following 
block.  In this way it will introduce the DN high resource usage (CPU and 
Memory).
{code:java}
// if (node.getNumberOfBlocksToBeReplicated() >= replicationStreamsHardLimit)
{ continue; }
{code}
 

> Erasure Coding: Can NOT trigger the reconstruction when have the dup internal 
> blocks and missing one internal block
> ---
>
> Key: HDFS-14699
> URL: https://issues.apache.org/jira/browse/HDFS-14699
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.2.0, 3.1.1, 3.3.0
>Reporter: Zhao Yi Ming
>Assignee: Zhao Yi Ming
>Priority: Critical
>  Labels: patch
> Attachments: HDFS-14699.00.patch, HDFS-14699.01.patch, 
> HDFS-14699.02.patch, HDFS-14699.03.patch, HDFS-14699.04.patch, 
> image-2019-08-20-19-58-51-872.png, image-2019-09-02-17-51-46-742.png
>
>
> We are tried the EC function on 80 node cluster with hadoop 3.1.1, we hit the 
> same scenario as you said https://issues.apache.org/jira/browse/HDFS-8881. 
> Following are our testing steps, hope it can helpful.(following DNs have the 
> testing internal blocks)
>  # we customized a new 10-2-1024k policy and use it on a path, now we have 12 
> internal block(12 live block)
>  # decommission one DN, after the decommission complete. now we have 13 
> internal block(12 live block and 1 decommission block)
>  # then shutdown one DN which did not have the same block id as 1 
> decommission block, now we have 12 internal block(11 live block and 1 
> decommission block)
>  # after wait for about 600s (before the heart beat come) commission the 
> decommissioned DN again, now we have 12 internal block(11 live block and 1 
> duplicate block)
>  # Then the EC is not reconstruct the missed block
> We think this is a critical issue for using the EC function in a production 
> env. Could you help? Thanks a lot!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14699) Erasure Coding: Can NOT trigger the reconstruction when have the dup internal blocks and missing one internal block

2019-09-02 Thread Zhao Yi Ming (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920846#comment-16920846
 ] 

Zhao Yi Ming edited comment on HDFS-14699 at 9/2/19 1:16 PM:
-

[~ayushtkn] Thanks for your review! For The UT part, because I added the new 
test case  testChooseSrcDatanodesWithDupEC which is used to test my fix. If you 
do not apply the patch, the new test case is not added, so the UT passed. 

Good point for the block index, I agree we do NOT need to recalculate the block 
index, I will try to fix this in next patch. But we can NOT put the 
liveBlockIndices.add(blockIndex) before following block, the reason is the EC 
reconstruction work will not be controlled by the replicationStreamsHardLimit 
configuration, if we move liveBlockIndices.add(blockIndex) before following 
block.  In this way it will introduce the DN high resource usage (CPU and 
Memory).
{code:java}
// if (node.getNumberOfBlocksToBeReplicated() >= replicationStreamsHardLimit)
{ continue; }
{code}
 


was (Author: zhaoyim):
[~ayushtkn] Thanks for your review! For The UT part, because I added the new 
test case  testChooseSrcDatanodesWithDupEC which is used to test my fix. If you 
do not apply the patch, the new test case is not added, so the UT passed. 

Good point for the block index, I agree we do NOT need to recalculate the block 
index, I will try to fix this in next patch. But we can NOT put the 
liveBlockIndices.add(blockIndex) before following block, the reason is the EC 
reconstruction work will not be controlled by the replicationStreamsHardLimit 
configuration, if we move liveBlockIndices.add(blockIndex) before following 
block.  In this way it will introduce the DN high resource usage (CPU and 
Memory).

```
 if (node.getNumberOfBlocksToBeReplicated() >= replicationStreamsHardLimit)

{ continue; }

```

> Erasure Coding: Can NOT trigger the reconstruction when have the dup internal 
> blocks and missing one internal block
> ---
>
> Key: HDFS-14699
> URL: https://issues.apache.org/jira/browse/HDFS-14699
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.2.0, 3.1.1, 3.3.0
>Reporter: Zhao Yi Ming
>Assignee: Zhao Yi Ming
>Priority: Critical
>  Labels: patch
> Attachments: HDFS-14699.00.patch, HDFS-14699.01.patch, 
> HDFS-14699.02.patch, HDFS-14699.03.patch, HDFS-14699.04.patch, 
> image-2019-08-20-19-58-51-872.png, image-2019-09-02-17-51-46-742.png
>
>
> We are tried the EC function on 80 node cluster with hadoop 3.1.1, we hit the 
> same scenario as you said https://issues.apache.org/jira/browse/HDFS-8881. 
> Following are our testing steps, hope it can helpful.(following DNs have the 
> testing internal blocks)
>  # we customized a new 10-2-1024k policy and use it on a path, now we have 12 
> internal block(12 live block)
>  # decommission one DN, after the decommission complete. now we have 13 
> internal block(12 live block and 1 decommission block)
>  # then shutdown one DN which did not have the same block id as 1 
> decommission block, now we have 12 internal block(11 live block and 1 
> decommission block)
>  # after wait for about 600s (before the heart beat come) commission the 
> decommissioned DN again, now we have 12 internal block(11 live block and 1 
> duplicate block)
>  # Then the EC is not reconstruct the missed block
> We think this is a critical issue for using the EC function in a production 
> env. Could you help? Thanks a lot!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14699) Erasure Coding: Can NOT trigger the reconstruction when have the dup internal blocks and missing one internal block

2019-09-02 Thread Zhao Yi Ming (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920846#comment-16920846
 ] 

Zhao Yi Ming edited comment on HDFS-14699 at 9/2/19 1:15 PM:
-

[~ayushtkn] Thanks for your review! For The UT part, because I added the new 
test case  testChooseSrcDatanodesWithDupEC which is used to test my fix. If you 
do not apply the patch, the new test case is not added, so the UT passed. 

Good point for the block index, I agree we do NOT need to recalculate the block 
index, I will try to fix this in next patch. But we can NOT put the 
liveBlockIndices.add(blockIndex) before following block, the reason is the EC 
reconstruction work will not be controlled by the replicationStreamsHardLimit 
configuration, if we move liveBlockIndices.add(blockIndex) before following 
block.  In this way it will introduce the DN high resource usage (CPU and 
Memory).
 if (node.getNumberOfBlocksToBeReplicated() >= replicationStreamsHardLimit) {   
 continue;
  }


was (Author: zhaoyim):
[~ayushtkn] Thanks for your review! For The UT part, because I added the new 
test case  testChooseSrcDatanodesWithDupEC which is used to test my fix. If you 
do not apply the patch, the new test case is not added, so the UT passed. 

Good point for the block index, I agree we do NOT need to recalculate the block 
index, I will try to fix this in next patch. But we can NOT put the 
liveBlockIndices.add(blockIndex) before following block, the reason is the EC 
reconstruction work will not be controlled by the replicationStreamsHardLimit 
configuration, if we move liveBlockIndices.add(blockIndex) before following 
block.  In this way it will introduce the DN high resource usage (CPU and 
Memory).
 if (node.getNumberOfBlocksToBeReplicated() >= replicationStreamsHardLimit) {
 continue;
 }

> Erasure Coding: Can NOT trigger the reconstruction when have the dup internal 
> blocks and missing one internal block
> ---
>
> Key: HDFS-14699
> URL: https://issues.apache.org/jira/browse/HDFS-14699
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.2.0, 3.1.1, 3.3.0
>Reporter: Zhao Yi Ming
>Assignee: Zhao Yi Ming
>Priority: Critical
>  Labels: patch
> Attachments: HDFS-14699.00.patch, HDFS-14699.01.patch, 
> HDFS-14699.02.patch, HDFS-14699.03.patch, HDFS-14699.04.patch, 
> image-2019-08-20-19-58-51-872.png, image-2019-09-02-17-51-46-742.png
>
>
> We are tried the EC function on 80 node cluster with hadoop 3.1.1, we hit the 
> same scenario as you said https://issues.apache.org/jira/browse/HDFS-8881. 
> Following are our testing steps, hope it can helpful.(following DNs have the 
> testing internal blocks)
>  # we customized a new 10-2-1024k policy and use it on a path, now we have 12 
> internal block(12 live block)
>  # decommission one DN, after the decommission complete. now we have 13 
> internal block(12 live block and 1 decommission block)
>  # then shutdown one DN which did not have the same block id as 1 
> decommission block, now we have 12 internal block(11 live block and 1 
> decommission block)
>  # after wait for about 600s (before the heart beat come) commission the 
> decommissioned DN again, now we have 12 internal block(11 live block and 1 
> duplicate block)
>  # Then the EC is not reconstruct the missed block
> We think this is a critical issue for using the EC function in a production 
> env. Could you help? Thanks a lot!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14699) Erasure Coding: Can NOT trigger the reconstruction when have the dup internal blocks and missing one internal block

2019-09-02 Thread Zhao Yi Ming (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920846#comment-16920846
 ] 

Zhao Yi Ming edited comment on HDFS-14699 at 9/2/19 1:15 PM:
-

[~ayushtkn] Thanks for your review! For The UT part, because I added the new 
test case  testChooseSrcDatanodesWithDupEC which is used to test my fix. If you 
do not apply the patch, the new test case is not added, so the UT passed. 

Good point for the block index, I agree we do NOT need to recalculate the block 
index, I will try to fix this in next patch. But we can NOT put the 
liveBlockIndices.add(blockIndex) before following block, the reason is the EC 
reconstruction work will not be controlled by the replicationStreamsHardLimit 
configuration, if we move liveBlockIndices.add(blockIndex) before following 
block.  In this way it will introduce the DN high resource usage (CPU and 
Memory).

```
 if (node.getNumberOfBlocksToBeReplicated() >= replicationStreamsHardLimit)

{ continue; }

```


was (Author: zhaoyim):
[~ayushtkn] Thanks for your review! For The UT part, because I added the new 
test case  testChooseSrcDatanodesWithDupEC which is used to test my fix. If you 
do not apply the patch, the new test case is not added, so the UT passed. 

Good point for the block index, I agree we do NOT need to recalculate the block 
index, I will try to fix this in next patch. But we can NOT put the 
liveBlockIndices.add(blockIndex) before following block, the reason is the EC 
reconstruction work will not be controlled by the replicationStreamsHardLimit 
configuration, if we move liveBlockIndices.add(blockIndex) before following 
block.  In this way it will introduce the DN high resource usage (CPU and 
Memory).
 if (node.getNumberOfBlocksToBeReplicated() >= replicationStreamsHardLimit) {   
 continue;
  }

> Erasure Coding: Can NOT trigger the reconstruction when have the dup internal 
> blocks and missing one internal block
> ---
>
> Key: HDFS-14699
> URL: https://issues.apache.org/jira/browse/HDFS-14699
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.2.0, 3.1.1, 3.3.0
>Reporter: Zhao Yi Ming
>Assignee: Zhao Yi Ming
>Priority: Critical
>  Labels: patch
> Attachments: HDFS-14699.00.patch, HDFS-14699.01.patch, 
> HDFS-14699.02.patch, HDFS-14699.03.patch, HDFS-14699.04.patch, 
> image-2019-08-20-19-58-51-872.png, image-2019-09-02-17-51-46-742.png
>
>
> We are tried the EC function on 80 node cluster with hadoop 3.1.1, we hit the 
> same scenario as you said https://issues.apache.org/jira/browse/HDFS-8881. 
> Following are our testing steps, hope it can helpful.(following DNs have the 
> testing internal blocks)
>  # we customized a new 10-2-1024k policy and use it on a path, now we have 12 
> internal block(12 live block)
>  # decommission one DN, after the decommission complete. now we have 13 
> internal block(12 live block and 1 decommission block)
>  # then shutdown one DN which did not have the same block id as 1 
> decommission block, now we have 12 internal block(11 live block and 1 
> decommission block)
>  # after wait for about 600s (before the heart beat come) commission the 
> decommissioned DN again, now we have 12 internal block(11 live block and 1 
> duplicate block)
>  # Then the EC is not reconstruct the missed block
> We think this is a critical issue for using the EC function in a production 
> env. Could you help? Thanks a lot!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14699) Erasure Coding: Can NOT trigger the reconstruction when have the dup internal blocks and missing one internal block

2019-09-02 Thread Zhao Yi Ming (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920846#comment-16920846
 ] 

Zhao Yi Ming edited comment on HDFS-14699 at 9/2/19 1:13 PM:
-

[~ayushtkn] Thanks for your review! For The UT part, because I added the new 
test case  testChooseSrcDatanodesWithDupEC which is used to test my fix. If you 
do not apply the patch, the new test case is not added, so the UT passed. 

Good point for the block index, I agree we do NOT need to recalculate the block 
index, I will try to fix this in next patch. But we can NOT put the 
liveBlockIndices.add(blockIndex) before following block, the reason is the EC 
reconstruction work will not be controlled by the replicationStreamsHardLimit 
configuration, if we move liveBlockIndices.add(blockIndex) before following 
block.  In this way it will introduce the DN high resource usage (CPU and 
Memory).
 if (node.getNumberOfBlocksToBeReplicated() >= replicationStreamsHardLimit) {
 continue;
 }


was (Author: zhaoyim):
[~ayushtkn] For The UT part, because I added the new test case  
testChooseSrcDatanodesWithDupEC which is used to test my fix. If you do not 
apply the patch, the new test case is not added, so the UT passed. 

Good point for the block index, I agree we do NOT need to recalculate the block 
index, I will try to fix this in next patch. But we can NOT put the 
liveBlockIndices.add(blockIndex) before following block, the reason is the EC 
reconstruction work will not be controlled by the replicationStreamsHardLimit 
configuration, if we move liveBlockIndices.add(blockIndex) before following 
block.  In this way it will introduce the DN high resource usage (CPU and 
Memory).
  if (node.getNumberOfBlocksToBeReplicated() >= 
replicationStreamsHardLimit) \{
continue;
  }

> Erasure Coding: Can NOT trigger the reconstruction when have the dup internal 
> blocks and missing one internal block
> ---
>
> Key: HDFS-14699
> URL: https://issues.apache.org/jira/browse/HDFS-14699
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.2.0, 3.1.1, 3.3.0
>Reporter: Zhao Yi Ming
>Assignee: Zhao Yi Ming
>Priority: Critical
>  Labels: patch
> Attachments: HDFS-14699.00.patch, HDFS-14699.01.patch, 
> HDFS-14699.02.patch, HDFS-14699.03.patch, HDFS-14699.04.patch, 
> image-2019-08-20-19-58-51-872.png, image-2019-09-02-17-51-46-742.png
>
>
> We are tried the EC function on 80 node cluster with hadoop 3.1.1, we hit the 
> same scenario as you said https://issues.apache.org/jira/browse/HDFS-8881. 
> Following are our testing steps, hope it can helpful.(following DNs have the 
> testing internal blocks)
>  # we customized a new 10-2-1024k policy and use it on a path, now we have 12 
> internal block(12 live block)
>  # decommission one DN, after the decommission complete. now we have 13 
> internal block(12 live block and 1 decommission block)
>  # then shutdown one DN which did not have the same block id as 1 
> decommission block, now we have 12 internal block(11 live block and 1 
> decommission block)
>  # after wait for about 600s (before the heart beat come) commission the 
> decommissioned DN again, now we have 12 internal block(11 live block and 1 
> duplicate block)
>  # Then the EC is not reconstruct the missed block
> We think this is a critical issue for using the EC function in a production 
> env. Could you help? Thanks a lot!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14699) Erasure Coding: Can NOT trigger the reconstruction when have the dup internal blocks and missing one internal block

2019-09-02 Thread Zhao Yi Ming (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920846#comment-16920846
 ] 

Zhao Yi Ming commented on HDFS-14699:
-

[~ayushtkn] For The UT part, because I added the new test case  
testChooseSrcDatanodesWithDupEC which is used to test my fix. If you do not 
apply the patch, the new test case is not added, so the UT passed. 

Good point for the block index, I agree we do NOT need to recalculate the block 
index, I will try to fix this in next patch. But we can NOT put the 
liveBlockIndices.add(blockIndex) before following block, the reason is the EC 
reconstruction work will not be controlled by the replicationStreamsHardLimit 
configuration, if we move liveBlockIndices.add(blockIndex) before following 
block.  In this way it will introduce the DN high resource usage (CPU and 
Memory).
  if (node.getNumberOfBlocksToBeReplicated() >= replicationStreamsHardLimit) {
    continue;
  }

> Erasure Coding: Can NOT trigger the reconstruction when have the dup internal 
> blocks and missing one internal block
> ---
>
> Key: HDFS-14699
> URL: https://issues.apache.org/jira/browse/HDFS-14699
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.2.0, 3.1.1, 3.3.0
>Reporter: Zhao Yi Ming
>Assignee: Zhao Yi Ming
>Priority: Critical
>  Labels: patch
> Attachments: HDFS-14699.00.patch, HDFS-14699.01.patch, 
> HDFS-14699.02.patch, HDFS-14699.03.patch, HDFS-14699.04.patch, 
> image-2019-08-20-19-58-51-872.png, image-2019-09-02-17-51-46-742.png
>
>
> We are tried the EC function on 80 node cluster with hadoop 3.1.1, we hit the 
> same scenario as you said https://issues.apache.org/jira/browse/HDFS-8881. 
> Following are our testing steps, hope it can helpful.(following DNs have the 
> testing internal blocks)
>  # we customized a new 10-2-1024k policy and use it on a path, now we have 12 
> internal block(12 live block)
>  # decommission one DN, after the decommission complete. now we have 13 
> internal block(12 live block and 1 decommission block)
>  # then shutdown one DN which did not have the same block id as 1 
> decommission block, now we have 12 internal block(11 live block and 1 
> decommission block)
>  # after wait for about 600s (before the heart beat come) commission the 
> decommissioned DN again, now we have 12 internal block(11 live block and 1 
> duplicate block)
>  # Then the EC is not reconstruct the missed block
> We think this is a critical issue for using the EC function in a production 
> env. Could you help? Thanks a lot!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14778) BlockManager findAndMarkBlockAsCorrupt adds block to the map if the Storage state is failed

2019-09-02 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920842#comment-16920842
 ] 

hemanthboyina commented on HDFS-14778:
--

Attached the patch, please review [~jojochuang] [~surendrasingh]

> BlockManager findAndMarkBlockAsCorrupt adds block to the map if the Storage 
> state is failed
> ---
>
> Key: HDFS-14778
> URL: https://issues.apache.org/jira/browse/HDFS-14778
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14778.001.patch
>
>
> Should not mark the block as corrupt if the storage state is failed
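
For readers without the patch handy, a minimal sketch of the kind of guard the
summary suggests, assuming it sits in the corrupt-marking path after the
reporting node's DatanodeStorageInfo has been resolved; this is a sketch, not
the attached patch:
{code:java}
// Hedged sketch, not the attached patch: skip corrupt-marking when the
// reporting storage is already in FAILED state, as the summary proposes.
if (storage != null
    && storage.getState() == DatanodeStorage.State.FAILED) {
  blockLog.debug("BLOCK* findAndMarkBlockAsCorrupt: storage {} on {} is "
      + "FAILED, ignoring corrupt report for {}", storage, dn, blk);
  return;
}
{code}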



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14762) "Path(Path/String parent, String child)" will fail when "child" contains ":"

2019-09-02 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-14762:
-
Attachment: HDFS-14762.003.patch

> "Path(Path/String parent, String child)" will fail when "child" contains ":"
> 
>
> Key: HDFS-14762
> URL: https://issues.apache.org/jira/browse/HDFS-14762
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Shixiong Zhu
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14762.001.patch, HDFS-14762.002.patch, 
> HDFS-14762.003.patch
>
>
> When the "child" parameter contains ":", "Path(Path/String parent, String 
> child)" will throw the following exception:
> {code}
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
> path in absolute URI: ...
> {code}
> Not sure if this is a legit bug. But the following places will hit this error 
> when seeing a Path with a file name containing ":":
> https://github.com/apache/hadoop/blob/f9029c4070e8eb046b403f5cb6d0a132c5d58448/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java#L101
> https://github.com/apache/hadoop/blob/f9029c4070e8eb046b403f5cb6d0a132c5d58448/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Globber.java#L270
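
For context, a tiny standalone repro of the failure described above; the file
name is arbitrary and the class is purely illustrative:
{code:java}
// Minimal repro sketch: a child name containing ':' is parsed as a URI scheme
// by the Path(String) constructor that Path(Path, String) delegates to,
// producing the "Relative path in absolute URI" error quoted above.
import org.apache.hadoop.fs.Path;

public class PathColonRepro {
  public static void main(String[] args) {
    Path parent = new Path("/tmp/data");
    // Throws java.lang.IllegalArgumentException wrapping
    // java.net.URISyntaxException: Relative path in absolute URI: report:2019.csv
    Path child = new Path(parent, "report:2019.csv");
    System.out.println(child);
  }
}
{code}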



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14810) review FSNameSystem editlog sync

2019-09-02 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920829#comment-16920829
 ] 

Ayush Saxena commented on HDFS-14810:
-

Thanx [~hexiaoqiao]. There are a couple of failures, and those too in EC tests,
but they seem unrelated to this change. I have retriggered the build anyway.
Otherwise LGTM.

> review FSNameSystem editlog sync
> 
>
> Key: HDFS-14810
> URL: https://issues.apache.org/jira/browse/HDFS-14810
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-14810.001.patch, HDFS-14810.002.patch, 
> HDFS-14810.003.patch
>
>
> refactor and unified type of edit log sync in FSNamesystem as HDFS-11246 
> mentioned.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14699) Erasure Coding: Can NOT trigger the reconstruction when have the dup internal blocks and missing one internal block

2019-09-02 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920827#comment-16920827
 ] 

Ayush Saxena commented on HDFS-14699:
-

[~zhaoyim] I am talking about the code. Let me try to be more clear.
* The unit test you wrote is meant to check the scenario you are reporting and
fixing. So ideally this UT should fail without your fix and pass with it.
However, the UT you wrote passes even if I remove your fix, which means it does
not actually verify the scenario: remove your fix, keep just the UT, run it, and
it passes, whereas ideally it should fail.
* The if part I am talking about is :

{code:java}
  if(isStriped || srcNodes.isEmpty()) {
srcNodes.add(node);
if (isStriped) {
  byte blockIndex = ((BlockInfoStriped) block).
  getStorageBlockIndex(storage);
  liveBlockIndices.add(blockIndex);
  if (!bitSet.get(blockIndex)) {
bitSet.set(blockIndex);
  } else if (state == StoredReplicaState.LIVE) {
numReplicas.subtract(StoredReplicaState.LIVE, 1);
numReplicas.add(StoredReplicaState.REDUNDANT, 1);
  }
}
continue;
  }
{code}

You pulled up only a part of it, leaving behind
{{liveBlockIndices.add(blockIndex);}}, for which we then have to recalculate the
block index. Can we not pull up the whole if block, including that line as well,
above:

{code:java}
  if (node.getNumberOfBlocksToBeReplicated() >= 
replicationStreamsHardLimit) {
continue;
  }
{code}

Or have you left it below for some specific reason? If not, we can move the
whole block above; a concrete sketch of that ordering follows.
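
For concreteness, this is the shape being asked about, i.e. the whole
striped-handling block hoisted above the hard-limit check. Illustration only,
stitched together from the two snippets above; whether this weakens
replicationStreamsHardLimit enforcement is exactly the open question:

{code:java}
// Illustration of the ordering asked about above, not a reviewed change.
if (isStriped || srcNodes.isEmpty()) {
  srcNodes.add(node);
  if (isStriped) {
    byte blockIndex = ((BlockInfoStriped) block)
        .getStorageBlockIndex(storage);
    liveBlockIndices.add(blockIndex);
    if (!bitSet.get(blockIndex)) {
      bitSet.set(blockIndex);
    } else if (state == StoredReplicaState.LIVE) {
      numReplicas.subtract(StoredReplicaState.LIVE, 1);
      numReplicas.add(StoredReplicaState.REDUNDANT, 1);
    }
  }
  continue;
}

if (node.getNumberOfBlocksToBeReplicated() >= replicationStreamsHardLimit) {
  continue;
}
{code}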


> Erasure Coding: Can NOT trigger the reconstruction when have the dup internal 
> blocks and missing one internal block
> ---
>
> Key: HDFS-14699
> URL: https://issues.apache.org/jira/browse/HDFS-14699
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.2.0, 3.1.1, 3.3.0
>Reporter: Zhao Yi Ming
>Assignee: Zhao Yi Ming
>Priority: Critical
>  Labels: patch
> Attachments: HDFS-14699.00.patch, HDFS-14699.01.patch, 
> HDFS-14699.02.patch, HDFS-14699.03.patch, HDFS-14699.04.patch, 
> image-2019-08-20-19-58-51-872.png, image-2019-09-02-17-51-46-742.png
>
>
> We are tried the EC function on 80 node cluster with hadoop 3.1.1, we hit the 
> same scenario as you said https://issues.apache.org/jira/browse/HDFS-8881. 
> Following are our testing steps, hope it can helpful.(following DNs have the 
> testing internal blocks)
>  # we customized a new 10-2-1024k policy and use it on a path, now we have 12 
> internal block(12 live block)
>  # decommission one DN, after the decommission complete. now we have 13 
> internal block(12 live block and 1 decommission block)
>  # then shutdown one DN which did not have the same block id as 1 
> decommission block, now we have 12 internal block(11 live block and 1 
> decommission block)
>  # after wait for about 600s (before the heart beat come) commission the 
> decommissioned DN again, now we have 12 internal block(11 live block and 1 
> duplicate block)
>  # Then the EC is not reconstruct the missed block
> We think this is a critical issue for using the EC function in a production 
> env. Could you help? Thanks a lot!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1569) Add ability to SCM for creating multiple pipelines with same datanode

2019-09-02 Thread Li Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920809#comment-16920809
 ] 

Li Cheng edited comment on HDDS-1569 at 9/2/19 12:02 PM:
-

[~swagle]

One more question, what do you mean by "internal datastructures"? What data 
structures do you think datanodes (assuming DatanodeDetails) should be part of?


was (Author: timmylicheng):
[~swagle]

> Add ability to SCM for creating multiple pipelines with same datanode
> -
>
> Key: HDDS-1569
> URL: https://issues.apache.org/jira/browse/HDDS-1569
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Siddharth Wagle
>Assignee: Li Cheng
>Priority: Major
>
> - Refactor _RatisPipelineProvider.create()_ to be able to create pipelines 
> with datanodes that are not a part of sufficient pipelines
> - Define soft and hard upper bounds for pipeline membership
> - Create SCMAllocationManager that can be leveraged to get a candidate set of 
> datanodes based on placement policies
> - Add the datanodes to internal datastructures



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1569) Add ability to SCM for creating multiple pipelines with same datanode

2019-09-02 Thread Li Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920809#comment-16920809
 ] 

Li Cheng commented on HDDS-1569:


[~swagle]

> Add ability to SCM for creating multiple pipelines with same datanode
> -
>
> Key: HDDS-1569
> URL: https://issues.apache.org/jira/browse/HDDS-1569
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Siddharth Wagle
>Assignee: Li Cheng
>Priority: Major
>
> - Refactor _RatisPipelineProvider.create()_ to be able to create pipelines 
> with datanodes that are not a part of sufficient pipelines
> - Define soft and hard upper bounds for pipeline membership
> - Create SCMAllocationManager that can be leveraged to get a candidate set of 
> datanodes based on placement policies
> - Add the datanodes to internal datastructures



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2057) Incorrect Default OM Port in Ozone FS URI Error Message

2019-09-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2057?focusedWorklogId=305143&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305143
 ]

ASF GitHub Bot logged work on HDDS-2057:


Author: ASF GitHub Bot
Created on: 02/Sep/19 11:34
Start Date: 02/Sep/19 11:34
Worklog Time Spent: 10m 
  Work Description: supratimdeka commented on pull request #1377: 
HDDS-2057. Incorrect Default OM Port in Ozone FS URI Error Message. Contributed 
by Supratim Deka
URL: https://github.com/apache/hadoop/pull/1377#discussion_r319920566
 
 

 ##
 File path: 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java
 ##
 @@ -87,11 +87,15 @@
   private static final Pattern URL_SCHEMA_PATTERN =
   Pattern.compile("([^\\.]+)\\.([^\\.]+)\\.{0,1}(.*)");
 
-  private static final String URI_EXCEPTION_TEXT = "Ozone file system URL " +
-  "should be one of the following formats: " +
-  "o3fs://bucket.volume/key  OR " +
-  "o3fs://bucket.volume.om-host.example.com/key  OR " +
-  "o3fs://bucket.volume.om-host.example.com:5678/key";
+  private String getUriExceptionText(Configuration conf) {
+final String URI_EXCEPTION_TEXT = "Ozone file system URL " +
+"should be one of the following formats: " +
+"o3fs://bucket.volume/key  OR " +
+"o3fs://bucket.volume.om-host.example.com/key  OR " +
+"o3fs://bucket.volume.om-host.example.com:" +
+OmUtils.getOmRpcPort(conf) + "/key";
 
 Review comment:
   will update the patch. thanks for the review!
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305143)
Time Spent: 0.5h  (was: 20m)

> Incorrect Default OM Port in Ozone FS URI Error Message
> ---
>
> Key: HDDS-2057
> URL: https://issues.apache.org/jira/browse/HDDS-2057
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The error message displayed from BasicOzoneFilesystem.initialize specifies 
> 5678 as the OM port. This is not the default port.
> "Ozone file system URL " +
>  "should be one of the following formats: " +
>  "o3fs://bucket.volume/key OR " +
>  "o3fs://bucket.volume.om-host.example.com/key OR " +
>  "o3fs://bucket.volume.om-host.example.com:5678/key";
>  
> This should be fixed to pull the default value from the configuration 
> parameter, instead of a hard-coded value.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2057) Incorrect Default OM Port in Ozone FS URI Error Message

2019-09-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2057?focusedWorklogId=305142&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-305142
 ]

ASF GitHub Bot logged work on HDDS-2057:


Author: ASF GitHub Bot
Created on: 02/Sep/19 11:34
Start Date: 02/Sep/19 11:34
Worklog Time Spent: 10m 
  Work Description: supratimdeka commented on pull request #1377: 
HDDS-2057. Incorrect Default OM Port in Ozone FS URI Error Message. Contributed 
by Supratim Deka
URL: https://github.com/apache/hadoop/pull/1377#discussion_r319920566
 
 

 ##
 File path: 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java
 ##
 @@ -87,11 +87,15 @@
   private static final Pattern URL_SCHEMA_PATTERN =
   Pattern.compile("([^\\.]+)\\.([^\\.]+)\\.{0,1}(.*)");
 
-  private static final String URI_EXCEPTION_TEXT = "Ozone file system URL " +
-  "should be one of the following formats: " +
-  "o3fs://bucket.volume/key  OR " +
-  "o3fs://bucket.volume.om-host.example.com/key  OR " +
-  "o3fs://bucket.volume.om-host.example.com:5678/key";
+  private String getUriExceptionText(Configuration conf) {
+final String URI_EXCEPTION_TEXT = "Ozone file system URL " +
+"should be one of the following formats: " +
+"o3fs://bucket.volume/key  OR " +
+"o3fs://bucket.volume.om-host.example.com/key  OR " +
+"o3fs://bucket.volume.om-host.example.com:" +
+OmUtils.getOmRpcPort(conf) + "/key";
 
 Review comment:
   will update the patch
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 305142)
Time Spent: 20m  (was: 10m)

> Incorrect Default OM Port in Ozone FS URI Error Message
> ---
>
> Key: HDDS-2057
> URL: https://issues.apache.org/jira/browse/HDDS-2057
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The error message displayed from BasicOzoneFilesystem.initialize specifies 
> 5678 as the OM port. This is not the default port.
> "Ozone file system URL " +
>  "should be one of the following formats: " +
>  "o3fs://bucket.volume/key OR " +
>  "o3fs://bucket.volume.om-host.example.com/key OR " +
>  "o3fs://bucket.volume.om-host.example.com:5678/key";
>  
> This should be fixed to pull the default value from the configuration 
> parameter, instead of a hard-coded value.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1569) Add ability to SCM for creating multiple pipelines with same datanode

2019-09-02 Thread Li Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920787#comment-16920787
 ] 

Li Cheng commented on HDDS-1569:


[~xyao] [~Sammi] [~swagle]

In terms of the 'SCMAllocationManager', do we need to support concurrent
pipeline placement? Or do we just accept concurrent requests and use a blocking
queue to do the allocation sequentially?

One major cost of concurrent allocation is that per-datanode info (like the
datanode <-> pipeline mapping) must be protected by locks. That adds complexity
to failure handling in pipeline placement, since the count has to be incremented
before the placement ultimately succeeds, and the fallback path would need to
take the lock again. There are certainly ways around it (for example,
incrementing the count on registration and updating the mapping only on
placement success, so that only the count increment needs the lock). I'm just
questioning whether it's necessary, because control-plane actions over larger
resources like this can usually be sequential as long as each one is reasonably
quick. A sketch of the sequential option is below.
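
A minimal sketch of that second option, i.e. concurrent callers but strictly
sequential placement decisions through a single worker inside the proposed
SCMAllocationManager. All names below are placeholders for a class that does not
exist yet; only the queueing pattern is the point:

{code:java}
// Hedged sketch of "concurrent requests, sequential allocation".
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Supplier;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;

public class SCMAllocationManager {
  // One worker thread means the datanode <-> pipeline counters are only ever
  // mutated by a single thread, so no per-datanode locking is required.
  private final ExecutorService allocator = Executors.newSingleThreadExecutor();

  // Callers submit a placement computation (the policy call is deliberately
  // abstracted as a Supplier here) and get a future back immediately.
  public CompletableFuture<List<DatanodeDetails>> allocate(
      Supplier<List<DatanodeDetails>> placement) {
    return CompletableFuture.supplyAsync(placement, allocator);
  }
}
{code}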

> Add ability to SCM for creating multiple pipelines with same datanode
> -
>
> Key: HDDS-1569
> URL: https://issues.apache.org/jira/browse/HDDS-1569
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Siddharth Wagle
>Assignee: Li Cheng
>Priority: Major
>
> - Refactor _RatisPipelineProvider.create()_ to be able to create pipelines 
> with datanodes that are not a part of sufficient pipelines
> - Define soft and hard upper bounds for pipeline membership
> - Create SCMAllocationManager that can be leveraged to get a candidate set of 
> datanodes based on placement policies
> - Add the datanodes to internal datastructures



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13157) Do Not Remove Blocks Sequentially During Decommission

2019-09-02 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920783#comment-16920783
 ] 

Stephen O'Donnell edited comment on HDFS-13157 at 9/2/19 11:21 AM:
---

I tested my theory that this problem can also result in only 1 node making 
decommission progress when several are decommissioned at the same time. Using a 
simulated cluster with two carefully picked nodes, such that node 1 and node 2 
do not host any of the same blocks, I can see the first to start decommission 
makes progress while the other does not make any progress. In a real cluster, 
there is likely to be some overlap in the blocks between the two nodes, so both 
will make some progress, but this is only because the replication monitor 
notices 2 new replicas are needed for the block and schedules two new copies at 
the same time.

Therefore this problem is worse than just concentrating decommission on a single
disk: it also means decommission does not really work on more than 1 node at a time.

I am also concerned about the time the NN lock is held when processing a node 
for decommission. In tests on my laptop, it takes about 300ms for a node with 
340K blocks and 660ms for 1M blocks. Scaling up to 5M blocks, this could hold 
the lock for about 3 seconds per node. There is a delay between each node, but 
it is still not ideal to block the NN for that long.

Randomizing the iterator in the way suggested here would prevent us from making 
a later change to drop and retake the NN lock per storage on the DN to improve 
the locking time.

This makes me think the solution to this problem is not to randomize the blocks 
from one node onto the replication queue, but instead to randomize the order 
the replication queue is processed somehow.


was (Author: sodonnell):
I tested my theory that this problem can also result in only 1 node making 
decommission progress when several are decommissioned at the same time. Using a 
simulated cluster with two carefully picked nodes, such that node 1 and node 2 
do not host any of the same blocks, I can see the first to start decommission 
makes progress while the other does not make any progress. In a real cluster, 
there is likely to be some overlap in the blocks between the two clusters, so 
both will make some progress, but this is only because the replication monitor 
notices 2 new replicas are needed for the block and schedules two new copies at 
the same time.

Therefore this problem is worse than concentrating decommission on a single 
disk, but it also does not really work on more than 1 node at a time.

I am also concerned about the time the NN lock is held when processing a node 
for decommission. In tests on my laptop, it takes about 300ms for a node with 
340K blocks and 660ms for 1M blocks. Scaling up to a 5M blocks this could hold 
the lock for about 3 seconds per node. There is a delay between each node, but 
it is still not ideal to block the NN for that long.

Randomizing the iterator in the way suggested here would prevent us from making 
a later change to drop and retake the NN lock per storage on the DN to improve 
the locking time.

This makes me think the solution to this problem is not to randomize the blocks 
from one node onto the replication queue, but instead to randomize the order 
the replication queue is processed somehow.

> Do Not Remove Blocks Sequentially During Decommission 
> --
>
> Key: HDFS-13157
> URL: https://issues.apache.org/jira/browse/HDFS-13157
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, namenode
>Affects Versions: 3.0.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HDFS-13157.1.patch
>
>
> From what I understand of [DataNode 
> decommissioning|https://github.com/apache/hadoop/blob/42a1c98597e6dba2e371510a6b2b6b1fb94e4090/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminManager.java]
>  it appears that all the blocks are scheduled for removal _in order_. I'm 
> not 100% sure what the ordering is exactly, but I think it loops through each 
> data volume and schedules each block to be replicated elsewhere. The net 
> effect is that during a decommission, all of the DataNode transfer threads 
> slam on a single volume until it is cleaned out. At which point, they all 
> slam on the next volume, etc.
> Please randomize the block list so that there is a more even distribution 
> across all volumes when decommissioning a node.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs

[jira] [Commented] (HDFS-13157) Do Not Remove Blocks Sequentially During Decommission

2019-09-02 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920783#comment-16920783
 ] 

Stephen O'Donnell commented on HDFS-13157:
--

I tested my theory that this problem can also result in only 1 node making 
decommission progress when several are decommissioned at the same time. Using a 
simulated cluster with two carefully picked nodes, such that node 1 and node 2 
do not host any of the same blocks, I can see the first to start decommission 
makes progress while the other does not make any progress. In a real cluster, 
there is likely to be some overlap in the blocks between the two nodes, so 
both will make some progress, but this is only because the replication monitor 
notices 2 new replicas are needed for the block and schedules two new copies at 
the same time.

Therefore this problem is worse than just concentrating decommission on a single 
disk: it also means decommission does not really work on more than 1 node at a time.

I am also concerned about the time the NN lock is held when processing a node 
for decommission. In tests on my laptop, it takes about 300ms for a node with 
340K blocks and 660ms for 1M blocks. Scaling up to 5M blocks, this could hold 
the lock for about 3 seconds per node. There is a delay between each node, but 
it is still not ideal to block the NN for that long.

Randomizing the iterator in the way suggested here would prevent us from making 
a later change to drop and retake the NN lock per storage on the DN to improve 
the locking time.

This makes me think the solution to this problem is not to randomize the blocks 
from one node onto the replication queue, but instead to randomize the order 
the replication queue is processed somehow.

> Do Not Remove Blocks Sequentially During Decommission 
> --
>
> Key: HDFS-13157
> URL: https://issues.apache.org/jira/browse/HDFS-13157
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, namenode
>Affects Versions: 3.0.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HDFS-13157.1.patch
>
>
> From what I understand of [DataNode 
> decommissioning|https://github.com/apache/hadoop/blob/42a1c98597e6dba2e371510a6b2b6b1fb94e4090/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminManager.java]
>  it appears that all the blocks are scheduled for removal _in order_. I'm 
> not 100% sure what the ordering is exactly, but I think it loops through each 
> data volume and schedules each block to be replicated elsewhere. The net 
> effect is that during a decommission, all of the DataNode transfer threads 
> slam on a single volume until it is cleaned out. At which point, they all 
> slam on the next volume, etc.
> Please randomize the block list so that there is a more even distribution 
> across all volumes when decommissioning a node.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14810) review FSNameSystem editlog sync

2019-09-02 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920758#comment-16920758
 ] 

Hadoop QA commented on HDFS-14810:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
48s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}116m 54s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}184m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestFileChecksumCompositeCrc |
|   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
|   | hadoop.hdfs.TestErasureCodingExerciseAPIs |
|   | hadoop.hdfs.TestErasureCodingPolicyWithSnapshotWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14810 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12979092/HDFS-14810.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3e2100bc3d43 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f4d6e82 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27762/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Bui

[jira] [Commented] (HDDS-1569) Add ability to SCM for creating multiple pipelines with same datanode

2019-09-02 Thread Li Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920753#comment-16920753
 ] 

Li Cheng commented on HDDS-1569:


-- 'Define soft and hard upper bounds for pipeline membership'

Assuming this refers to how many pipelines each datanode can be engaged in, it 
is defined as node heaviness in #HDDS-1577 as part of the pipeline placement 
policy, and the current default hard upper limit is 5. That said, the maximum 
number of pipelines a datanode can be placed in may vary after sufficient 
field testing.
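
As a minimal sketch of the hard upper bound check (the class, method, and map 
names below are assumptions, not the actual SCM code; only the default of 5 
comes from the comment above):

{code}
// Hedged illustration: reject a datanode for a new pipeline once it has reached
// the per-node pipeline limit.
import java.util.Map;

class PipelineLimitSketch {
  static final int DEFAULT_HARD_PIPELINE_LIMIT = 5; // current default upper bound

  static boolean canJoinPipeline(String datanodeUuid, Map<String, Integer> pipelineCounts) {
    int current = pipelineCounts.getOrDefault(datanodeUuid, 0);
    return current < DEFAULT_HARD_PIPELINE_LIMIT;
  }
}
{code}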

> Add ability to SCM for creating multiple pipelines with same datanode
> -
>
> Key: HDDS-1569
> URL: https://issues.apache.org/jira/browse/HDDS-1569
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Siddharth Wagle
>Assignee: Li Cheng
>Priority: Major
>
> - Refactor _RatisPipelineProvider.create()_ to be able to create pipelines 
> with datanodes that are not a part of sufficient pipelines
> - Define soft and hard upper bounds for pipeline membership
> - Create SCMAllocationManager that can be leveraged to get a candidate set of 
> datanodes based on placement policies
> - Add the datanodes to internal datastructures



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1577) Add default pipeline placement policy implementation

2019-09-02 Thread Li Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920750#comment-16920750
 ] 

Li Cheng commented on HDDS-1577:


#PR/1366 considers topology and node heaviness (engagement in pipelines) when 
selecting datanodes for pipeline placement. More factors may be added to the 
selection later; for now, these two major factors are considered sufficient.
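
A hedged sketch of how the two factors might be combined when ranking candidate 
datanodes (illustrative only, not the code in the PR; all names below are 
hypothetical):

{code}
// Prefer lightly loaded nodes first (fewer pipelines), then topologically
// closer ones; both inputs are simplified stand-ins for the real metrics.
import java.util.Comparator;
import java.util.List;

class PlacementRankingSketch {
  static final class Candidate {
    final String uuid;
    final int networkDistance;   // smaller = topologically closer to the anchor node
    final int pipelineCount;     // current pipeline engagement ("heaviness")
    Candidate(String uuid, int networkDistance, int pipelineCount) {
      this.uuid = uuid;
      this.networkDistance = networkDistance;
      this.pipelineCount = pipelineCount;
    }
  }

  static void rank(List<Candidate> candidates) {
    candidates.sort(Comparator
        .comparingInt((Candidate c) -> c.pipelineCount)
        .thenComparingInt(c -> c.networkDistance));
  }
}
{code}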

> Add default pipeline placement policy implementation
> 
>
> Key: HDDS-1577
> URL: https://issues.apache.org/jira/browse/HDDS-1577
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Siddharth Wagle
>Assignee: Li Cheng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> This is a simpler implementation of the PipelinePlacementPolicy that can be 
> utilized if no network topology is defined for the cluster. We try to form 
> pipelines from existing HEALTHY datanodes randomly, as long as they satisfy 
> PipelinePlacementCriteria.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14699) Erasure Coding: Can NOT trigger the reconstruction when have the dup internal blocks and missing one internal block

2019-09-02 Thread Zhao Yi Ming (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao Yi Ming updated HDFS-14699:

Attachment: (was: image-2019-09-02-17-49-24-286.png)

> Erasure Coding: Can NOT trigger the reconstruction when have the dup internal 
> blocks and missing one internal block
> ---
>
> Key: HDFS-14699
> URL: https://issues.apache.org/jira/browse/HDFS-14699
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.2.0, 3.1.1, 3.3.0
>Reporter: Zhao Yi Ming
>Assignee: Zhao Yi Ming
>Priority: Critical
>  Labels: patch
> Attachments: HDFS-14699.00.patch, HDFS-14699.01.patch, 
> HDFS-14699.02.patch, HDFS-14699.03.patch, HDFS-14699.04.patch, 
> image-2019-08-20-19-58-51-872.png, image-2019-09-02-17-51-46-742.png
>
>
> We tried the EC function on an 80-node cluster with Hadoop 3.1.1 and hit the 
> same scenario as described in https://issues.apache.org/jira/browse/HDFS-8881. 
> Following are our testing steps, hope they are helpful (the following DNs hold 
> the internal blocks under test):
>  # we customized a new 10-2-1024k policy and used it on a path; now we have 12 
> internal blocks (12 live blocks)
>  # decommission one DN; after the decommission completes, we have 13 internal 
> blocks (12 live blocks and 1 decommissioned block)
>  # then shut down one DN that does not hold the same block id as the 
> decommissioned block; now we have 12 internal blocks (11 live blocks and 1 
> decommissioned block)
>  # after waiting about 600s (before the heartbeat comes), recommission the 
> decommissioned DN; now we have 12 internal blocks (11 live blocks and 1 
> duplicate block)
>  # Then EC does not reconstruct the missing block
> We think this is a critical issue for using the EC function in a production 
> environment. Could you help? Thanks a lot!
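
Why step 5 should still trigger reconstruction: with a 10-2 policy there are 12 
required internal block indices, and 11 distinct live indices plus 1 duplicate 
leaves one index missing even though the raw replica count is 12. A hedged 
sketch of that distinction (illustrative only, not the HDFS code; the names are 
hypothetical):

{code}
// Count *distinct* internal block indices rather than raw replicas; duplicate
// replicas collapse to a single bit, so a missing index is still detected.
import java.util.BitSet;

class EcRedundancySketch {
  static boolean needsReconstruction(int[] liveReplicaIndices, int totalInternalBlocks) {
    BitSet seen = new BitSet(totalInternalBlocks);
    for (int idx : liveReplicaIndices) {
      seen.set(idx);
    }
    return seen.cardinality() < totalInternalBlocks;
  }
}
{code}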



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



  1   2   >