[jira] [Commented] (HDFS-14993) checkDiskError doesn't work during datanode startup

2020-01-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17025336#comment-17025336
 ] 

Hudson commented on HDFS-14993:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17907 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17907/])
HDFS-14993. checkDiskError doesn't work during datanode startup. (ayushsaxena: 
rev 87c198468bb6a6312bbb27b174c18822b6b9ccf8)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java


> checkDiskError doesn't work during datanode startup
> ---
>
> Key: HDFS-14993
> URL: https://issues.apache.org/jira/browse/HDFS-14993
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14993.patch, HDFS-14993.patch, HDFS-14993.patch
>
>
> The function checkDiskError() is called before addBlockPool(), but the 
> bpSlices list is still empty at that point, so check() in FsVolumeImpl.java 
> does nothing:
> {code}
> @Override
> public VolumeCheckResult check(VolumeCheckContext ignored)
>     throws DiskErrorException {
>   // TODO:FEDERATION valid synchronization
>   for (BlockPoolSlice s : bpSlices.values()) {
>     s.checkDirs();
>   }
>   return VolumeCheckResult.HEALTHY;
> }
> {code}
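
For context, a minimal sketch of the reordering discussed in this thread; the 
surrounding method in DataNode.java is paraphrased, not the exact committed 
diff. The point is that addBlockPool() must run first so that bpSlices is 
populated before the volume check executes.

{code}
// Sketch of the post-patch startup ordering (paraphrased, not the exact diff):
try {
  // Populates each FsVolumeImpl's bpSlices map by scanning the disks.
  data.addBlockPool(nsInfo.getBlockPoolID(), getConf());
} catch (AddBlockPoolException e) {
  handleAddBlockPoolError(e);
}
// Now check() actually iterates the block pool slices and calls
// s.checkDirs() on each, so disk errors are detected.
checkDiskError();
{code}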






[jira] [Commented] (HDFS-14993) checkDiskError doesn't work during datanode startup

2020-01-28 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17025322#comment-17025322
 ] 

Ayush Saxena commented on HDFS-14993:
-

Committed to trunk.

Thanx [~hadoop_yangyun] for the contribution, [~sodonnell] and [~weichiu] for 
the reviews!!!







[jira] [Commented] (HDFS-14993) checkDiskError doesn't work during datanode startup

2020-01-28 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17025293#comment-17025293
 ] 

Hadoop QA commented on HDFS-14993:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
5s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}107m 25s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}173m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDeadNodeDetection |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | HDFS-14993 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12987813/HDFS-14993.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 165cb4bdc954 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3f01c48 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_232 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28716/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28716/testReport/ |
| Max. process+thread count | 2947 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28716/console |
| Powered by | Apache Yetus 0.8.0 |

[jira] [Commented] (HDFS-14993) checkDiskError doesn't work during datanode startup

2020-01-28 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17025133#comment-17025133
 ] 

Ayush Saxena commented on HDFS-14993:
-

Thanx [~hadoop_yangyun] for the patch. The latest patch LGTM.

Have triggered the build again; if all is intact, will push by EOD.







[jira] [Commented] (HDFS-14993) checkDiskError doesn't work during datanode startup

2019-12-25 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17003180#comment-17003180
 ] 

Hadoop QA commented on HDFS-14993:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}110m 23s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}172m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestFileChecksum |
|   | hadoop.hdfs.server.namenode.TestRedudantBlocks |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | HDFS-14993 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12987813/HDFS-14993.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1a94f599ce14 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / df622cf |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_232 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28568/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28568/testReport/ |
| Max. process+thread count | 3527 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| 

[jira] [Commented] (HDFS-14993) checkDiskError doesn't work during datanode startup

2019-12-24 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17003125#comment-17003125
 ] 

Ayush Saxena commented on HDFS-14993:
-

Build results aren't available now.
I have retriggered it. If everything still looks fine, we can push this.







[jira] [Commented] (HDFS-14993) checkDiskError doesn't work during datanode startup

2019-12-07 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16990415#comment-16990415
 ] 

Hadoop QA commented on HDFS-14993:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
3s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
20s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}123m 16s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}189m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestSafeModeWithStripedFile |
|   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
|   | hadoop.hdfs.server.namenode.TestFsck |
|   | hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-14993 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12987813/HDFS-14993.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9b763282e232 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 11cd5b6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28480/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 

[jira] [Commented] (HDFS-14993) checkDiskError doesn't work during datanode startup

2019-12-06 Thread Yang Yun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16990370#comment-16990370
 ] 

Yang Yun commented on HDFS-14993:
-

Thanks [~ayushtkn] and [~weichiu] for the review. Changed according to the 
comments.







[jira] [Commented] (HDFS-14993) checkDiskError doesn't work during datanode startup

2019-12-06 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16990139#comment-16990139
 ] 

Wei-Chiu Chuang commented on HDFS-14993:


nit:
I would really love to use slf4j to log messages rather than using 
System.out.println in the tests. 
Other than that lgtm
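
For what it's worth, the pattern being suggested looks like this; the class 
name is reused from the patch, and the log statement itself is illustrative:

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TestDataNodeVolumeFailure {
  private static final Logger LOG =
      LoggerFactory.getLogger(TestDataNodeVolumeFailure.class);

  private void logVolumeState(String volume) {
    // Parameterized slf4j logging instead of System.out.println: it is
    // captured by the test logging config and skipped when the level is off.
    LOG.info("checked volume {}", volume);
  }
}
{code}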







[jira] [Commented] (HDFS-14993) checkDiskError doesn't work during datanode startup

2019-11-27 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983264#comment-16983264
 ] 

Ayush Saxena commented on HDFS-14993:
-

Thanx [~hadoop_yangyun] for the patch. Seems fine.
Just to be sure, [~vinayakumarb], can you too have a look and confirm it 
doesn't bother your fix in HDFS-5185?







[jira] [Commented] (HDFS-14993) checkDiskError doesn't work during datanode startup

2019-11-22 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980361#comment-16980361
 ] 

Stephen O'Donnell commented on HDFS-14993:
--

Good point - the checks do seem to be different.

As the base directory and the BlockPoolSlice directory must be on the same 
disk, I wonder if there is a need to check both? How likely is it that there 
is a problem with the BlockPoolSlice, but not with the base directory? I 
guess it could happen that someone adjusts the permissions somehow, or some 
other strange disk corruption occurs.

The first check in makeInstance should catch most failed-disk scenarios, and 
the further checkDiskError() would possibly catch some specific corruptions 
or permission issues, but due to this bug we never really had that check 
before.

I think there is still a case for removing the checkDiskError() call 
completely, but at the same time the suggested patch LGTM: it is an extra 
check, and its overhead is not significant as it only happens at startup.







[jira] [Commented] (HDFS-14993) checkDiskError doesn't work during datanode startup

2019-11-21 Thread Yang Yun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979771#comment-16979771
 ] 

Yang Yun commented on HDFS-14993:
-

The check in makeInstance is for the StorageLocation, not for the volume; it 
only checks the base directory. If we set the directories of the 
BlockPoolSlice to read-only, the disk error can't be found.

For example, after removing execute permission on the finalized directory as 
below, the DN can be restarted without any error.

chmod u-x 
/tmp/hadoop-yang/dfs/data/current/BP-1775391891-127.0.1.1-1574316846324/current/finalized
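
For reference, BlockPoolSlice.checkDirs() — the method the volume check 
delegates to — is roughly the following; this is paraphrased from the Hadoop 
source, and the exact field and helper names may differ by version. It shows 
why the read-only finalized directory above goes unnoticed until bpSlices is 
populated:

{code}
// Paraphrased sketch of BlockPoolSlice#checkDirs (exact body varies by
// Hadoop version): probes the per-block-pool directories on this volume.
void checkDirs() throws DiskErrorException {
  DiskChecker.checkDir(finalizedDir);  // e.g. .../current/finalized
  DiskChecker.checkDir(tmpDir);        // .../tmp
  DiskChecker.checkDir(rbwDir);        // .../rbw
}
{code}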







[jira] [Commented] (HDFS-14993) checkDiskError doesn't work during datanode startup

2019-11-21 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979430#comment-16979430
 ] 

Stephen O'Donnell commented on HDFS-14993:
--

If I chown and chmod a directory so it is not writeable, then I get this stack 
on startup:

{code}
2019-11-21 15:52:44,383 INFO datanode.DataNode: registered UNIX signal handlers 
for [TERM, HUP, INT]
2019-11-21 15:52:44,529 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
2019-11-21 15:52:44,647 INFO checker.ThrottledAsyncChecker: Scheduling a check 
for [DISK]file:/tmp/hadoop-sodonnell/dfs/data
2019-11-21 15:52:44,654 INFO checker.ThrottledAsyncChecker: Scheduling a check 
for [DISK]file:/tmp/hadoop-sodonnell/dfs/data2
2019-11-21 15:52:44,680 WARN checker.StorageLocationChecker: Exception checking 
StorageLocation [DISK]file:/tmp/hadoop-sodonnell/dfs/data
ExitCodeException exitCode=1: chmod: Unable to change file mode on 
/private/tmp/hadoop-sodonnell/dfs/data: Operation not permitted

at org.apache.hadoop.util.Shell.runCommand(Shell.java:1008)
at org.apache.hadoop.util.Shell.run(Shell.java:901)
at 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1213)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:1307)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:1289)
at 
org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:867)
at 
org.apache.hadoop.fs.ChecksumFileSystem$1.apply(ChecksumFileSystem.java:550)
at 
org.apache.hadoop.fs.ChecksumFileSystem$FsOperation.run(ChecksumFileSystem.java:531)
at 
org.apache.hadoop.fs.ChecksumFileSystem.setPermission(ChecksumFileSystem.java:553)
at 
org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:234)
at 
org.apache.hadoop.util.DiskChecker.checkDirInternal(DiskChecker.java:141)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:116)
at 
org.apache.hadoop.hdfs.server.datanode.StorageLocation.check(StorageLocation.java:239)
at 
org.apache.hadoop.hdfs.server.datanode.StorageLocation.check(StorageLocation.java:52)
at 
org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker$1.call(ThrottledAsyncChecker.java:142)
at 
com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
at 
com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
at 
com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2019-11-21 15:52:44,682 ERROR datanode.DataNode: Exception in secureMain
org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed volumes 
- current valid volumes: 1, volumes configured: 2, volumes failed: 1, volume 
failures tolerated: 0
at 
org.apache.hadoop.hdfs.server.datanode.checker.StorageLocationChecker.check(StorageLocationChecker.java:233)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2836)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2749)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2793)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2937)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2961)
2019-11-21 15:52:44,683 INFO util.ExitUtil: Exiting with status 1: 
org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed volumes 
- current valid volumes: 1, volumes configured: 2, volumes failed: 1, volume 
failures tolerated: 0
{code}

So it looks like a disk check is scheduled already by 
org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2836)
 which finds its way into DiskChecker:

{code}
private static void checkDirInternal(LocalFileSystem localFS, Path dir,
    FsPermission expected)
    throws DiskErrorException, IOException {
  mkdirsWithExistsAndPermissionCheck(localFS, dir, expected);
  checkAccessByFileMethods(localFS.pathToFile(dir));
}
{code}

The call checkAccessByFileMethods then runs a read and write test.
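
For reference, checkAccessByFileMethods is roughly the following; this is a 
paraphrase, and the exact messages and any additional disk-I/O probe vary by 
Hadoop version:

{code}
// Paraphrased sketch of DiskChecker#checkAccessByFileMethods: fails the
// directory if it is missing or lacks read/write permission.
private static void checkAccessByFileMethods(File dir)
    throws DiskErrorException {
  if (!dir.isDirectory()) {
    throw new DiskErrorException("Not a directory: " + dir);
  }
  if (!FileUtil.canRead(dir)) {
    throw new DiskErrorException("Directory is not readable: " + dir);
  }
  if (!FileUtil.canWrite(dir)) {
    throw new DiskErrorException("Directory is not writable: " + dir);
  }
}
{code}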

Rather than moving the call to checkDiskError() after the addBlockPool() call, 
I wonder if we need it there at all?


[jira] [Commented] (HDFS-14993) checkDiskError doesn't work during datanode startup

2019-11-21 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979373#comment-16979373
 ] 

Stephen O'Donnell commented on HDFS-14993:
--

It does look like this has always been broken. I don't think HDFS-14333 
changed anything related to this checkDiskError() method.

The call "data.addBlockPool()" will initiate all the blockpool slices, and scan 
all the disks looking for the replicas. Any disks that throw an error there 
will be caught and handed here:

{code}
checkDiskError(); // This is the original, pre-patch code
try {
  data.addBlockPool(nsInfo.getBlockPoolID(), getConf());
} catch (AddBlockPoolException e) {
  handleAddBlockPoolError(e);
}
...
{code}

If we get there with no errors, then all the disks must at least be readable. 
Any errors are caught, and the affected volumes are failed via 
handleAddBlockPoolError().

What I am not sure about is whether the code that loads the block pool ever 
writes anything to disk; the disks may be read-only, and the checkDiskError() 
call would check for that. I am pretty sure I have seen DNs fail at startup 
because of too many read-only disk errors, so I wonder if it is getting 
checked somewhere. I will try running a quick test, setting a directory 
read-only.







[jira] [Commented] (HDFS-14993) checkDiskError doesn't work during datanode startup

2019-11-20 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978973#comment-16978973
 ] 

Wei-Chiu Chuang commented on HDFS-14993:


[~sodonnell] please take a look. Perhaps the logic was changed after HDFS-14333.







[jira] [Commented] (HDFS-14993) checkDiskError doesn't work during datanode startup

2019-11-19 Thread Yang Yun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978020#comment-16978020
 ] 

Yang Yun commented on HDFS-14993:
-

It looks like it hasn't worked from the start.
From the current code, it doesn't bother the fix by HDFS-5185: if mkdir fails 
or another disk error occurs, an AddBlockPoolException will be thrown; the 
datanode can handle it and continue to start up with the other available data 
dirs.







[jira] [Commented] (HDFS-14993) checkDiskError doesn't work during datanode startup

2019-11-19 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977542#comment-16977542
 ] 

Ayush Saxena commented on HDFS-14993:
-

Has this been like this from the start, or did it get broken recently? If so, 
can you link the related JIRA?







[jira] [Commented] (HDFS-14993) checkDiskError doesn't work during datanode startup

2019-11-19 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977456#comment-16977456
 ] 

Hadoop QA commented on HDFS-14993:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 29m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 58s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 207 unchanged - 0 fixed = 209 total (was 207) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}126m 54s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
56s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}226m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestFsck |
|   | hadoop.hdfs.server.namenode.TestLargeDirectoryDelete |
|   | hadoop.hdfs.server.namenode.TestAuditLogs |
|   | hadoop.fs.viewfs.TestViewFileSystemLinkMergeSlash |
|   | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
|   | hadoop.hdfs.TestFileChecksum |
|   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
|   | hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer |
|   | hadoop.hdfs.server.namenode.TestAclConfigFlag |
|   | hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot |
|   | hadoop.fs.viewfs.TestViewFsDefaultValue |
|   | hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.fs.viewfs.TestViewFileSystemLinkFallback |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
|   | hadoop.fs.viewfs.TestViewFileSystemHdfs |
|   | hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant |
|   | hadoop.hdfs.TestSafeModeWithStripedFile |
|   | hadoop.hdfs.TestFileChecksumCompositeCrc |
|   | hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy |
|   | 

[jira] [Commented] (HDFS-14993) checkDiskError doesn't work during datanode startup

2019-11-19 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977429#comment-16977429
 ] 

Hadoop QA commented on HDFS-14993:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
3s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 40s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 207 unchanged - 0 fixed = 209 total (was 207) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}112m 33s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}177m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestRedudantBlocks |
|   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.TestEncryptionZonesWithKMS |
|   | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-14993 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12986212/HDFS-14993.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c975aa425605 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3cecb2a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28335/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 

[jira] [Commented] (HDFS-14993) checkDiskError doesn't work during datanode startup

2019-11-19 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977267#comment-16977267
 ] 

Ayush Saxena commented on HDFS-14993:
-

Thanx [~hadoop_yangyun] for the report. Added you as an HDFS contributor, 
assigned the ticket, and submitted the patch to trigger Jenkins!!!



