[ https://issues.apache.org/jira/browse/HDFS-12832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16263596#comment-16263596 ]

Hadoop QA commented on HDFS-12832:
----------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 18m 47s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2.7 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 29s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  5s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 27s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  6s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  5s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 48s{color} | {color:green} branch-2.7 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 23s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 188 unchanged - 1 fixed = 188 total (was 189) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  0s{color} | {color:red} The patch has 60 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 32s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 50s{color} | {color:red} The patch generated 7 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}121m 59s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-hdfs:17 |
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeInitStorage |
|   | hadoop.hdfs.server.datanode.TestRefreshNamenodes |
|   | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
| Timed out junit tests | org.apache.hadoop.hdfs.server.datanode.TestHSync |
|   | org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | org.apache.hadoop.hdfs.server.datanode.TestDataNodeMetrics |
|   | org.apache.hadoop.hdfs.server.datanode.TestReadOnlySharedStorage |
|   | org.apache.hadoop.hdfs.TestPread |
|   | org.apache.hadoop.hdfs.TestDecommission |
|   | org.apache.hadoop.hdfs.TestDFSAddressConfig |
|   | org.apache.hadoop.hdfs.TestDFSUpgrade |
|   | org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
|   | org.apache.hadoop.hdfs.server.datanode.TestIncrementalBlockReports |
|   | org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache |
|   | org.apache.hadoop.hdfs.TestDFSRollback |
|   | org.apache.hadoop.hdfs.server.datanode.TestBlockScanner |
|   | org.apache.hadoop.hdfs.server.datanode.TestCachingStrategy |
|   | org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | org.apache.hadoop.hdfs.TestAbandonBlock |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:67e87c9 |
| JIRA Issue | HDFS-12832 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12898948/HDFS-12832-branch-2.7.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 48182a918b6e 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2.7 / 0da13b9 |
| maven | version: Apache Maven 3.0.5 |
| Default Java | 1.7.0_151 |
| findbugs | v3.0.0 |
| whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/22172/artifact/out/whitespace-eol.txt |
| Unreaped Processes Log | https://builds.apache.org/job/PreCommit-HDFS-Build/22172/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-reaper.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/22172/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/22172/testReport/ |
| asflicense | https://builds.apache.org/job/PreCommit-HDFS-Build/22172/artifact/out/patch-asflicense-problems.txt |
| Max. process+thread count | 3747 (vs. ulimit of 5000) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/22172/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> INode.getFullPathName may throw ArrayIndexOutOfBoundsException lead to 
> NameNode exit
> ------------------------------------------------------------------------------------
>
>                 Key: HDFS-12832
>                 URL: https://issues.apache.org/jira/browse/HDFS-12832
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.7.4, 3.0.0-beta1
>            Reporter: DENG FEI
>            Assignee: Konstantin Shvachko
>            Priority: Critical
>              Labels: release-blocker
>         Attachments: HDFS-12832-branch-2.002.patch, 
> HDFS-12832-branch-2.7.002.patch, HDFS-12832-trunk-001.patch, 
> HDFS-12832.002.patch, exception.log
>
>
> {code:title=INode.java|borderStyle=solid}
> public String getFullPathName() {
>     // Get the full path name of this inode.
>     if (isRoot()) {
>       return Path.SEPARATOR;
>     }
>     // compute size of needed bytes for the path
>     int idx = 0;
>     for (INode inode = this; inode != null; inode = inode.getParent()) {
>       // add component + delimiter (if not tail component)
>       idx += inode.getLocalNameBytes().length + (inode != this ? 1 : 0);
>     }
>     byte[] path = new byte[idx];
>     for (INode inode = this; inode != null; inode = inode.getParent()) {
>       if (inode != this) {
>         path[--idx] = Path.SEPARATOR_CHAR;
>       }
>       byte[] name = inode.getLocalNameBytes();
>       idx -= name.length;
>       System.arraycopy(name, 0, path, idx, name.length);
>     }
>     return DFSUtil.bytes2String(path);
>   }
> {code}
> We found an ArrayIndexOutOfBoundsException at 
> _{color:#707070}System.arraycopy(name, 0, path, idx, name.length){color}_ 
> while the ReplicaMonitor was running, and the NameNode exited.
> It seems the two loops are not synchronized: the inode's ancestor chain, and 
> therefore the computed path length, can change between them.
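> One way to close this race is to snapshot the ancestor chain in a single 
> upward walk, so the length computation and the copy phase always see the same 
> components. The following is only a sketch of that idea, assuming the same 
> INode/Path/DFSUtil APIs shown above; the attached patches may take a 
> different approach.
> {code:title=Single-pass sketch|borderStyle=solid}
> public String getFullPathName() {
>     if (isRoot()) {
>       return Path.SEPARATOR;
>     }
>     // Single pass: snapshot each component while walking to the root, so a
>     // concurrent rename/delete cannot make the size and copy phases disagree.
>     java.util.List<byte[]> components = new java.util.ArrayList<>();
>     int length = 0;
>     for (INode inode = this; inode != null; inode = inode.getParent()) {
>       byte[] name = inode.getLocalNameBytes();
>       components.add(name);
>       length += name.length + (inode != this ? 1 : 0);
>     }
>     // Build the path from the snapshot only; later tree changes are irrelevant.
>     byte[] path = new byte[length];
>     int idx = length;
>     for (int i = 0; i < components.size(); i++) {
>       byte[] name = components.get(i);
>       if (i > 0) {
>         path[--idx] = Path.SEPARATOR_CHAR;
>       }
>       idx -= name.length;
>       System.arraycopy(name, 0, path, idx, name.length);
>     }
>     return DFSUtil.bytes2String(path);
>   }
> {code}
> This only illustrates how the mismatch arises; the actual fix for branch-2.7 
> and trunk is in the attached patches.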


