[ https://issues.apache.org/jira/browse/HDFS-12685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16238690#comment-16238690 ]

Hadoop QA commented on HDFS-12685:
----------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  4m 13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-9806 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 31s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 59s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 40s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  9s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 56s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 59s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 53s{color} | {color:green} HDFS-9806 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m  1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  9m 35s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}123m 44s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 24s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}179m 26s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-hdfs:1 |
| Failed junit tests | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ca8ddc6 |
| JIRA Issue | HDFS-12685 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12895980/HDFS-12685-HDFS-9806.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 05683fdcd17a 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-9806 / 365edb0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| Unreaped Processes Log | https://builds.apache.org/job/PreCommit-HDFS-Build/21954/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-reaper.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21954/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21954/testReport/ |
| Max. process+thread count | 2901 (vs. ulimit of 5000) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21954/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [READ] FsVolumeImpl exception when scanning Provided storage volume
> -------------------------------------------------------------------
>
>                 Key: HDFS-12685
>                 URL: https://issues.apache.org/jira/browse/HDFS-12685
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Ewan Higgs
>            Assignee: Virajith Jalaparti
>            Priority: Major
>         Attachments: HDFS-12685-HDFS-9806.001.patch, 
> HDFS-12685-HDFS-9806.002.patch
>
>
> I left a Datanode running overnight and found this in the logs in the morning:
> {code}
> 2017-10-18 23:51:54,391 ERROR datanode.DirectoryScanner: Error compiling report for the volume, StorageId: DS-e75ebc3c-6b12-424e-875a-a4ae1a4dcc29
> java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: URI scheme is not "file"
>         at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>         at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>         at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.getDiskReport(DirectoryScanner.java:544)
>         at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.scan(DirectoryScanner.java:393)
>         at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile(DirectoryScanner.java:375)
>         at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.run(DirectoryScanner.java:320)
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>         at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.IllegalArgumentException: URI scheme is not "file"
>         at java.io.File.<init>(File.java:421)
>         at org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi$ScanInfo.<init>(FsVolumeSpi.java:319)
>         at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ProvidedVolumeImpl$ProvidedBlockPoolSlice.compileReport(ProvidedVolumeImpl.java:155)
>         at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ProvidedVolumeImpl.compileReport(ProvidedVolumeImpl.java:493)
>         at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner$ReportCompiler.call(DirectoryScanner.java:620)
>         at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner$ReportCompiler.call(DirectoryScanner.java:581)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         ... 3 more
> {code}
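> The root cause is straightforward to reproduce in isolation. A minimal sketch (the {{s3a}} bucket name below is hypothetical) showing that {{java.io.File}} only accepts absolute {{file:}} URIs:
> {code}
> import java.io.File;
> import java.net.URI;
>
> public class FileUriRepro {
>   public static void main(String[] args) {
>     // A local file: URI converts cleanly.
>     File ok = new File(URI.create("file:///tmp/block_12345"));
>     System.out.println(ok.getAbsolutePath()); // /tmp/block_12345
>
>     // Any other scheme throws the exception seen in the log above:
>     // java.lang.IllegalArgumentException: URI scheme is not "file"
>     File bad = new File(URI.create("s3a://my-bucket/block_12345"));
>   }
> }
> {code}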
> The code in question tries to make a {{File}} from the volume's base URI (in this case an {{s3a}} URI, but anything in Provided storage would likely break here):
> {code}
>     public ScanInfo(long blockId, File blockFile, File metaFile,
>         FsVolumeSpi vol, FileRegion fileRegion, long length) {
>       this.blockId = blockId;
>       String condensedVolPath =
>           (vol == null || vol.getBaseURI() == null) ? null :
>             getCondensedPath(new File(vol.getBaseURI()).getAbsolutePath()); // <-------
>       this.blockSuffix = blockFile == null ? null :
>         getSuffix(blockFile, condensedVolPath);
>       this.blockLength = length;
>       if (metaFile == null) {
>         this.metaSuffix = null;
>       } else if (blockFile == null) {
>         this.metaSuffix = getSuffix(metaFile, condensedVolPath);
>       } else {
>         this.metaSuffix = getSuffix(metaFile,
>             condensedVolPath + blockSuffix);
>       }
>       this.volume = vol;
>       this.fileRegion = fileRegion;
>     }
> {code}
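> A minimal sketch of one possible guard (an illustration against the constructor above, not the attached patch): only derive the condensed local path when the base URI actually uses the {{file}} scheme, falling back to {{null}} for Provided volumes:
> {code}
>       // Hypothetical guard: Provided volumes (s3a, etc.) have no local
>       // path to condense, so skip the File conversion for non-file URIs.
>       URI baseURI = (vol == null) ? null : vol.getBaseURI();
>       String condensedVolPath =
>           (baseURI == null || !"file".equalsIgnoreCase(baseURI.getScheme()))
>               ? null
>               : getCondensedPath(new File(baseURI).getAbsolutePath());
> {code}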



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
