[ https://issues.apache.org/jira/browse/HDFS-12685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273149#comment-16273149 ]
Virajith Jalaparti commented on HDFS-12685:
-------------------------------------------

Thanks for taking a look [~ehiggs]. I committed v4 to the feature branch.

> [READ] FsVolumeImpl exception when scanning Provided storage volume
> -------------------------------------------------------------------
>
>                 Key: HDFS-12685
>                 URL: https://issues.apache.org/jira/browse/HDFS-12685
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Ewan Higgs
>            Assignee: Virajith Jalaparti
>         Attachments: HDFS-12685-HDFS-9806.001.patch, HDFS-12685-HDFS-9806.002.patch, HDFS-12685-HDFS-9806.003.patch, HDFS-12685-HDFS-9806.004.patch
>
> I left a Datanode running overnight and found this in the logs in the morning:
> {code}
> 2017-10-18 23:51:54,391 ERROR datanode.DirectoryScanner: Error compiling report for the volume, StorageId: DS-e75ebc3c-6b12-424e-875a-a4ae1a4dcc29
> java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: URI scheme is not "file"
>         at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>         at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>         at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.getDiskReport(DirectoryScanner.java:544)
>         at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.scan(DirectoryScanner.java:393)
>         at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile(DirectoryScanner.java:375)
>         at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.run(DirectoryScanner.java:320)
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>         at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.IllegalArgumentException: URI scheme is not "file"
>         at java.io.File.<init>(File.java:421)
>         at org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi$ScanInfo.<init>(FsVolumeSpi.java:319)
>         at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ProvidedVolumeImpl$ProvidedBlockPoolSlice.compileReport(ProvidedVolumeImpl.java:155)
>         at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ProvidedVolumeImpl.compileReport(ProvidedVolumeImpl.java:493)
>         at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner$ReportCompiler.call(DirectoryScanner.java:620)
>         at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner$ReportCompiler.call(DirectoryScanner.java:581)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         ... 3 more
> {code}
> The code in question tries to make a File from the URI (in this case {{s3a}}, but anything in Provided storage would likely break here):
> {code}
>   public ScanInfo(long blockId, File blockFile, File metaFile,
>       FsVolumeSpi vol, FileRegion fileRegion, long length) {
>     this.blockId = blockId;
>     String condensedVolPath =
>         (vol == null || vol.getBaseURI() == null) ? null :
>           getCondensedPath(new File(vol.getBaseURI()).getAbsolutePath());  // <-------
>     this.blockSuffix = blockFile == null ? null :
>         getSuffix(blockFile, condensedVolPath);
>     this.blockLength = length;
>     if (metaFile == null) {
>       this.metaSuffix = null;
>     } else if (blockFile == null) {
>       this.metaSuffix = getSuffix(metaFile, condensedVolPath);
>     } else {
>       this.metaSuffix = getSuffix(metaFile,
>           condensedVolPath + blockSuffix);
>     }
>     this.volume = vol;
>     this.fileRegion = fileRegion;
>   }
> {code}
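For context, here is a minimal sketch of the kind of guard that avoids the {{IllegalArgumentException}}: only convert the base URI to a local path when its scheme is actually {{file}}. This is an illustration only (the helper name {{condensedVolPath}} below is made up for the example); the actual change committed in v4 may take a different approach.

{code}
import java.io.File;
import java.net.URI;

public class ScanInfoSketch {

  /**
   * Sketch: return a local base path only for file:// volumes, null otherwise.
   * new File(URI) throws IllegalArgumentException unless the scheme is "file",
   * which is exactly what the DirectoryScanner hit for an s3a-backed
   * PROVIDED volume.
   */
  static String condensedVolPath(URI baseURI) {
    if (baseURI == null || !"file".equalsIgnoreCase(baseURI.getScheme())) {
      return null; // e.g. s3a:// volumes have no local base path
    }
    return new File(baseURI).getAbsolutePath();
  }

  public static void main(String[] args) {
    System.out.println(condensedVolPath(URI.create("file:///data/dn1")));   // /data/dn1
    System.out.println(condensedVolPath(URI.create("s3a://bucket/blocks"))); // null
  }
}
{code}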