How are you submitting the job? How are your cluster configuration files deployed (i.e. are you using CM)?
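One thing worth checking, since the trace starts in FileSystem.getFileSystemClass: when a jar-with-dependencies is assembled, the per-jar META-INF/services/org.apache.hadoop.fs.FileSystem files can overwrite one another, so the hadoop-common entry for LocalFileSystem (scheme "file") gets lost. If the fat jar can be built with the maven-shade-plugin instead, the ServicesResourceTransformer merges those service files. A sketch of the relevant pom.xml fragment (the surrounding build section is assumed):

```xml
<!-- Sketch only: merge META-INF/services files when shading, so the
     hadoop-common entry for LocalFileSystem (scheme "file") survives. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <transformers>
          <!-- Concatenates service entries from all dependency jars
               instead of keeping only the last one copied in. -->
          <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Alternatively, forcing the implementations in the job Configuration (e.g. conf.set("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class.getName())) works around the merged-services problem without touching the build, at the cost of hard-coding the mapping.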
On Wed, Nov 5, 2014 at 8:50 AM, Tim Robertson <[email protected]> wrote:
> Hi all,
>
> I'm seeing the following
> java.io.IOException: No FileSystem for scheme: file
>     at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2584)
>     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
>     at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
>     ...
>     at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:778)
>     at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addHBaseDependencyJars(TableMapReduceUtil.java:707)
>     at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(TableMapReduceUtil.java:752)
>     at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:192)
>     ...
>
> I have the hadoop hdfs jar on the CP, and I am submitting using all
> mapreduce2 jars (i.e. YARN). I build a jar with dependencies (and also
> provide the hdfs jar on the CP directly), and the dependency tree is:
> https://gist.github.com/timrobertson100/027f97d038df53cc836f
>
> It has all worked before, but I'm doing a migration to YARN and 0.98 (CDH
> 5.2.0) from 0.94 and MR1, so obviously I have messed up the CP somehow.
>
> Has anyone come across this please?
>
> Thanks,
> Tim

--
Sean
