Hi,

I'm migrating from HBase 0.19 to 0.20 and hitting an error in the
TableInputFormat class. Below is how I'm setting up the job, followed by
the error message I'm getting.

Does anybody have a clue about what may be happening? This used to work on
HBase 0.19.

Lucas


this.configuration.set(TableInputFormat.INPUT_TABLE, args[0]);
this.configuration.set(TableInputFormat.SCAN, "date");
this.configuration.set("index.name", args[1]);
this.configuration.set("hbase.master", args[2]);
this.configuration.set("index.replication.level", args[3]);

final Job jobConf = new Job(this.configuration);
jobConf.setJarByClass(Indexer.class);
jobConf.setJobName("NInvestNewsIndexer");

FileInputFormat.setInputPaths(jobConf, new Path(args[0]));

jobConf.setInputFormatClass(TableInputFormat.class);
jobConf.setOutputFormatClass(NullOutputFormat.class);

jobConf.setOutputKeyClass(Text.class);
jobConf.setOutputValueClass(Text.class);

jobConf.setMapperClass(MapChangedTableRowsIntoUrls.class);
jobConf.setReducerClass(ReduceUrlsToLuceneIndexIntoKatta.class);




09/08/03 18:19:19 ERROR mapreduce.TableInputFormat: An error occurred.
java.io.EOFException
        at java.io.DataInputStream.readFully(DataInputStream.java:180)
        at org.apache.hadoop.hbase.util.Bytes.readByteArray(Bytes.java:135)
        at org.apache.hadoop.hbase.client.Scan.readFields(Scan.java:493)
        at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.convertStringToScan(TableMapReduceUtil.java:94)
        at org.apache.hadoop.hbase.mapreduce.TableInputFormat.setConf(TableInputFormat.java:79)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
        at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:882)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:779)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:447)
        at com.nash.ninvest.index.indexer.Indexer.run(Unknown Source)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at com.nash.ninvest.index.indexer.Indexer.main(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Exception in thread "main" java.lang.NullPointerException
        at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:280)
        at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:885)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:779)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:447)
        at com.nash.ninvest.index.indexer.Indexer.run(Unknown Source)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at com.nash.ninvest.index.indexer.Indexer.main(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
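One more data point, in case it helps diagnose. The top of the trace (Scan.readFields called from TableMapReduceUtil.convertStringToScan) makes me suspect that in 0.20 the SCAN property is Base64-decoded and deserialized into a Scan object, so passing the literal string "date" would leave too few bytes to read. Here is a small stdlib-only sketch of that theory (the class name and 16-byte buffer are just mine for illustration; HBase uses its own Base64 class, which I assume decodes the same way):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.util.Base64;

public class ScanPropertyDemo {
    public static void main(String[] args) throws Exception {
        // "date" happens to be valid Base64, but it decodes to only 3 bytes.
        byte[] decoded = Base64.getDecoder().decode("date");
        System.out.println("decoded length = " + decoded.length);

        // Deserializing a Scan needs far more input than that, so the
        // read runs off the end of the stream -- the same EOFException
        // that readFully() throws in the trace above.
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(decoded));
        try {
            in.readFully(new byte[16]);
        } catch (EOFException e) {
            System.out.println("EOFException, as in the job submission");
        }
    }
}
```

If that theory is right, I'm guessing the property needs a serialized Scan (e.g. via TableMapReduceUtil.convertScanToString, the counterpart of the convertStringToScan in the trace) rather than a column name -- can anyone confirm?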
