Hi, I am running HBase 0.94.20 on Hadoop 2.2.0.

I am using MultiTableOutputFormat to write the processed output to two
different tables in HBase.

Here's the code snippet:

private ImmutableBytesWritable tab_cr = new ImmutableBytesWritable(
        Bytes.toBytes("i1"));
private ImmutableBytesWritable tab_cvs = new ImmutableBytesWritable(
        Bytes.toBytes("i2"));

@Override
public void map(ImmutableBytesWritable row, final Result value,
        final Context context) throws IOException, InterruptedException {

    // ... processing elided ...

    Put pcvs = new Put(entry.getKey().getBytes());
    pcvs.add("cf".getBytes(), "type".getBytes(), column.getBytes());

    Put put = new Put(value.getRow());
    put.add("Entity".getBytes(), "json".getBytes(),
            entry.getValue().getBytes());

    context.write(tab_cr, put);   // table i1
    context.write(tab_cvs, pcvs); // table i2
}

job.setJarByClass(EntitySearcherMR.class);
job.setMapperClass(EntitySearcherMapper.class);
job.setOutputFormatClass(MultiTableOutputFormat.class);

Scan scan = new Scan();
scan.setCacheBlocks(false);

TableMapReduceUtil.initTableMapperJob(otherArgs[0], scan,
        EntitySearcherMapper.class, ImmutableBytesWritable.class,
        Put.class, job); // otherArgs[0] = i1
TableMapReduceUtil.initTableReducerJob(otherArgs[0], null, job);
job.setNumReduceTasks(0);
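For reference, this is roughly how the two tables are laid out: "Entity" on
i1 and "cf" on i2. Here is a sketch of the equivalent HBaseAdmin-based
creation (the CreateTables class name is just for illustration, and I am
assuming this matches how the tables were actually created):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class CreateTables {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);

        // i1 holds the "Entity" family written via tab_cr
        HTableDescriptor t1 = new HTableDescriptor("i1");
        t1.addFamily(new HColumnDescriptor("Entity"));
        admin.createTable(t1);

        // i2 holds the "cf" family written via tab_cvs
        HTableDescriptor t2 = new HTableDescriptor("i2");
        t2.addFamily(new HColumnDescriptor("cf"));
        admin.createTable(t2);

        admin.close();
    }
}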

The MapReduce job fails with a NoSuchColumnFamilyException for column
family "cf" on table i1. I am writing data to two different column
families, one in each table; "cf" belongs to table i2.
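To double-check the schema, here is a small standalone sketch (the
FamilyCheck class name is illustrative; it assumes the i1/i2 table names
from above) that prints whether each table actually has the "cf" family:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class FamilyCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        // print whether each table has the "cf" family
        for (String table : new String[] { "i1", "i2" }) {
            HTableDescriptor desc =
                    admin.getTableDescriptor(Bytes.toBytes(table));
            System.out.println(table + " has family cf: "
                    + desc.hasFamily(Bytes.toBytes("cf")));
        }
        admin.close();
    }
}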
Do the column families have to be present in both tables? Is there
anything I am missing? Can someone point me in the right direction?

Thanks,
Yeshwanth
