hi, i am running HBase 0.94.20 on Hadoop 2.2.0.
i am using MultiTableOutputFormat
for writing processed output to two different tables in hbase.
here's the code snippet:
private ImmutableBytesWritable tab_cr = new ImmutableBytesWritable(
        Bytes.toBytes(i1));
private ImmutableBytesWritable ...

TableMapReduceUtil.initTableMapperJob(otherArgs[0], scan,
        EntitySearcherMapper.class, ImmutableBytesWritable.class, Put.class,
        job); // otherArgs[0] = i1
You're initializing with table 'i1'.
Please remove the above call and try again.
Cheers
On Tue, Aug 26, 2014 at 9:18 AM, yeshwanth kumar
hi ted,
how can we initialise the mapper if i comment out those lines?
On Tue, Aug 26, 2014 at 10:08 PM, Ted Yu yuzhih...@gmail.com wrote:
TableMapReduceUtil.initTableMapperJob(otherArgs[0], scan,
        EntitySearcherMapper.class, ImmutableBytesWritable.class, Put.class,
        job); // otherArgs[0] = i1
Please take a look at WALPlayer.java in hbase, where you can find an example of
how MultiTableOutputFormat is used.
Cheers
On Tue, Aug 26, 2014 at 10:04 AM, yeshwanth kumar yeshwant...@gmail.com
wrote:
hi ted,
how can we initialise the mapper if i comment out those lines?
On Tue, Aug 26, 2014
hi ted,
i need to process the data in table i1, and then i need to write the
results to tables i1 and i2.
so the input for the mapper in my mapreduce job is from the hbase table i1,
whereas in WALPlayer the input is HLogInputFormat.
if i remove the statement as you said and specify the inputformat
as ...
You don't need to initialize the tables.
You just need to specify the output format as the MultiTableOutputFormat
class.
Something like this:
job.setOutputFormatClass(MultiTableOutputFormat.class);
Because if you see the code for MultiTableOutputFormat, it creates the
table connections on the fly and ...
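To make the routing concrete, here is a minimal sketch of a mapper that emits Puts keyed by destination table, which is how MultiTableOutputFormat decides where each write goes. The table names i1/i2 and the mapper class name come from this thread; the column family, qualifier, and value are hypothetical placeholders:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;

public class EntitySearcherMapper extends TableMapper<ImmutableBytesWritable, Put> {
    // With MultiTableOutputFormat, the output key names the destination
    // table; the row key lives inside the Put itself.
    private final ImmutableBytesWritable tabI1 =
            new ImmutableBytesWritable(Bytes.toBytes("i1"));
    private final ImmutableBytesWritable tabI2 =
            new ImmutableBytesWritable(Bytes.toBytes("i2"));

    @Override
    protected void map(ImmutableBytesWritable row, Result value, Context context)
            throws IOException, InterruptedException {
        Put p = new Put(row.get());
        // Hypothetical column: replace with the real processed output.
        p.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("processed"));
        context.write(tabI1, p); // written to table i1
        context.write(tabI2, p); // written to table i2
    }
}
```

Note that this differs from a single-table job, where the output key is usually the row key; here the same Put can be directed at either table simply by changing the key.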
hi shahab,
i tried it that way, by specifying the outputformat as MultiTableOutputFormat,
and it is throwing:
java.io.IOException: No input paths specified in job
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:193)
        at ...
Where are you setting the input data/path/format for the job? I don't see
that in the code below that you just pasted...
*Job job = new Job(config, ...*
that mapreduce job reads data from an hbase table,
it doesn't take any explicit input data/file/...
-yeshwanth
On Wed, Aug 27, 2014 at 12:44 AM, Shahab Yunus shahab.yu...@gmail.com
wrote:
Where are you setting the input data/path/format for the job? I don't see
that in the code below that you just pasted...
So are you calling or making the following call with the right
parameters, specifying the table to read from?
TableMapReduceUtil.initTableMapperJob
Can you show your whole job setup/driver code?
Regards,
Shahab
On Tue, Aug 26, 2014 at 3:18 PM, yeshwanth kumar yeshwant...@gmail.com
wrote:
i was doing that earlier:
TableMapReduceUtil.initTableMapperJob(otherArgs[0], scan,
        EntitySearcherMapper.class, ImmutableBytesWritable.class, Put.class,
        job); // otherArgs[0] = i1
TableMapReduceUtil.initTableReducerJob(otherArgs[0], null, job);
ted suggested to remove them, if you see the first message
Ted suggested to remove the following call:
TableMapReduceUtil.initTableReducerJob(otherArgs[0], null, job);
You are doing 2 things in your earlier code snippet:
*TableMapReduceUtil.initTableMapperJob(otherArgs[0], scan,
        EntitySearcherMapper.class, ImmutableBytesWritable.class,
        Put.class, job);*
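Putting the thread's resolution together, the driver setup might look like the sketch below: keep initTableMapperJob, because it wires the scan over table i1 as the job's input (dropping it leaves the default FileInputFormat with no paths, hence the "No input paths specified" error), set MultiTableOutputFormat as the output format, and remove only initTableReducerJob. The class names follow the thread; the job name and map-only setting are assumptions:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.MultiTableOutputFormat;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;

public class EntitySearcherJob {
    public static void main(String[] args) throws Exception {
        Configuration config = HBaseConfiguration.create();
        // Job(Configuration, String) is the 0.94/Hadoop-2.2-era constructor.
        Job job = new Job(config, "entity-searcher"); // job name is illustrative
        job.setJarByClass(EntitySearcherJob.class);

        Scan scan = new Scan();
        // Input: scan over table i1 (args[0]). This call supplies the job's
        // input format, so removing it is what triggered
        // "java.io.IOException: No input paths specified in job".
        TableMapReduceUtil.initTableMapperJob(args[0], scan,
                EntitySearcherMapper.class, ImmutableBytesWritable.class,
                Put.class, job);

        // Output: one format serves both tables; the mapper's output key
        // (an ImmutableBytesWritable holding the table name) selects the table.
        job.setOutputFormatClass(MultiTableOutputFormat.class);
        job.setNumReduceTasks(0); // assumed map-only; no initTableReducerJob

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

The key point is that initTableMapperJob and initTableReducerJob are independent: the first configures the input side, the second pins the output to a single table, so only the second conflicts with MultiTableOutputFormat.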
my bad,
that was the issue,
thanks for helping me out
-yeshwanth
On Wed, Aug 27, 2014 at 1:12 AM, Shahab Yunus shahab.yu...@gmail.com
wrote:
Ted suggested to remove the following call
TableMapReduceUtil.initTableReducerJob(otherArgs[0],
null, job);
You are doing 2 things in your earlier ...