In case anyone is interested, I switched to TableOutputFormat to unblock
myself:

job.setOutputFormatClass(TableOutputFormat.class);
job.getConfiguration().set(TableOutputFormat.OUTPUT_TABLE, myTable);
job.setOutputKeyClass(ImmutableBytesWritable.class);
job.setOutputValueClass(Writable.class);
job.setNumReduceTasks(0);
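
For reference, here is a rough sketch of the kind of mapper that feeds
TableOutputFormat in such a map-only job. The class name, the input key/value
types, the tab-separated parsing, and the column family/qualifier "cf"/"col"
are placeholders for illustration, not my actual job:

import java.io.IOException;

import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Sketch only: turns each input text line into a Put that TableOutputFormat
// writes directly to the table (no reduce phase).
public class HdfsToHBaseMapper
    extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    // Placeholder parsing: "rowkey<TAB>value" per line.
    String[] fields = value.toString().split("\t", 2);
    byte[] row = Bytes.toBytes(fields[0]);

    Put put = new Put(row);
    // Placeholder column family/qualifier; Put.add() is the HBase 0.94 API.
    put.add(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes(fields[1]));

    // With setNumReduceTasks(0), map output goes straight to TableOutputFormat.
    context.write(new ImmutableBytesWritable(row), put);
  }
}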

Chen


On Wed, Jun 18, 2014 at 12:40 AM, Ted Yu <yuzhih...@gmail.com> wrote:

> Have you asked this question on the MapR mailing list?
>
> Cheers
>
> On Jun 18, 2014, at 12:14 AM, Chen Wang <chen.apache.s...@gmail.com>
> wrote:
>
> > I actually tried that already, but it didn't work. I added
> >
> > <dependency>
> >   <groupId>org.apache.hbase</groupId>
> >   <artifactId>hbase</artifactId>
> >   <version>0.94.9-mapr-1308</version>
> > </dependency>
> >
> > and removed the original hbase dependency.
> >
> >
> > On Wed, Jun 18, 2014 at 12:05 AM, Rabbit's Foot <
> rabbitsf...@is-land.com.tw>
> > wrote:
> >
> >> Maybe you can refer to the Maven Repository and Artifacts for MapR
> >> <http://doc.mapr.com/display/MapR/Maven+Repository+and+Artifacts+for+MapR>
> >> to set up your pom.
> >>
> >>
> >> 2014-06-18 13:33 GMT+08:00 Chen Wang <chen.apache.s...@gmail.com>:
> >>
> >>> Is this error indicating that I basically need an HBase MapR client?
> >>> Currently my pom looks like this:
> >>>
> >>> <dependency>
> >>>   <groupId>org.apache.hadoop</groupId>
> >>>   <artifactId>hadoop-client</artifactId>
> >>>   <version>1.0.3</version>
> >>> </dependency>
> >>> <dependency>
> >>>   <groupId>org.apache.hadoop</groupId>
> >>>   <artifactId>hadoop-core</artifactId>
> >>>   <version>1.2.1</version>
> >>> </dependency>
> >>> <dependency>
> >>>   <groupId>org.apache.httpcomponents</groupId>
> >>>   <artifactId>httpclient</artifactId>
> >>>   <version>4.1.1</version>
> >>> </dependency>
> >>> <dependency>
> >>>   <groupId>com.google.code.gson</groupId>
> >>>   <artifactId>gson</artifactId>
> >>>   <version>2.2.4</version>
> >>> </dependency>
> >>> <dependency>
> >>>   <groupId>org.apache.hbase</groupId>
> >>>   <artifactId>hbase</artifactId>
> >>>   <version>0.94.6.1</version>
> >>> </dependency>
> >>>
> >>>
> >>> On Tue, Jun 17, 2014 at 10:04 PM, Chen Wang <
> chen.apache.s...@gmail.com>
> >>> wrote:
> >>>
> >>>> Yes, the Hadoop cluster is using maprfs, so the HDFS files are in
> >>>> maprfs:/ format:
> >>>>
> >>>>
> >>>> 2014-06-17 21:48:58 WARN: org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles - Skipping non-directory maprfs:/user/chen/hbase/_SUCCESS
> >>>> 2014-06-17 21:48:58 INFO: org.apache.hadoop.hbase.io.hfile.CacheConfig - Allocating LruBlockCache with maximum size 239.6m
> >>>> 2014-06-17 21:48:58 INFO: org.apache.hadoop.hbase.util.ChecksumType - Checksum using org.apache.hadoop.util.PureJavaCrc32
> >>>> 2014-06-17 21:48:58 INFO: org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles - Trying to load hfile=maprfs:/user/chen/hbase/m/cdd83ff3007b4955869d69c82a9f5b91 first=row1 last=row9
> >>>>
> >>>> Chen
> >>>>
> >>>> On Tue, Jun 17, 2014 at 9:59 PM, Ted Yu <yuzhih...@gmail.com> wrote:
> >>>>
> >>>>> The scheme says maprfs.
> >>>>> Do you happen to use the MapR product?
> >>>>>
> >>>>> Cheers
> >>>>>
> >>>>> On Jun 17, 2014, at 9:53 PM, Chen Wang <chen.apache.s...@gmail.com>
> >>>>> wrote:
> >>>>>
> >>>>>> Folks,
> >>>>>> I am trying to bulk load the HDFS file into HBase with
> >>>>>>
> >>>>>> LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
> >>>>>> loader.doBulkLoad(new Path(args[1]), hTable);
> >>>>>>
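> >>>>>> (where conf and hTable are set up roughly like this; the table name
> >>>>>> below is just a placeholder:)
> >>>>>>
> >>>>>> // Picks up hbase-site.xml from the classpath.
> >>>>>> Configuration conf = HBaseConfiguration.create();
> >>>>>> // Handle to the pre-created target table.
> >>>>>> HTable hTable = new HTable(conf, "my_table");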
> >>>>>>
> >>>>>> However, I receive an exception: java.io.IOException:
> >>>>>> java.io.IOException: No FileSystem for scheme: maprfs
> >>>>>>
> >>>>>> Exception in thread "main" java.io.IOException: BulkLoad encountered an unrecoverable problem
> >>>>>>     at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.bulkLoadPhase(LoadIncrementalHFiles.java:331)
> >>>>>>     at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:261)
> >>>>>>     at com.walmartlabs.targeting.mapred.Driver.main(Driver.java:81)
> >>>>>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>>>>>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >>>>>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >>>>>>     at java.lang.reflect.Method.invoke(Method.java:597)
> >>>>>>     at org.apache.hadoop.util.RunJar.main(RunJar.java:197)
> >>>>>> Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=10, exceptions:
> >>>>>> Tue Jun 17 21:48:58 PDT 2014, org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$3@482d59a3, java.io.IOException: java.io.IOException: No FileSystem for scheme: maprfs
> >>>>>>
> >>>>>>
> >>>>>> What is the reason for this exception? I did some googling and tried
> >>>>>> to add some config to the HBase configuration:
> >>>>>>
> >>>>>> hbaseConf.set("fs.hdfs.impl",
> >>>>>>
> >>>>>> org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
> >>>>>>
> >>>>>> hbaseConf.set("fs.file.impl",
> >>>>>>
> >>>>>> org.apache.hadoop.fs.LocalFileSystem.class.getName());
> >>>>>>
> >>>>>>
> >>>>>> But it does not have any effect.
> >>>>>>
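> >>>>>> (I am also wondering whether the maprfs scheme itself needs to be
> >>>>>> mapped to MapR's FileSystem class in the same way, assuming the MapR
> >>>>>> client jars are on the classpath; the class name below is my guess:)
> >>>>>>
> >>>>>> // Guess: MapR's FileSystem implementation, as mapped in MapR's own
> >>>>>> // core-site.xml; requires the maprfs client jar on the classpath.
> >>>>>> hbaseConf.set("fs.maprfs.impl", "com.mapr.fs.MapRFileSystem");
> >>>>>>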
> >>>>>> Any idea?
> >>>>>>
> >>>>>> Thanks in advance.
> >>>>>>
> >>>>>> Chen
> >>
>
