Hello!

2014-11-02 Thread jackie
Hello!

how to design hbase schema?

2014-11-02 Thread jackie
Hi!
  I have a data warehouse application (based on an Oracle database) and I want to
transfer it to HBase. How should I design the HBase tables, especially for the 1:n and
n:m relationships from the Oracle database?


 Thank you very much!


jackie









Re: how to design hbase schema?

2014-11-02 Thread Krishna Kalyan
Some Resources
http://hbase.apache.org/book/schema.casestudies.html
http://www.slideshare.net/cloudera/5-h-base-schemahbasecon2012
http://www.evanconkle.com/2011/11/hbase-tutorial-creating-table/
http://www.slideshare.net/hmisty/20090713-hbase-schema-design-case-studies
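
As a rough illustration of the tall-table approach those case studies describe, a
1:n ORDER -> ORDER_LINE relation from an RDBMS can be folded into a single HBase
table keyed by a composite row key (parent id + child id); the table, family and
column names below are made up for the example:

// Hedged sketch (0.98-era client API): one tall table "order_lines",
// row key = <orderId>#<lineId>, so all lines of one order form a contiguous scan.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class OrderLinesExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "order_lines");
    try {
      // write one child row of order 42
      Put put = new Put(Bytes.toBytes("00000042#0001"));
      put.add(Bytes.toBytes("d"), Bytes.toBytes("sku"), Bytes.toBytes("ABC-1"));
      put.add(Bytes.toBytes("d"), Bytes.toBytes("qty"), Bytes.toBytes(3L));
      table.put(put);

      // read back every line of order 42: '$' is the next byte after '#',
      // so this range scan covers exactly the children of that one parent
      Scan scan = new Scan(Bytes.toBytes("00000042#"), Bytes.toBytes("00000042$"));
      ResultScanner scanner = table.getScanner(scan);
      for (Result r : scanner) {
        System.out.println(Bytes.toString(r.getRow()));
      }
      scanner.close();
    } finally {
      table.close();
    }
  }
}

An n:m relation is usually handled the same way, just materialized twice (once per
lookup direction), since HBase has no joins.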

On Sun, Nov 2, 2014 at 6:53 PM, jackie jackiehbaseu...@126.com wrote:


Re: how to design hbase schema?

2014-11-02 Thread Ted Yu
Please also consider phoenix.apache.org

Cheers

On Nov 2, 2014, at 5:43 AM, Krishna Kalyan krishnakaly...@gmail.com wrote:



Re:Re: how to design hbase schema?

2014-11-02 Thread jackie
Thank you very much!








At 2014-11-02 21:49:49, Ted Yu yuzhih...@gmail.com wrote:


Re: Hello!

2014-11-02 Thread Chandrashekhar Kotekar
Hello jackie.. looks like you have joined the mailing list just now :D


Regards,
Chandrash3khar Kotekar
Mobile - +91 8600011455

On Sun, Nov 2, 2014 at 6:52 PM, jackie jackiehbaseu...@126.com wrote:

 Hello!


error in starting hbase

2014-11-02 Thread beeshma r
HI

When I start HBase the following error occurs. How do I solve this? I
haven't added any ZooKeeper path anywhere.

Please suggest a fix.

2014-11-01 20:01:51,196 INFO  [main] server.ZooKeeperServer: Server
environment:java.io.tmpdir=/tmp
2014-11-01 20:01:51,196 INFO  [main] server.ZooKeeperServer: Server
environment:java.compiler=NA
2014-11-01 20:01:51,196 INFO  [main] server.ZooKeeperServer: Server
environment:os.name=Linux
2014-11-01 20:01:51,196 INFO  [main] server.ZooKeeperServer: Server
environment:os.arch=amd64
2014-11-01 20:01:51,196 INFO  [main] server.ZooKeeperServer: Server
environment:os.version=3.11.0-12-generic
2014-11-01 20:01:51,196 INFO  [main] server.ZooKeeperServer: Server
environment:user.name=beeshma
2014-11-01 20:01:51,196 INFO  [main] server.ZooKeeperServer: Server
environment:user.home=/home/beeshma
2014-11-01 20:01:51,197 INFO  [main] server.ZooKeeperServer: Server
environment:user.dir=/home/beeshma/hbase-0.98.6.1-hadoop2/bin
2014-11-01 20:01:51,202 ERROR [main] master.HMasterCommandLine: Master
exiting
java.io.IOException: Unable to create data directory
/home/beesh_hadoop2/zookeeper/zookeeper_0/version-2
at
org.apache.zookeeper.server.persistence.FileTxnSnapLog.init(FileTxnSnapLog.java:85)
at
org.apache.zookeeper.server.ZooKeeperServer.init(ZooKeeperServer.java:213)
at
org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster.startup(MiniZooKeeperCluster.java:162)
at
org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster.startup(MiniZooKeeperCluster.java:131)
at
org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:165)
at
org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:135)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at
org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2785)


Re:error in starting hbase

2014-11-02 Thread jackie
Please check the HBase configuration and the ZooKeeper settings!








At 2014-11-02 22:41:02, beeshma r beeshm...@gmail.com wrote:


Re: Hello!

2014-11-02 Thread Ravindranath Akila
More like spam judging by the mail id.

On Sunday, November 2, 2014, Chandrashekhar Kotekar 
shekhar.kote...@gmail.com wrote:



-- 
R. A.
BTW, there is a website called *Thank God it's Friday!*
It tells you fun things to do in your area over the weekend.
See here: http://www.ThankGodItIsFriday.com


Re: error in starting hbase

2014-11-02 Thread Ted Yu
Are you running hbase in standalone mode ?

See http://hbase.apache.org/book.html#zookeeper

bq. To toggle HBase management of ZooKeeper, use the HBASE_MANAGES_ZK variable
in conf/hbase-env.sh.

Cheers

On Sun, Nov 2, 2014 at 6:41 AM, beeshma r beeshm...@gmail.com wrote:




Re: error in starting hbase

2014-11-02 Thread beeshma r
Hi Ted,

Thanks for your reply. Yes, I am running in standalone mode.
After changing my ZooKeeper property it is resolved, and now I have another
two issues.

2014-11-02 07:06:32,948 DEBUG [main] master.HMaster:
master/ubuntu.ubuntu-domain/127.0.1.1:0 HConnection server-to-server
retries=350
2014-11-02 07:06:33,458 INFO  [main] ipc.RpcServer:
master/ubuntu.ubuntu-domain/127.0.1.1:0: started 10 reader(s).
2014-11-02 07:06:33,670 INFO  [main] impl.MetricsConfig: loaded properties
from hadoop-metrics2-hbase.properties
2014-11-02 07:06:33,766 INFO  [main] impl.MetricsSystemImpl: Scheduled
snapshot period at 10 second(s).
2014-11-02 07:06:33,766 INFO  [main] impl.MetricsSystemImpl: HBase metrics
system started
2014-11-02 07:06:34,592 ERROR [main] master.HMasterCommandLine: Master
exiting
java.lang.RuntimeException: Failed construction of Master: class
org.apache.hadoop.hbase.master.HMasterCommandLine$LocalHMasternull
at
org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:140)
at
org.apache.hadoop.hbase.LocalHBaseCluster.addMaster(LocalHBaseCluster.java:202)
at
org.apache.hadoop.hbase.LocalHBaseCluster.init(LocalHBaseCluster.java:152)
at
org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:179)
at
org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:135)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at
org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2785)
Caused by: java.lang.RuntimeException:
java.lang.reflect.InvocationTargetException
at
org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:131)
at org.apache.hadoop.security.Groups.init(Groups.java:55)
at
org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:182)
at
org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:235)
at
org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:214)
at
org.apache.hadoop.security.UserGroupInformation.isAuthenticationMethodEnabled(UserGroupInformation.java:275)

--

And when I create a table

hbase(main):001:0> create 't1','e1'
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/home/beeshma/hbase-0.98.6.1-hadoop2/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/home/beeshma/hadoop-1.2.1/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.

ERROR: org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative()V

-

 hbase(main):002:0> list
TABLE


ERROR: Could not initialize class
org.apache.hadoop.security.JniBasedUnixGroupsMapping

Here is some help for this command:
List all tables in hbase. Optional regular expression parameter could
be used to filter the output. Examples:

  hbase> list
  hbase> list 'abc.*'
  hbase> list 'ns:abc.*'
  hbase> list 'ns:.*'


hbase(main):003:0 beeshma@ubuntu:~/hbase-0.98.6.1-hadoop2/bin$



On Sun, Nov 2, 2014 at 7:01 AM, Ted Yu yuzhih...@gmail.com wrote:


Is it possible to assign a specific key range to a specific node?

2014-11-02 Thread yonghu
Dear All,

Suppose that I have a key range from 1 to 100 and want to store 1-50 on the
first node and 51-100 on the second node. How can I do this in HBase?

regards!

Yong


Re: Is it possible to assign a specific key range to a specific node?

2014-11-02 Thread Dima Spivak
Hi Yong,

Check out http://hbase.apache.org/book/perf.writing.html to learn about
pre-splitting regions at table creation time. Beyond this, HBase internally
handles which RegionServers serve any particular region.

All the best,
   Dima
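
A minimal sketch of that pre-splitting idea, assuming zero-padded numeric row keys
and a made-up table name, might look like this:

// Hedged sketch: create the table with one split point so keys up to 050 land in
// the first region and 051-100 land in the second.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class PreSplitExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    try {
      HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("numbers"));
      desc.addFamily(new HColumnDescriptor("d"));
      // one split point => two regions: [ , 051) and [051, )
      byte[][] splits = new byte[][] { Bytes.toBytes("051") };
      admin.createTable(desc, splits);
    } finally {
      admin.close();
    }
  }
}

Which RegionServer ends up hosting each of the two regions is still the balancer's
decision, as noted above.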

On Sun, Nov 2, 2014 at 11:32 AM, yonghu yongyong...@gmail.com wrote:

 Dear All,

 Suppose that I have a key range from 1 to 100 and want to store 1-50 in the
 first node and 51-100 in the second node. How can I do this in Hbase?

 regards!

 Yong



Re: Is it possible to assign a specific key range to a specific node?

2014-11-02 Thread Dima Spivak
Oops, accidentally cut out my last sentence:

If you do want to move a region manually, the simplest way to do this is by
invoking move in the HBase shell.

-Dima
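
For reference, the same manual move is also exposed through the Java admin API; in
the sketch below the encoded region name and the destination server name are
placeholders, not values from this thread:

// Hedged sketch: move one region to a chosen RegionServer via HBaseAdmin.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class MoveRegionExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    try {
      byte[] encodedRegionName = Bytes.toBytes("0123456789abcdef0123456789abcdef");
      byte[] destServer = Bytes.toBytes("host2.example.com,60020,1414900000000");
      admin.move(encodedRegionName, destServer);
    } finally {
      admin.close();
    }
  }
}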

On Sun, Nov 2, 2014 at 11:51 AM, Dima Spivak dspi...@cloudera.com wrote:



No applicable class implementing Serialization in conf at io.serializations: class org.apache.hadoop.hbase.client.Put

2014-11-02 Thread Serega Sheypak
Hi, I'm migrating from CDH4 to CDH5 (HBase 0.98.6-cdh5.2.0).
I had a unit test for a mapper used to create HFiles and bulk load them later.

I've bumped the Maven deps from CDH4 to CDH5 (0.98.6-cdh5.2.0).
Now I've started to get this exception:

java.lang.IllegalStateException: No applicable class implementing
Serialization in conf at io.serializations: class
org.apache.hadoop.hbase.client.Put
at com.google.common.base.Preconditions.checkState(Preconditions.java:149)
at
org.apache.hadoop.mrunit.internal.io.Serialization.copy(Serialization.java:75)
at
org.apache.hadoop.mrunit.internal.io.Serialization.copy(Serialization.java:97)
at
org.apache.hadoop.mrunit.internal.output.MockOutputCollector.collect(MockOutputCollector.java:48)
at
org.apache.hadoop.mrunit.internal.mapreduce.AbstractMockContextWrapper$4.answer(AbstractMockContextWrapper.java:90)
at
org.mockito.internal.stubbing.StubbedInvocationMatcher.answer(StubbedInvocationMatcher.java:34)
at
org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:91)
at
org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29)
at
org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:38)
at
org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:51)
at
org.apache.hadoop.mapreduce.Mapper$Context$$EnhancerByMockitoWithCGLIB$$ba4633fb.write(generated)


And here is mapper code:



public class ItemRecommendationHBaseMapper extends
        Mapper<LongWritable, BytesWritable, ImmutableBytesWritable, Put> {

    private final ImmutableBytesWritable hbaseKey = new ImmutableBytesWritable();
    private final DynamicObjectSerDe<ItemRecommendation> serde =
            new DynamicObjectSerDe<ItemRecommendation>(ItemRecommendation.class);

    @Override
    protected void map(LongWritable key, BytesWritable value, Context context)
            throws IOException, InterruptedException {
        checkPreconditions(key, value);
        hbaseKey.set(Bytes.toBytes(key.get()));

        ItemRecommendation item = serde.deserialize(value.getBytes());
        checkPreconditions(item);
        Put put = PutFactory.createPut(serde, item, getColumnFamily());

        context.write(hbaseKey, put); //Exception here
    }

What can I do in order to make the unit test pass?


Re: No applicable class implementing Serialization in conf at io.serializations: class org.apache.hadoop.hbase.client.Put

2014-11-02 Thread Ted Yu
bq. PutFactory.createPut(

Can you reveal how PutFactory creates the Put ?

Thanks

On Sun, Nov 2, 2014 at 1:02 PM, Serega Sheypak serega.shey...@gmail.com
wrote:



Re: No applicable class implementing Serialization in conf at io.serializations: class org.apache.hadoop.hbase.client.Put

2014-11-02 Thread Serega Sheypak
 public static Put createPut(DynamicObjectSerDe<ItemRecommendation> serde,
         ItemRecommendation item, String columnFamily) {
     Put put = new Put(Bytes.toBytes(Long.valueOf(item.getId())));
     put.add(Bytes.toBytes(columnFamily), Bytes.toBytes(item.getRank()),
             serde.serialize(item));
     return put;
 }

2014-11-03 0:12 GMT+03:00 Ted Yu yuzhih...@gmail.com:




Re: No applicable class implementing Serialization in conf at io.serializations: class org.apache.hadoop.hbase.client.Put

2014-11-02 Thread Sean Busbey
In the 0.94.x API, Put implemented Writable[1]. This meant that MR code,
like yours, could use it as a Key or Value between Mapper and Reducer.

In 0.96 and later APIs, Put no longer directly implements Writable[2].
Instead, HBase now includes a Hadoop Serialization implementation.
Normally, this would be configured via the TableMapReduceUtil class for
either a TableMapper or TableReducer.

Presuming that the intention of your MR job is to have all the Puts write
to some HBase table, you should be able to follow the "write to HBase" part
of the examples for reading and writing HBase via mapreduce in the
reference guide[3].

Specifically, you should have your Driver call one of the
initTableReducerJob methods on TableMapReduceUtil, where it currently sets
the Mapper class for your application[4].

-Sean

[1]:
http://hbase.apache.org/0.94/apidocs/org/apache/hadoop/hbase/client/Put.html
[2]: http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Put.html
[3]: http://hbase.apache.org/book/mapreduce.example.html
[4]:
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.html
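
A minimal driver sketch along those lines, reusing the ItemRecommendationHBaseMapper
from this thread and assuming a SequenceFile input plus a made-up target table name,
could look roughly like this:

// Hedged sketch of a driver that lets TableMapReduceUtil wire up the output table,
// the HBase serializations and an IdentityTableReducer (reducer class = null).
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;

public class ItemRecommendationDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "item-recommendations");
    job.setJarByClass(ItemRecommendationDriver.class);

    job.setInputFormatClass(SequenceFileInputFormat.class);
    SequenceFileInputFormat.addInputPath(job, new Path(args[0]));

    job.setMapperClass(ItemRecommendationHBaseMapper.class);
    job.setMapOutputKeyClass(ImmutableBytesWritable.class);
    job.setMapOutputValueClass(Put.class);

    // configures TableOutputFormat for "recommendations" and registers the
    // Mutation/Result/KeyValue serializations on io.serializations
    TableMapReduceUtil.initTableReducerJob("recommendations", null, job);

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}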


On Sun, Nov 2, 2014 at 3:02 PM, Serega Sheypak serega.shey...@gmail.com
wrote:




-- 
Sean


Re: No applicable class implementing Serialization in conf at io.serializations: class org.apache.hadoop.hbase.client.Put

2014-11-02 Thread Ted Yu
Since your code uses MRUnit, which HBase 0.98 does not depend on, have you
considered asking this question on:
mrunit-u...@incubator.apache.org

Cheers

On Sun, Nov 2, 2014 at 1:22 PM, Serega Sheypak serega.shey...@gmail.com
wrote:




Re: No applicable class implementing Serialization in conf at io.serializations: class org.apache.hadoop.hbase.client.Put

2014-11-02 Thread Serega Sheypak
I use it to prepare HFiles, using my custom mapper emitting Put and
  HFileOutputFormat.configureIncrementalLoad(job, createHTable()) // connection to the target table

and then bulk load the data into the table using LoadIncrementalHFiles.

P.S.
HFileOutputFormat is also deprecated... so many changes... (((
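
For readers following the thread, that flow is roughly the sketch below; the
input/output paths and the target table name are placeholders, and the createHTable()
helper from the message above is inlined as a plain HTable lookup:

// Hedged sketch of the HFile-then-bulk-load flow described above (0.98 API,
// where HFileOutputFormat is deprecated but still present).
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class BulkLoadDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "item-recommendations-bulkload");
    job.setJarByClass(BulkLoadDriver.class);

    job.setInputFormatClass(SequenceFileInputFormat.class);
    SequenceFileInputFormat.addInputPath(job, new Path(args[0]));
    job.setMapperClass(ItemRecommendationHBaseMapper.class);
    job.setMapOutputKeyClass(ImmutableBytesWritable.class);
    job.setMapOutputValueClass(Put.class);

    Path hfileDir = new Path(args[1]);
    FileOutputFormat.setOutputPath(job, hfileDir);

    // sets the output format, total-order partitioner, PutSortReducer and,
    // crucially for the exception in this thread, the io.serializations entries
    HTable table = new HTable(conf, "recommendations");
    HFileOutputFormat.configureIncrementalLoad(job, table);

    if (job.waitForCompletion(true)) {
      // second step: hand the generated HFiles to the regions of the table
      new LoadIncrementalHFiles(conf).doBulkLoad(hfileDir, table);
    }
    table.close();
  }
}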


2014-11-03 0:41 GMT+03:00 Sean Busbey bus...@cloudera.com:




Re: No applicable class implementing Serialization in conf at io.serializations: class org.apache.hadoop.hbase.client.Put

2014-11-02 Thread Sean Busbey
If you're calling HFileOutputFormat.configureIncrementalLoad, that should
be setting up the Serialization for you.

Can you look at the job configuration and see what's present for the key
io.serializations?

-Sean

On Sun, Nov 2, 2014 at 3:53 PM, Serega Sheypak serega.shey...@gmail.com
wrote:





-- 
Sean


Re: No applicable class implementing Serialization in conf at io.serializations: class org.apache.hadoop.hbase.client.Put

2014-11-02 Thread Sean Busbey
On Sun, Nov 2, 2014 at 3:53 PM, Serega Sheypak serega.shey...@gmail.com
wrote:

 P.S.
 HFileOutputFormat is also deprecated... so many changes... (((



Incidentally, you should consider switching to HFileOutputFormat2. Since
you rely on the version that has a Mapper outputting Put values instead of
KeyValue, the impact on you should be negligible.


-- 
Sean


Re: No applicable class implementing Serialization in conf at io.serializations: class org.apache.hadoop.hbase.client.Put

2014-11-02 Thread Serega Sheypak
Sean, I've started to hit that serialization problem at the unit-test level
while using MRUnit.
I don't see any possibility to call HFileOutputFormat.configureIncrementalLoad
before the MRUnit mocking stuff.
I was working without any problem in 0.94 :)


2014-11-03 1:08 GMT+03:00 Sean Busbey bus...@cloudera.com:




Re: No applicable class implementing Serialization in conf at io.serializations: class org.apache.hadoop.hbase.client.Put

2014-11-02 Thread Serega Sheypak
Cool, is it this stuff?
http://hbase.apache.org/book/hfilev2.html

2014-11-03 1:10 GMT+03:00 Sean Busbey bus...@cloudera.com:




Re: No applicable class implementing Serialization in conf at io.serializations: class org.apache.hadoop.hbase.client.Put

2014-11-02 Thread Sean Busbey
On Sun, Nov 2, 2014 at 4:11 PM, Serega Sheypak serega.shey...@gmail.com
wrote:

 Sean, I've started to catch that serialization problem on unit-test level
 while using mrunit.
 I don't see any possibility to call
 HFileOutputFormat.configureIncrementalLoad
 before mrunit mocking stuff.
 I was workngi w/o any problem in 0.94 :)



Ah. Well, that sounds like a bug in MRUnit for dealing with HBase 0.96+.

As Ted mentioned, you'll have more luck on the mrunit mailing list for
figuring that bit out.

-- 
Sean


Re: No applicable class implementing Serialization in conf at io.serializations: class org.apache.hadoop.hbase.client.Put

2014-11-02 Thread Sean Busbey
On Sun, Nov 2, 2014 at 4:14 PM, Serega Sheypak serega.shey...@gmail.com
wrote:

 Cool, is it this stuff?
 http://hbase.apache.org/book/hfilev2.html


No, that's all details on the update to the backing HFile format that
started in HBase 0.92. The change in output format is detailed here:

http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.html

It's basically just that the output format changed from using the old
private KeyValue class to the public one for Cell.

-- 
Sean


Re: No applicable class implementing Serialization in conf at io.serializations: class org.apache.hadoop.hbase.client.Put

2014-11-02 Thread Ted Yu
bq. context.write(hbaseKey, put); //Exception here

I am not an MRUnit expert. But as long as you call the following method prior
to the above method invocation, you should be able to proceed:

conf.setStrings("io.serializations", conf.get("io.serializations"),
    MutationSerialization.class.getName(), ResultSerialization.class.getName(),
    KeyValueSerialization.class.getName());

Cheers
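
In an MRUnit test the natural place for such a call is the driver's Configuration,
before the test runs. A sketch under that assumption, with the serialization classes
named as plain strings (note the caveat later in this thread that they are not part
of the public HBase API):

// Hedged sketch: register the HBase serializations on the MRUnit MapDriver's
// Configuration so that Put values can be copied during the test.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mrunit.mapreduce.MapDriver;
import org.junit.Before;

public class ItemRecommendationHBaseMapperTest {
  private MapDriver<LongWritable, BytesWritable, ImmutableBytesWritable, Put> mapDriver;

  @Before
  public void setUp() {
    mapDriver = MapDriver.newMapDriver(new ItemRecommendationHBaseMapper());
    Configuration conf = mapDriver.getConfiguration();
    conf.setStrings("io.serializations", conf.get("io.serializations"),
        "org.apache.hadoop.hbase.mapreduce.MutationSerialization",
        "org.apache.hadoop.hbase.mapreduce.ResultSerialization",
        "org.apache.hadoop.hbase.mapreduce.KeyValueSerialization");
  }

  // tests would then use mapDriver.withInput(...).withOutput(...).runTest()
}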

On Sun, Nov 2, 2014 at 2:24 PM, Sean Busbey bus...@cloudera.com wrote:




Re: No applicable class implementing Serialization in conf at io.serializations: class org.apache.hadoop.hbase.client.Put

2014-11-02 Thread Sean Busbey
On Sun, Nov 2, 2014 at 5:09 PM, Ted Yu yuzhih...@gmail.com wrote:




Those classes are not a part of the public HBase API, so directly
referencing them is a bad idea. Doing so just sets you up to break on some
future HBase upgrade.

The OP needs a place in MRUnit to call one of
HFileOutputFormat.configureIncrementalLoad,
HFileOutputFormat2.configureIncrementalLoad, or
TableMapReduceUtil.initTableReducerJob. Those are the only public API ways
to configure the needed Serialization.

-- 
Sean


Re: Increasing write throughput..

2014-11-02 Thread Anoop John
You have ~280 regions per RS,
and your memstore size % is 40% with a heap size of 48GB.
That means the heap available for memstores is 48 * 0.4 = 19.2GB (I am just
considering the upper watermark alone).

If you had to give all 280 regions 512 MB each, you would need much
more heap. And your writes are distributed to all regions, right?

So you will be seeing flushes because of global heap pressure.

Increasing the xmx and flush size alone won't help. You need to consider
the number of regions and the write pattern.

Once you have tuned this, the next step will be to tune the HLog and its rolling.
That depends on your cell size as well.
By default, when we reach 95% of the HDFS block size, we roll to a new HLog
file; and by default, when we reach 32 log files, we force flushes. FYI.

-Anoop-
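
To put rough numbers on that (all figures are the ones quoted in this thread):

  280 regions * 512 MB flush size ~= 140 GB of memstore, if every region could fill up
  48 GB heap  * 0.40 upper limit   =  19.2 GB global memstore ceiling
  19.2 GB / ~280 written regions  ~=  70 MB per region, on average, before global
                                      pressure forces flushes

which is consistent with seeing flushes far below the configured 512 MB.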


On Sat, Nov 1, 2014 at 10:54 PM, Ted Yu yuzhih...@gmail.com wrote:

 Please read 9.7.7.2. MemStoreFlush under
 http://hbase.apache.org/book.html#regions.arch

 Cheers

 On Fri, Oct 31, 2014 at 11:16 AM, Gautam Kowshik gautamkows...@gmail.com
 wrote:

  - Sorry bout the raw image upload, here’s the tsdb snapshot :
  http://postimg.org/image/gq4nf96x9/
  - Hbase version 98.1 (CDH 5.1 distro)
  - hbase-site pastebin : http://pastebin.com/fEctQ3im
  - this table ‘msg' has been pre-split with 240 regions and writes are
  evenly distributed into 240 buckets. ( the bucket is a prefix to the row
  key ) . These regions are well spread across the 8 RSs. Although over
 time
  these 240 have split and now become 2440 .. each region server has ~280
  regions.
  - last 500 lines of log from one RS : http://pastebin.com/8MwYMZPb Al
  - no hot regions from what i can tell.
 
  One of my main concerns was why, even after setting the memstore flush size
  to 512M, it is still flushing at 128M. Is there a setting I've missed? I'll
  try to get more details as I find them.
 
  Thanks and Cheers,
  -Gautam.
 
  On Oct 31, 2014, at 10:47 AM, Stack st...@duboce.net wrote:
 
   What version of hbase (later versions have improvements in write
   throughput, especially when many writing threads).  Post a pastebin of
   regionserver log in steadystate if you don't mind.  About how many
  writers
   going into server at a time?  How many regions on server.  All being
   written to at same rate or you have hotties?
   Thanks,
   St.Ack
  
   On Fri, Oct 31, 2014 at 10:22 AM, Gautam gautamkows...@gmail.com
  wrote:
  
    I'm trying to increase the write throughput of our HBase cluster. We're
    currently doing around 7500 messages per sec per node. I think we have
  room
   for improvement. Especially since the heap is under utilized and
  memstore
   size doesn't seem to fluctuate much between regular and peak ingestion
   loads.
  
   We mainly have one large table that we write most of the data to.
 Other
   tables are mainly opentsdb and some relatively small summary tables.
  This
   table is read in batch once a day but otherwise is mostly serving
 writes
   99% of the time. This large table has 1 CF and get's flushed at around
   ~128M fairly regularly like below..
  
   {log}
  
   2014-10-31 16:56:09,499 INFO
  org.apache.hadoop.hbase.regionserver.HRegion:
   Finished memstore flush of ~128.2 M/134459888, currentsize=879.5
  K/900640
   for region
  
 
 msg,00102014100515impression\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002014100515040200049358\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x004138647301\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0002e5a329d2171149bcc1e83ed129312b\x00\x00\x00\x00,1413909604591.828e03c0475b699278256d4b5b9638a2.
   in 640ms, sequenceid=16861176169, compaction requested=true
  
   {log}
  
   Here's a pastebin of my hbase site : http://pastebin.com/fEctQ3im
  
    What I've tried..
    -  turned off major compactions, and handling these manually.
   -  bumped up heap Xmx from 24G to 48 G
   -  hbase.hregion.memstore.flush.size = 512M
   - lowerLimit/ upperLimit on memstore are defaults (0.38 , 0.4) since
 the
   global heap has enough space to accommodate the default percentages.
   - Currently running Hbase 98.1 on an 8 node cluster that's scaled up
 to
   128GB RAM.
  
  
   There hasn't been any appreciable increase in write perf. Still
 hovering
   around the 7500 per node write throughput number. The flushes still
  seem to
   be hapenning at 128M (instead of the expected 512)
  
    I've attached a snapshot of the memstore size vs. flushQueueLen. The block
    caches are utilizing the extra heap space but not the memstore. The flush
    queue lengths have increased, which leads me to believe that it's flushing
    way too often without any increase in throughput.
  
   Please let me know where i should dig further. That's a long email,
  thanks
   for reading through :-)
  
  
  
   Cheers,
   -Gautam.
  
 
 



is there a HBase 0.98 hdfs directory structure introduction?

2014-11-02 Thread Liu, Ming (HPIT-GADSC)
Hi, all,

I have a program to calculate the disk usage of HBase per table in HBase 0.94.
I used to run the hadoop fs -du command against the directory $rootdir/table to get
the size a table uses, as described in HBase's ref guide:
http://hbase.apache.org/book/trouble.namenode.html .
However, when we upgraded to HBase 0.98, the directory structure changed a lot.
Yes, I can use ls to find the table directory and modify the program myself,
but I wish there were a good reference to learn more details about the
change. The document on the HBase official web site seems not to be updated. So can
anyone help to briefly introduce the new directory structure or give me a link?
It would be good to know what each directory is for.

Thanks,
Ming


Re: is there a HBase 0.98 hdfs directory structure introduction?

2014-11-02 Thread Ted Yu
In 0.98, you would find your table under the following directory:
$rootdir/data/{namespace}/{table}

If you don't specify namespace at table creation time, 'default' namespace
would be used.

Cheers
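
Since the original question is about a program that computes per-table usage, here is
a small sketch of the equivalent check against the 0.98 layout; the root path and the
table name ("mytable" in the default namespace) are assumptions for the example:

// Hedged sketch: per-table disk usage under the 0.98 directory layout,
// i.e. the Java equivalent of: hadoop fs -du $rootdir/data/default/mytable
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TableDiskUsage {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path tableDir = new Path("/hbase/data/default/mytable");
    long bytes = fs.getContentSummary(tableDir).getLength();
    System.out.println(tableDir + " uses " + bytes + " bytes");
  }
}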

On Sun, Nov 2, 2014 at 7:16 PM, Liu, Ming (HPIT-GADSC) ming.l...@hp.com
wrote:




Authenticate from SQL

2014-11-02 Thread Margusja

Hi

I am looking for solutions where users, before using the HBase REST interface, will be
authenticated against SQL (for example, against Oracle).

Are there any best practices or ready-made solutions for HBase?

--
Best regards, Margus Roo
skype: margusja
phone: +372 51 48 780
web: http://margus.roo.ee