I'm assuming that you can access all the machines from your client, and that the 
machines can all reach each other. If the Python client can access the table, it 
may be a classpath issue. Did you make sure that you have the right version 
of the HBase jars and hbase-site.xml on your classpath?
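
For reference, one common way to do that is via PIG_CLASSPATH before launching Pig. This is only a sketch — the install location and jar version below are assumptions, so substitute your own:

```shell
# Hypothetical install location -- adjust HBASE_HOME to your installation.
export HBASE_HOME=/usr/lib/hbase

# Put hbase-site.xml (lives in $HBASE_HOME/conf) and the HBase jar
# (version here is a placeholder) on Pig's classpath.
export PIG_CLASSPATH="$HBASE_HOME/conf:$HBASE_HOME/hbase-0.94.6.jar:$PIG_CLASSPATH"

echo "$PIG_CLASSPATH"
```

The key point is that $HBASE_HOME/conf must come first so the client picks up the cluster's ZooKeeper quorum from hbase-site.xml rather than defaulting to localhost.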

-----Original Message-----
From: Russell Jurney [mailto:russell.jur...@gmail.com] 
Sent: Monday, January 20, 2014 12:17 PM
To: user@pig.apache.org
Subject: Re: Issue connecting to HBase using Pig's HBaseStorage: Unable to 
find region for my_table

All regionservers are up, and I can access the table/columnfamily via 
Python/Starbase/REST API:

In [27]: table.insert('my-key-1', {'bytes_per_hour_time_series': {'series': "test"}})

Out[27]: 200

In [29]: table.fetch('my-key-1')

Out[29]: {'bytes_per_hour_time_series': {'series': 'test'}}


On Mon, Jan 20, 2014 at 10:58 AM, Yigitbasi, Nezih <nezih.yigitb...@intel.com>
wrote:

> The log says "NoServerForRegionException: Unable to find region for 
> bluecoat". Are all the region servers up and running? Also, can you do any 
> "put"s to this table through the hbase shell?
>
> -----Original Message-----
> From: Russell Jurney [mailto:russell.jur...@gmail.com]
> Sent: Monday, January 20, 2014 10:22 AM
> To: user@pig.apache.org
> Subject: Issue connecting to HBase using Pig's HBaseStorage: Unable 
> to find region for my_table
>
> I'm having trouble connecting to HBase from Pig's HBaseStorage command 
> <http://pig.apache.org/docs/r0.12.0/api/org/apache/pig/backend/hadoop/hbase/HBaseStorage.html>.
> Any help would be appreciated.
>
> I'm running this command:
>
> time_series = LOAD '/tmp/time_series.txt' AS (date_time:chararray, time_series:chararray);
>
> STORE time_series INTO 'hbase://bluecoat' USING org.apache.pig.backend.hadoop.hbase.HBaseStorage('bytes_per_hour_time_series:series');
>
>
> The output is thus:
>
> 2014-01-20 08:49:51,896 [main] INFO  org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/Users/rjurney/Software/hadoop-1.0.3/libexec/../lib/native/Mac_OS_X-x86_64-64
> 2014-01-20 08:49:51,896 [main] INFO  org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/var/folders/0b/74l_65015_5fcbmbdz1w2xl40000gn/T/
> 2014-01-20 08:49:51,896 [main] INFO  org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=<NA>
> 2014-01-20 08:49:51,896 [main] INFO  org.apache.zookeeper.ZooKeeper - Client environment:os.name=Mac OS X
> 2014-01-20 08:49:51,897 [main] INFO  org.apache.zookeeper.ZooKeeper - Client environment:os.arch=x86_64
> 2014-01-20 08:49:51,897 [main] INFO  org.apache.zookeeper.ZooKeeper - Client environment:os.version=10.9
> 2014-01-20 08:49:51,897 [main] INFO  org.apache.zookeeper.ZooKeeper - Client environment:user.name=rjurney
> 2014-01-20 08:49:51,897 [main] INFO  org.apache.zookeeper.ZooKeeper - Client environment:user.home=/Users/rjurney
> 2014-01-20 08:49:51,897 [main] INFO  org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/Users/rjurney/Software/steel-thread/pig/bluecoat
> 2014-01-20 08:49:51,899 [main] INFO  org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=hiveapp2:2181,hiveapp1:2181,hiveapp3:2181 sessionTimeout=60000 watcher=hconnection
> 2014-01-20 08:49:51,926 [main] INFO  org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper - The identifier of this process is 43006@Russells-MacBook-Pro.local
> 2014-01-20 08:49:52,993 [main-SendThread(hiveapp1:2181)] INFO  org.apache.zookeeper.ClientCnxn - Opening socket connection to server hiveapp1/10.10.30.200:2181. Will not attempt to authenticate using SASL (unknown error)
> 2014-01-20 08:49:53,000 [main-SendThread(hiveapp1:2181)] INFO  org.apache.zookeeper.ClientCnxn - Socket connection established to hiveapp1/10.10.30.200:2181, initiating session
> 2014-01-20 08:49:53,030 [main-SendThread(hiveapp1:2181)] INFO  org.apache.zookeeper.ClientCnxn - Session establishment complete on server hiveapp1/10.10.30.200:2181, sessionid = 0x34397f09f6c25aa, negotiated timeout = 60000
> 2014-01-20 09:00:44,711 [main] ERROR org.apache.hadoop.hbase.mapreduce.TableOutputFormat - org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to find region for bluecoat,,99999999999999 after 10 tries.
> 2014-01-20 09:00:44,714 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 2999: Unexpected internal error. org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to find region for bluecoat,,99999999999999 after 10 tries.
> 2014-01-20 09:00:44,714 [main] ERROR org.apache.pig.tools.grunt.Grunt - java.lang.RuntimeException: org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to find region for bluecoat,,99999999999999 after 10 tries.
> at org.apache.hadoop.hbase.mapreduce.TableOutputFormat.setConf(TableOutputFormat.java:206)
> at org.apache.pig.backend.hadoop.hbase.HBaseStorage.getOutputFormat(HBaseStorage.java:826)
> at org.apache.pig.newplan.logical.rules.InputOutputFileValidator$InputOutputFileVisitor.visit(InputOutputFileValidator.java:80)
> at org.apache.pig.newplan.logical.relational.LOStore.accept(LOStore.java:66)
> at org.apache.pig.newplan.DepthFirstWalker.depthFirst(DepthFirstWalker.java:64)
> at org.apache.pig.newplan.DepthFirstWalker.depthFirst(DepthFirstWalker.java:66)
> at org.apache.pig.newplan.DepthFirstWalker.depthFirst(DepthFirstWalker.java:66)
> at org.apache.pig.newplan.DepthFirstWalker.walk(DepthFirstWalker.java:53)
> at org.apache.pig.newplan.PlanVisitor.visit(PlanVisitor.java:52)
> at org.apache.pig.newplan.logical.rules.InputOutputFileValidator.validate(InputOutputFileValidator.java:45)
> at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.compile(HExecutionEngine.java:303)
> at org.apache.pig.PigServer.compilePp(PigServer.java:1382)
> at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1307)
> at org.apache.pig.PigServer.execute(PigServer.java:1299)
> at org.apache.pig.PigServer.executeBatch(PigServer.java:377)
> at org.apache.pig.PigServer.executeBatch(PigServer.java:355)
> at org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:140)
> at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:202)
> at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:173)
> at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
> at org.apache.pig.Main.run(Main.java:607)
> at org.apache.pig.Main.main(Main.java:156)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> Caused by: org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to find region for bluecoat,,99999999999999 after 10 tries.
> at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:980)
> at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:885)
> at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:987)
> at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:889)
> at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:846)
> at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:271)
> at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:211)
> at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:170)
> at org.apache.hadoop.hbase.mapreduce.TableOutputFormat.setConf(TableOutputFormat.java:201)
> ... 26 more
>
>
> --
> Russell Jurney twitter.com/rjurney russell.jur...@gmail.com 
> datasyndrome.com
>



--
Russell Jurney twitter.com/rjurney russell.jur...@gmail.com datasyndrome.com
