Hi,

I couldn't find the content of the conf/regionservers file among the attachments.

Could you check whether it contains "localhost" or the real hostname
("host"/"myHost")? It should contain "host" or "myHost", not "localhost".

Did you edit the content of the files before attaching them? Sometimes it
says "myHost", sometimes "host". The hostname should be consistent everywhere.

Furthermore, I once read that someone had a problem with IPv6 and HBase
(which is why none of my installations use IPv6 ... just to be safe).
Perhaps you should turn that off and restart Hadoop and HBase as well, just
as a test.
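
If turning IPv6 off system-wide is too invasive, a smaller test (just a
sketch, assuming your stack reads the usual hbase-env.sh and hadoop-env.sh
files) is to tell the JVMs to prefer IPv4:

  # conf/hbase-env.sh
  export HBASE_OPTS="$HBASE_OPTS -Djava.net.preferIPv4Stack=true"

  # hadoop-env.sh
  export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"

You could also pass the same -Djava.net.preferIPv4Stack=true flag to the JVM
that runs your Scala client, then restart Hadoop and HBase.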

And yes, it is strange that all the other stuff is working.

Hope this helps,

Wilm

On 17.12.2014 at 13:40, Marco wrote:
> Hi Wilm,
>
> I've attached the logs. The region server logs only contain debug
> messages, mostly following the pattern I've pasted.
> I'm using the Hortonworks stack on a single machine that runs the
> complete stack (no cluster).
>
> HBase shell, Hive and Apache Phoenix all work fine.
>
> BR Marco
>
> 2014-12-17 11:41 GMT+01:00 Wilm Schumacher <wilm.schumac...@gmail.com>:
>> Could you please post the
>>
>> /etc/hosts
>> ./conf/hbase-site.xml
>> ./conf/regionservers
>> ./logs/hbase*regionserver.log
>>
>> ?
>>
>> The error says that your regionserver is not running (or that something
>> happened to the server). This could mean that
>> a) the regionserver never started
>> b) the regionserver died
>> c) the regionserver is not reachable
>> ...
>>
>> which could have many reasons. The strange thing is that zookeeper
>> seems to connect. So if you post the files listed above, perhaps we can help.
>>
>> Best wishes,
>>
>> Wilm
>>
>> On 16.12.2014 at 15:19, Marco wrote:
>>> Hi,
>>>
>>> HBase is installed correctly and working (the hbase shell works fine).
>>>
>>> But I'm not able to use the Java API to connect to an existing HBase table:
>>>
>>> <<<
>>> val conf = HBaseConfiguration.create()
>>>
>>> conf.clear()
>>>
>>> conf.set("hbase.zookeeper.quorum", "ip:2181")
>>> conf.set("hbase.zookeeper.property.clientPort", "2181")
>>> conf.set("hbase.zookeeper.dns.nameserver", "ip")
>>> conf.set("hbase.regionserver.port", "60020")
>>> conf.set("hbase.master", "ip:60000")
>>>
>>> val hTable = new HTable(conf, "truck_events")
>>>
>>> Actually the code is Scala, but I think it is clear what I am trying to
>>> achieve. I've also tried using hbase-site.xml instead of configuring it
>>> manually, but the result is the same.
>>>
>>> In response I got:
>>> 14/12/16 15:10:05 INFO zookeeper.ZooKeeper: Initiating client
>>> connection, connectString=ip:2181 sessionTimeout=30000
>>> watcher=hconnection
>>> 14/12/16 15:10:10 INFO zookeeper.ClientCnxn: Opening socket connection
>>> to server ip:2181. Will not attempt to authenticate using SASL
>>> (unknown error)
>>> 14/12/16 15:10:10 INFO zookeeper.ClientCnxn: Socket connection
>>> established to ip:2181, initiating session
>>> 14/12/16 15:10:10 INFO zookeeper.ClientCnxn: Session establishment
>>> complete on server ip:2181, sessionid = 0x14a53583e080010, negotiated
>>> timeout = 30000
>>>
>>> and then, finally, after a couple of minutes (the HTable constructor
>>> call hangs):
>>>
>>> [error] (run-main-0)
>>> org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to
>>> find region for truck_events,,99999999999999 after 14 tries.
>>> org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to
>>> find region for truck_events,,99999999999999 after 14 tries.
>>>         at 
>>> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1092)
>>>         at 
>>> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:997)
>>>         at 
>>> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1099)
>>>         at 
>>> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1001)
>>>         at 
>>> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:958)
>>>         at 
>>> org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:251)
>>>         at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:155)
>>>         at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:129)
>>>         at HbaseConnector$.main(HbaseConnector.scala:18)
>>>         at HbaseConnector.main(HbaseConnector.scala)
>>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>         at 
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>         at 
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>         at java.lang.reflect.Method.invoke(Method.java:606)
>>> [trace] Stack trace suppressed: run last compile:run for the full output.
>>> 14/12/16 13:22:15 ERROR zookeeper.ClientCnxn: Event thread exiting due
>>> to interruption
>>> java.lang.InterruptedException
>>>         at 
>>> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
>>>         at 
>>> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2052)
>>>         at 
>>> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>>>         at 
>>> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:491)
>>> 14/12/16 13:22:15 INFO zookeeper.ClientCnxn: EventThread shut down
>>> java.lang.RuntimeException: Nonzero exit code: 1
>>>         at scala.sys.package$.error(package.scala:27)
>>> [trace] Stack trace suppressed: run last compile:run for the full output.
>>> [error] (compile:run) Nonzero exit code: 1
>>> [error] Total time: 1106 s, completed Dec 16, 2014 1:22:15 PM
>>>
>>>
>>> In the RegionServer log, I've seen this:
>>>
>>> 2014-12-16 13:31:34,087 DEBUG [RpcServer.listener,port=60020]
>>> ipc.RpcServer: RpcServer.listener,port=60020: connection from
>>> 10.97.68.159:41772; # active connections: 1
>>> 2014-12-16 13:33:34,220 DEBUG [RpcServer.reader=1,port=60020]
>>> ipc.RpcServer: RpcServer.listener,port=60020: DISCONNECTING client
>>> 10.97.68.159:41772 because read count=-1. Number of active
>>> connections: 1
>>> 2014-12-16 13:36:26,988 DEBUG [LruStats #0] hfile.LruBlockCache:
>>> Total=430.02 KB, free=401.18 MB, max=401.60 MB, blockCount=4,
>>> accesses=28, hits=24, hitRatio=85.71%, , cachingAccesses=28,
>>> cachingHits=24, cachingHitsRatio=85.71%, evictions=269, evicted=0,
>>> evictedPerRun=0.0
>>> 2014-12-16 13:36:34,017 DEBUG [RpcServer.listener,port=60020]
>>> ipc.RpcServer: RpcServer.listener,port=60020: connection from
>>> 10.97.68.159:42728; # active connections: 1
>>> 2014-12-16 13:38:34,112 DEBUG [RpcServer.reader=2,port=60020]
>>> ipc.RpcServer: RpcServer.listener,port=60020: DISCONNECTING client
>>> 10.97.68.159:42728 because read count=-1. Number of active
>>> connections: 1
>>> 2014-12-16 13:41:26,989 DEBUG [LruStats #0] hfile.LruBlockCache:
>>> Total=430.02 KB, free=401.18 MB, max=401.60 MB, blockCount=4,
>>> accesses=30, hits=26, hitRatio=86.67%, , cachingAccesses=30,
>>> cachingHits=26, cachingHitsRatio=86.67%, evictions=299, evicted=0,
>>> evictedPerRun=0.0
>>>
>>> So it connects and then disconnects with read count -1.
>>>
>>> Can anybody help me find the root cause of this issue? I've tried
>>> restarting HBase and so on, but with no effect. Hive is also working
>>> fine, just not my code :(
>>>
>>> Thanks a lot,
>>> Marco
>
>
