How did you distribute hbase-site.xml to the worker nodes? If the Spark
executors can't see it on their classpath, the HBase client falls back to
its default ZooKeeper quorum (localhost) and will just keep retrying the
connection.

Looks like HConnectionManager couldn't find the hbase:meta server. In your
jstack, worker-1 is sleeping between retries inside RpcRetryingCaller while
holding the region-location lock (0x00000000f84ac0b0), and worker-0 is
BLOCKED waiting on that same lock, so every HTable constructor piles up
behind the first failed meta lookup.
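
One way to confirm is to build the Configuration explicitly in the function
that creates the HTable, and to lower the retry count so a wrong quorum
fails fast with an exception instead of sleeping through the default retry
schedule. A minimal, self-contained sketch (the class name, quorum hosts,
table and column names are placeholders for your setup):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;

public class HBaseSmokeTest {
  public static void main(String[] args) throws Exception {
    // Build the config in code instead of relying on hbase-site.xml
    // being visible on the executor classpath.
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum", "zk1.example.com,zk2.example.com");
    conf.set("hbase.zookeeper.property.clientPort", "2181");
    // Fail fast so a wrong quorum surfaces as an exception
    // instead of a long sleep-and-retry loop.
    conf.setInt("hbase.client.retries.number", 3);
    conf.setInt("zookeeper.recovery.retry", 1);

    HTable table = new HTable(conf, "your_table");
    try {
      Put put = new Put("rowkey".getBytes());
      put.add("cf".getBytes(), "q".getBytes(), "value".getBytes());
      table.put(put);
    } finally {
      table.close();
    }
  }
}

If that fails right away with a clear connection error, the fix is to ship
hbase-site.xml to the executors, e.g. with spark-submit --files
/path/to/hbase-site.xml (and conf.addResource(new Path("hbase-site.xml"))
on the executor side), or by adding its directory to
spark.executor.extraClassPath.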

Cheers

On Tue, Apr 28, 2015 at 9:19 PM, Tridib Samanta <tridib.sama...@live.com>
wrote:

> I am using Spark 1.2.0 and HBase 0.98.1-cdh5.1.0.
>
> Here is the jstack trace; the complete stack trace is attached.
>
> "Executor task launch worker-1" #58 daemon prio=5 os_prio=0
> tid=0x00007fd3d0445000 nid=0x488 waiting on condition [0x00007fd4507d9000]
>    java.lang.Thread.State: TIMED_WAITING (sleeping)
>  at java.lang.Thread.sleep(Native Method)
>  at
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:152)
>  - locked <0x00000000f8cb7258> (a
> org.apache.hadoop.hbase.client.RpcRetryingCaller)
>  at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:705)
>  at
> org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:144)
>  at
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:1102)
>  at
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1162)
>  - locked <0x00000000f84ac0b0> (a java.lang.Object)
>  at
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1054)
>  at
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1011)
>  at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:326)
>  at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:192)
>  at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:150)
>  at com.mypackage.storeTuples(CubeStoreService.java:59)
>  at
> com.mypackage.StorePartitionToHBaseStoreFunction.call(StorePartitionToHBaseStoreFunction.java:23)
>  at
> com.mypackage.StorePartitionToHBaseStoreFunction.call(StorePartitionToHBaseStoreFunction.java:13)
>  at
> org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:195)
>  at
> org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:195)
>  at
> org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:773)
>  at
> org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:773)
>  at
> org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1314)
>  at
> org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1314)
>  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
>  at org.apache.spark.scheduler.Task.run(Task.scala:56)
>  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
>  at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  at java.lang.Thread.run(Thread.java:745)
> "Executor task launch worker-0" #57 daemon prio=5 os_prio=0
> tid=0x00007fd3d0443800 nid=0x487 waiting for monitor entry
> [0x00007fd4506d8000]
>    java.lang.Thread.State: BLOCKED (on object monitor)
>  at
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1156)
>  - waiting to lock <0x00000000f84ac0b0> (a java.lang.Object)
>  at
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1054)
>  at
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1011)
>  at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:326)
>  at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:192)
>  at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:150)
>  at com.mypackage.storeTuples(CubeStoreService.java:59)
>  at
> com.mypackage.StorePartitionToHBaseStoreFunction.call(StorePartitionToHBaseStoreFunction.java:23)
>  at
> com.mypackage.StorePartitionToHBaseStoreFunction.call(StorePartitionToHBaseStoreFunction.java:13)
>  at
> org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:195)
>  at
> org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:195)
>  at
> org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:773)
>  at
> org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:773)
>  at
> org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1314)
>  at
> org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1314)
>  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
>  at org.apache.spark.scheduler.Task.run(Task.scala:56)
>  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
>  at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  at java.lang.Thread.run(Thread.java:745)
>
> ------------------------------
> Date: Tue, 28 Apr 2015 19:35:26 -0700
> Subject: Re: HBase HTable constructor hangs
> From: yuzhih...@gmail.com
> To: tridib.sama...@live.com
> CC: user@spark.apache.org
>
> Can you give us more information, such as the HBase and Spark releases?
>
> If you can pastebin a jstack of the hanging HTable process, that would help.
>
> BTW I used http://search-hadoop.com/?q=spark+HBase+HTable+constructor+hangs
> and saw a very old thread with this subject.
>
> Cheers
>
> On Tue, Apr 28, 2015 at 7:12 PM, tridib <tridib.sama...@live.com> wrote:
>
> I am having exactly the same issue. I am running HBase and Spark in
> Docker containers.
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/HBase-HTable-constructor-hangs-tp4926p22696.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
>
>
>
