Also, it helps if you post logs, stack traces, exceptions, etc.

TD


On Tue, Jul 15, 2014 at 10:07 AM, Jerry Lam <chiling...@gmail.com> wrote:

> Hi Rajesh,
>
> I have a feeling that this is not directly related to Spark, but I might be
> wrong. The reason is that when you do:
>
>    Configuration configuration = HBaseConfiguration.create();
>
> by default, it reads the configuration file hbase-site.xml from your
> classpath and ... (I don't remember all the configuration files HBase has).
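>
> For example, a quick way to check which hbase-site.xml the driver
> actually sees (just a sketch; getResource() returns null when the file
> is not on the classpath):
>
>     java.net.URL url = Thread.currentThread().getContextClassLoader()
>             .getResource("hbase-site.xml");
>     System.out.println("hbase-site.xml found at: " + url);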
>
> I noticed that you overwrote some configuration settings in the code, but
> I'm not sure if you have other configurations that might have conflicted
> with those.
>
> Could you try the following: remove anything that is Spark-specific,
> leaving only the HBase-related code, uber-jar it, and run it just like any
> other simple Java program. If you still have connection issues, then at
> least you know the problem is in the configuration.
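>
> A minimal sketch of what I mean (the class name HBaseCheck is just a
> placeholder; it relies purely on whatever hbase-site.xml is found on
> the classpath, with no overrides):
>
>     import org.apache.hadoop.conf.Configuration;
>     import org.apache.hadoop.hbase.HBaseConfiguration;
>     import org.apache.hadoop.hbase.client.HBaseAdmin;
>
>     public class HBaseCheck {
>         public static void main(String[] args) throws Exception {
>             // No explicit overrides: use whatever hbase-site.xml says.
>             Configuration conf = HBaseConfiguration.create();
>             HBaseAdmin admin = new HBaseAdmin(conf);
>             // listTables() forces a real round trip to the cluster.
>             System.out.println("Tables: " + admin.listTables().length);
>             admin.close();
>         }
>     }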
>
> HTH,
>
> Jerry
>
>
> On Tue, Jul 15, 2014 at 12:10 PM, Krishna Sankar <ksanka...@gmail.com>
> wrote:
>
>> One vector to check is the HBase libraries in --jars, as in:
>> spark-submit --class <your class> --master <master url> --jars
>> hbase-client-0.98.3-hadoop2.jar,commons-csv-1.0-SNAPSHOT.jar,hbase-common-0.98.3-hadoop2.jar,hbase-hadoop2-compat-0.98.3-hadoop2.jar,hbase-it-0.98.3-hadoop2.jar,hbase-protocol-0.98.3-hadoop2.jar,hbase-server-0.98.3-hadoop2.jar,htrace-core-2.04.jar,spark-assembly-1.0.0-hadoop2.2.0.jar
>> badwclient.jar
>> This worked for us.
>> Cheers
>> <k/>
>>
>>
>> On Tue, Jul 15, 2014 at 6:47 AM, Madabhattula Rajesh Kumar <
>> mrajaf...@gmail.com> wrote:
>>
>>> Hi Team,
>>>
>>> Could you please help me resolve the following issue?
>>>
>>> *Issue*: I'm not able to connect to HBase from spark-submit. Below is
>>> my code. When I execute the program standalone, I'm able to connect to
>>> HBase and perform the operations.
>>>
>>> When I execute the same program using spark-submit (./bin/spark-submit),
>>> I'm not able to connect to HBase. Am I missing anything?
>>>
>>>
>>> import org.apache.hadoop.conf.Configuration;
>>> import org.apache.hadoop.hbase.HBaseConfiguration;
>>> import org.apache.hadoop.hbase.client.HBaseAdmin;
>>> import org.apache.spark.streaming.Duration;
>>> import org.apache.spark.streaming.api.java.JavaDStream;
>>> import org.apache.spark.streaming.api.java.JavaStreamingContext;
>>>
>>> public class Test {
>>>
>>>     public static void main(String[] args) throws Exception {
>>>
>>>         // Placeholder values; set these to match your environment.
>>>         String sparkHome = "/path/to/spark";
>>>         String hdfsfolderpath = "hdfs://localhost:9000/input";
>>>
>>>         JavaStreamingContext ssc = new JavaStreamingContext("local",
>>>                 "Test", new Duration(40000), sparkHome, "");
>>>
>>>         // Monitor the HDFS folder for new files.
>>>         JavaDStream<String> lines_2 = ssc.textFileStream(hdfsfolderpath);
>>>
>>>         // Start from the classpath defaults, then override the
>>>         // connection settings explicitly.
>>>         Configuration configuration = HBaseConfiguration.create();
>>>         configuration.set("hbase.zookeeper.property.clientPort", "2181");
>>>         configuration.set("hbase.zookeeper.quorum", "localhost");
>>>         configuration.set("hbase.master", "localhost:600000");
>>>
>>>         HBaseAdmin hBaseAdmin = new HBaseAdmin(configuration);
>>>
>>>         if (hBaseAdmin.tableExists("HABSE_TABLE")) {
>>>             System.out.println(" ANA_DATA table exists ......");
>>>         }
>>>
>>>         System.out.println(" HELLO HELLO HELLO ");
>>>
>>>         ssc.start();
>>>         ssc.awaitTermination();
>>>     }
>>> }
>>>
>>> Thank you for your help and support.
>>>
>>> Regards,
>>> Rajesh
>>>
>>
>>
>
