Hi,
I created a table in Phoenix with three column families and inserted the
values as shown below.
Syntax:
> CREATE TABLE TESTCF (MYKEY VARCHAR NOT NULL PRIMARY KEY, CF1.COL1 VARCHAR,
> CF2.COL2 VARCHAR, CF3.COL3 VARCHAR)
> UPSERT INTO TESTCF (MYKEY, CF1.COL1, CF2.COL2, CF3.COL3) VALUES
> ('Key2','
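For reference, a complete pair of statements of that shape might look like the following. This is only a sketch of the pattern; the row key and column values here are hypothetical, not the ones from the truncated message above:

```sql
-- Hypothetical example: one row spanning three column families.
CREATE TABLE TESTCF (MYKEY VARCHAR NOT NULL PRIMARY KEY,
                     CF1.COL1 VARCHAR, CF2.COL2 VARCHAR, CF3.COL3 VARCHAR);
UPSERT INTO TESTCF (MYKEY, CF1.COL1, CF2.COL2, CF3.COL3)
VALUES ('Key1', 'value1', 'value2', 'value3');
```

In Phoenix, each `CFn.COLn` column is stored under the named HBase column family, so a single UPSERT like this writes cells into all three families for the same row key.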
Apart from the Phoenix Spark connector, you can also have a look at:
https://github.com/Huawei-Spark/Spark-SQL-on-HBase
On Wed, Mar 9, 2016 at 4:58 PM, Divya Gehlot
wrote:
> I agree with Talat.
> As I couldn't connect directly to HBase,
> I am connecting through Phoenix.
> If you are using Hortonworks
Hi Rachana!
For help with vendor-provided add-ons, please use the given vendor's
support mechanism.
For things in Cloudera Labs, your best bet for a starting place is:
http://community.cloudera.com/t5/Cloudera-Labs/bd-p/ClouderaLabs
On Thu, Mar 10, 2016 at 6:36 AM, Rachana Srivastava <
racha
Have you looked at and tried this?
https://hbase.apache.org/book.html#_spark_streaming
Does it not work for you?
JMS
2016-03-10 9:36 GMT-05:00 Rachana Srivastava <
rachanasrivas...@yahoo.com.invalid>:
> Hello all,
> I am trying to integrate HBase with Spark Streaming using the new APIs mentioned here:
> http:/
Hello all,
I am trying to integrate HBase with Spark Streaming using the new APIs mentioned here:
http://blog.cloudera.com/blog/2014/12/new-in-cloudera-labs-sparkonhbase/
I am using JavaHBaseContext hbaseContext = new
JavaHBaseContext(jssc.sparkContext(), conf); and then called the bulk Get API
hbaseContext.stre
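For context, a minimal sketch of what a streaming bulk-get call can look like with JavaHBaseContext. The table name, batch size, and conversion functions below are placeholders, and the exact package and method signature may differ between the Cloudera Labs SparkOnHBase artifact and the upstream hbase-spark module; this assumes the upstream module:

```java
// Sketch only: assumes the hbase-spark module's JavaHBaseContext.
// Table name, batch size, and DStream contents are hypothetical.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.spark.JavaHBaseContext;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class StreamBulkGetSketch {
  public static void run(JavaStreamingContext jssc, JavaDStream<byte[]> rowKeys) {
    Configuration conf = HBaseConfiguration.create();
    JavaHBaseContext hbaseContext =
        new JavaHBaseContext(jssc.sparkContext(), conf);

    // For each micro-batch, turn incoming row keys into Gets, issue them
    // against the table in small batches, and map each Result to a String.
    JavaDStream<String> results = hbaseContext.streamBulkGet(
        TableName.valueOf("t1"),        // hypothetical table name
        2,                              // batch size per multi-get round trip
        rowKeys,
        new Function<byte[], Get>() {   // row key -> Get
          public Get call(byte[] key) { return new Get(key); }
        },
        new Function<Result, String>() { // Result -> String
          public String call(Result r) { return Bytes.toString(r.getRow()); }
        });
    results.print();
  }
}
```

The same pattern works for streamBulkPut and streamBulkDelete; the context ships the HBase Configuration to the executors so each partition can open its own connection.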
Hello,
the log you pasted is from a later time, when I was playing with all kinds of
knobs to make things work.
Here are logs from a freshly deployed cluster:
http://michal.medvecky.net/hbase/hbase-master.log
http://michal.medvecky.net/hbase/hbase-regionserver.log
http://michal.medvecky.net/status.jp