Error in using hbase-spark connector

2020-03-11 Thread PRAKASH GOPALSAMY
Hi Team, We are trying to read an HBase table from Spark using the hbase-spark connector, but our job is failing in the filter-pushdown part of stage 0 due to the error below. Kindly help us to resolve this issue. Caused by: java.lang.NoClassDefFoundError: scala/collection/immutable/StringOps
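A NoClassDefFoundError on scala/collection/immutable/StringOps usually points to a Scala binary-version mismatch (the connector jar was compiled against a different Scala version than the one Spark runs on) or to a conflicting scala-library on the classpath. A minimal sketch of pinning the versions together in an sbt build; the version numbers below are illustrative assumptions, not taken from this thread:

    // build.sbt -- keep scalaVersion aligned with the Scala binary version
    // of both the Spark distribution and the hbase-spark connector artifact
    scalaVersion := "2.11.12"

    libraryDependencies ++= Seq(
      // Spark artifacts resolved for Scala 2.11 via %%
      "org.apache.spark" %% "spark-sql" % "2.4.5" % "provided",
      // Apache HBase Connectors release of hbase-spark (assumed version,
      // built against Scala 2.11)
      "org.apache.hbase.connectors.spark" % "hbase-spark" % "1.0.0"
    )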

hbase + spark + hdfs

2017-05-08 Thread mathieu ferlay
Message in context: http://apache-spark-user-list.1001560.n3.nabble.com/hbase-spark-hdfs-tp28661.html

hbase + spark + hdfs

2017-05-05 Thread mathieu ferlay
Hi everybody. I'm totally new to Spark and I want to ask about something I have not managed to find. I have a full Ambari install with HBase, Hadoop, and Spark. My code reads and writes to HDFS via HBase. Thus, as I understand it, all stored data is in byte format in HDFS. Now, I know that it's possible

Re: HBase Spark

2017-02-03 Thread Benjamin Kim
at org.apache.spark.sql.execution.datasources.hbase.DefaultSource.createRelation(HBaseRelation.scala:51)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)

Re: HBase Spark

2017-02-03 Thread Asher Krim
…options(Map(HBaseTableCatalog.tableCatalog -> cat))
  .format("org.apache.spark.sql.execution.datasources.hbase")
  .load()
}

val df = withCatalog(cat)
df.show
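The fragment above is the tail of a catalog-based DataFrame read. For context, a fuller sketch of the same pattern reconstructed from the fragments in this thread; the catalog JSON, table name, and column mappings are illustrative assumptions:

    import org.apache.spark.sql.DataFrame
    import org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog

    // Hypothetical catalog mapping an HBase table to DataFrame columns
    val cat = s"""{
      |"table":{"namespace":"default", "name":"table1"},
      |"rowkey":"key",
      |"columns":{
      |"col0":{"cf":"rowkey", "col":"key", "type":"string"},
      |"col1":{"cf":"cf1", "col":"col1", "type":"string"}
      |}
      |}""".stripMargin

    // sqlContext is assumed to be in scope (e.g. a Spark 1.6 shell)
    def withCatalog(cat: String): DataFrame = {
      sqlContext.read
        .options(Map(HBaseTableCatalog.tableCatalog -> cat))
        .format("org.apache.spark.sql.execution.datasources.hbase")
        .load()
    }

    val df = withCatalog(cat)
    df.show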

Re: HBase Spark

2017-02-03 Thread Benjamin Kim
java.lang.NoSuchMethodError: scala.runtime.ObjectRef.create(Ljava/lang/Object;)Lscala/runtime/ObjectRef;
at org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog$.apply(HBaseTableCatalog.scala:232)

Re: HBase Spark

2017-02-03 Thread Asher Krim
It gives me this error.

java.lang.NoSuchMethodError: scala.runtime.ObjectRef.create(Ljava/lang/Object;)Lscala/runtime/ObjectRef;
at org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog$.apply(HBaseTableCatalog.scala:232)
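scala.runtime.ObjectRef.create was added in Scala 2.11, so this NoSuchMethodError is the typical signature of a jar built for Scala 2.11 running on a Scala 2.10 Spark installation (common on CDH 5.x). A quick hedged check from the spark-shell; the printed value is only an example:

    // Print the Scala version the running Spark was built with
    println(scala.util.Properties.versionString)
    // e.g. "version 2.10.5" -- a _2.11 connector jar cannot run here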

Re: HBase Spark

2017-02-03 Thread Benjamin Kim
at org.apache.spark.sql.execution.datasources.hbase.HBaseRelation.<init>(HBaseRelation.scala:77)
at org.apache.spark.sql.execution.datasources.hbase.DefaultSource.createRelation(HBaseRelation.scala:51)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)

Re: HBase Spark

2017-02-03 Thread Benjamin Kim
at org.apache.spark.sql.execution.datasources.hbase.HBaseRelation.<init>(HBaseRelation.scala:77)
at org.apache.spark.sql.execution.datasources.hbase.DefaultSource.createRelation(HBaseRelation.scala:51)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)

Re: HBase Spark

2017-02-03 Thread Asher Krim
at org.apache.spark.sql.execution.datasources.hbase.HBaseRelation.<init>(HBaseRelation.scala:77)
at org.apache.spark.sql.execution.datasources.hbase.DefaultSource.createRelation(HBaseRelation.scala:51)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)

Re: HBase Spark

2017-02-02 Thread Benjamin Kim
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)

If you can please help, I would be grateful. Cheers, Ben

Re: HBase Spark

2017-02-02 Thread Asher Krim
at org.apache.spark.sql.execution.datasources.hbase.DefaultSource.createRelation(HBaseRelation.scala:51)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)

Re: HBase Spark

2017-02-02 Thread Benjamin Kim
at org.apache.spark.sql.execution.datasources.hbase.DefaultSource.createRelation(HBaseRelation.scala:51)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)

If you can please help, I would be grateful. Cheers, Ben

Re: HBase Spark

2017-01-31 Thread Benjamin Kim
Elek, if I cannot use the HBase Spark module, then I'll give it a try. Thanks, Ben

> On Jan 31, 2017, at 1:02 PM, Marton, Elek <h...@anzix.net> wrote:
> I tested this one with hbase 1.2.4: https://github.com/hortonworks-spark/shc
> Marton

Re: HBase Spark

2017-01-31 Thread Marton, Elek
I tested this one with hbase 1.2.4: https://github.com/hortonworks-spark/shc Marton On 01/31/2017 09:17 PM, Benjamin Kim wrote: Does anyone know how to backport the HBase Spark module to HBase 1.2.0? I tried to build it from source, but I cannot get it to work. Thanks, Ben
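For anyone following this thread: shc is published under Hortonworks coordinates rather than Apache ones. A hedged sbt sketch of pulling it in; the resolver URL and version string are assumptions based on the shc README, not stated in this thread:

    resolvers += "Hortonworks" at "https://repo.hortonworks.com/content/groups/public/"

    // The artifact version encodes Spark and Scala versions; adjust to your cluster
    libraryDependencies += "com.hortonworks" % "shc-core" % "1.1.1-2.1-s_2.11"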

HBase Spark

2017-01-31 Thread Benjamin Kim
Does anyone know how to backport the HBase Spark module to HBase 1.2.0? I tried to build it from source, but I cannot get it to work. Thanks, Ben

RE: HBase-Spark Module

2016-07-29 Thread David Newberger
Hi Ben, This seems more like a question for community.cloudera.com. However, it would be in HBase, not Spark, I believe. https://repository.cloudera.com/artifactory/webapp/#/artifacts/browse/tree/General/cloudera-release-repo/org/apache/hbase/hbase-spark David Newberger

HBase-Spark Module

2016-07-29 Thread Benjamin Kim
Has anyone tried using the hbase-spark module? I tried to follow the examples in conjunction with CDH 5.8.0, but I cannot find the HBaseTableCatalog class in the module or in any of the Spark jars. Can someone help? Thanks, Ben

Re: HBase / Spark Kerberos problem

2016-05-19 Thread Arun Natva
[1] https://github.com/apache/spark/blob/master/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala
[2] https://github.com/apache/spark/blob/master/yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnSparkHadoopUtil.scala

RE: HBase / Spark Kerberos problem

2016-05-19 Thread philipp.meyerhoefer
…to credentials” and the .count() on my HBase RDD works fine.

RE: HBase / Spark Kerberos problem

2016-05-19 Thread Ellis, Tom (Financial Markets IT)
Have you had a look at this issue? https://issues.apache.org/jira/browse/SPARK-12279

Re: HBase / Spark Kerberos problem

2016-05-19 Thread John Trengrove
Have you had a look at this issue? https://issues.apache.org/jira/browse/SPARK-12279 There is a comment by Y Bodnar on how they successfully got Kerberos and HBase working.

HBase / Spark Kerberos problem

2016-05-18 Thread philipp.meyerhoefer
Hi all, I have been puzzling over a Kerberos problem for a while now and wondered if anyone can help. For spark-submit, I specify --keytab x --principal y, which creates my SparkContext fine. Connections to Zookeeper Quorum to find the HBase master work well too. But when it comes to a
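When the SparkContext and ZooKeeper connections work but the HBase reads fail with Kerberos errors, the usual culprit is a missing HBASE_AUTH_TOKEN: on YARN, Spark only obtains an HBase delegation token at submit time if the HBase jars and hbase-site.xml are visible on the submitting client's classpath (see the SPARK-12279 pointer above). A small diagnostic sketch, assuming it runs in the driver after the SparkContext is up:

    import scala.collection.JavaConverters._
    import org.apache.hadoop.security.UserGroupInformation

    // List the delegation tokens the driver actually holds; if Kerberos + HBase
    // is wired up correctly, a token of kind HBASE_AUTH_TOKEN should be present.
    val tokens = UserGroupInformation.getCurrentUser.getCredentials.getAllTokens.asScala
    tokens.foreach(t => println(t.getKind))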

HBase Spark Module

2016-04-20 Thread Benjamin Kim
I see that the new CDH 5.7 has been released with the HBase Spark module built in. I was wondering if I could just download it and use the hbase-spark jar file for CDH 5.5. Has anyone tried this yet? Thanks, Ben

Re: HBase Spark Streaming giving error after restore

2015-10-17 Thread Amit Hora
…TypeReference[HashMap[String, String]]() {})

var ts: Long = maprecord.get("ts").toLong
var tweetID: Long = maprecord.get("id").toLong
val key = ts + "_" + tweetID
val put = new Put(Bytes.toBytes(key))

Re: HBase Spark Streaming giving error after restore

2015-10-17 Thread Aniket Bhatnagar
ass", >>> classOf[TableOutputFormat[String]], classOf[OutputFormat[String, >>> BoxedUnit]]) >>> >>> rdd.map ( record =>(new ImmutableBytesWritable,{ >>> >>> >>> var maprecord = new HashMap[String, String]; >>>

HBase Spark Streaming giving error after restore

2015-10-16 Thread Amit Singh Hora
…put.add(Bytes.toBytes(colfamily.value), Bytes.toBytes(kv._1), Bytes.toBytes(kv._2))
  })
  put
})).saveAsNewAPIHadoopDataset(hconf)
})

Help me out in solving this, as it is urgent for me. Message in context: http://apache-spark-user-list.1001560.n3.nabble.com/HBase-Spark-Streaming-giving-error-after-restore-tp25090.html

Re: HBase Spark Streaming giving error after restore

2015-10-16 Thread Ted Yu
….foreach(kv => {
  // println(kv._1 + " - " + kv._2)
  put.add(Bytes.toBytes(colfamily.value), Bytes.toBytes(kv._1), Bytes.toBytes(kv._2))
})
put
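The fragments quoted across this thread add up to a DStream-to-HBase write via TableOutputFormat and saveAsNewAPIHadoopDataset. A consolidated sketch of that pattern; the table name "tweets", column family "cf", and the record-parsing step are assumptions filling in what the quotes truncate:

    import org.apache.hadoop.hbase.HBaseConfiguration
    import org.apache.hadoop.hbase.client.Put
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable
    import org.apache.hadoop.hbase.mapreduce.TableOutputFormat
    import org.apache.hadoop.hbase.util.Bytes
    import org.apache.hadoop.mapreduce.Job

    // Configure the output once; TableOutputFormat routes the Puts to HBase
    val job = Job.getInstance(HBaseConfiguration.create())
    job.getConfiguration.set(TableOutputFormat.OUTPUT_TABLE, "tweets")
    job.setOutputFormatClass(classOf[TableOutputFormat[ImmutableBytesWritable]])
    val hconf = job.getConfiguration

    // dstream: DStream[String] with hypothetical "k=v,k=v,..." records
    dstream.foreachRDD { rdd =>
      rdd.map { record =>
        val fields = record.split(",").map { p =>
          val Array(k, v) = p.split("=", 2); k -> v
        }.toMap
        val key = fields("ts") + "_" + fields("id")
        val put = new Put(Bytes.toBytes(key))
        fields.foreach { case (k, v) =>
          put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes(k), Bytes.toBytes(v))
        }
        (new ImmutableBytesWritable, put)
      }.saveAsNewAPIHadoopDataset(hconf)
    }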

Re: Hbase Spark streaming issue.

2015-09-24 Thread Shixiong Zhu
Looks like you have an incompatible hbase-default.xml somewhere on the classpath. You can use the following code to find the location of "hbase-default.xml": println(Thread.currentThread().getContextClassLoader().getResource("hbase-default.xml")) Best Regards, Shixiong Zhu

Hbase Spark streaming issue.

2015-09-21 Thread Siva
Hi, I'm seeing a strange error while inserting data from Spark Streaming into HBase. I am able to write data from Spark (without streaming) to HBase successfully, but when I use the same code to write a DStream I see the error below. I tried setting the parameters below; still didn't