Hello everybody, I would like to use bulkPut, bulkGet, and SparkSQL with HBase.
I have already read the documentation: HBase and Spark <https://hbase.apache.org/book.html#spark>. I also looked at the examples on GitHub: hbase/hbase-spark/src/main/scala/org/apache/hbase/spark/example <https://github.com/apache/hbase/tree/master/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/example>. I tried bulkGet and reading from HBase with SparkSQL, but I couldn't make either work.

Has anyone managed to get this working? If so, could you share some tips and the versions of Scala, Spark, HBase, and Hadoop that you used?

My versions:
- Spark: 1.5.2
- HBase: 1.1.2
- Scala: 2.11.4
- Hadoop: 2.7.1

Extract of my build.sbt:

    libraryDependencies ++= Seq(
      "org.apache.hbase" % "hbase-server" % "1.1.2" excludeAll ExclusionRule(organization = "org.mortbay.jetty"),
      "org.apache.hbase" % "hbase-common" % "1.1.2" excludeAll ExclusionRule(organization = "javax.servlet"),
      "org.apache.spark" %% "spark-core" % "1.5.2",
      "org.apache.spark" %% "spark-sql" % "1.5.2",
      "org.apache.hbase" % "hbase-spark" % "2.0.0-SNAPSHOT"
    )

Best regards,
Germain Tanguy.
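For reference, here is a minimal bulkPut sketch along the lines of the hbase-spark examples in the HBase book linked above. It assumes a reachable HBase cluster and an existing table; the table name "t1" and column family "cf" are placeholders, not from the original post, and the snippet is not runnable without the cluster.

    import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
    import org.apache.hadoop.hbase.client.Put
    import org.apache.hadoop.hbase.spark.HBaseContext
    import org.apache.hadoop.hbase.util.Bytes
    import org.apache.spark.{SparkConf, SparkContext}

    object BulkPutSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("bulkPut-sketch"))
        // Picks up hbase-site.xml from the classpath; cluster connection is assumed.
        val hbaseContext = new HBaseContext(sc, HBaseConfiguration.create())

        // Each record: (rowKey, Seq((family, qualifier, value)))
        val rdd = sc.parallelize(Seq(
          (Bytes.toBytes("row1"), Seq((Bytes.toBytes("cf"), Bytes.toBytes("q1"), Bytes.toBytes("v1")))),
          (Bytes.toBytes("row2"), Seq((Bytes.toBytes("cf"), Bytes.toBytes("q1"), Bytes.toBytes("v2"))))
        ))

        // bulkPut turns each record into a Put; "t1" is a placeholder table name.
        hbaseContext.bulkPut[(Array[Byte], Seq[(Array[Byte], Array[Byte], Array[Byte])])](
          rdd,
          TableName.valueOf("t1"),
          record => {
            val put = new Put(record._1)
            record._2.foreach { case (family, qualifier, value) =>
              put.addColumn(family, qualifier, value)
            }
            put
          })

        sc.stop()
      }
    }

If others have a working combination of versions for this pattern (in particular hbase-spark against HBase 1.1.x), that would be the most useful data point.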
