Thanks Ben
The thing is, I am using Spark 2 and no stack from CDH!
Is this approach to reading from/writing to HBase specific to Cloudera?
Dr Mich Talebzadeh
LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
Mich,
I know that up until CDH 5.4 we had to add the HTrace JAR to the classpath, using
the command below, to make it work. But after upgrading to CDH 5.7 it became
unnecessary.
echo "/opt/cloudera/parcels/CDH/jars/htrace-core-3.2.0-incubating.jar" >>
/etc/spark/conf/classpath.txt
Hope this helps.
Trying a bulk load using HFiles in Spark, as in the example below:
import org.apache.spark._
import org.apache.spark.rdd.NewHadoopRDD
import org.apache.hadoop.hbase.{HBaseConfiguration, HTableDescriptor}
import org.apache.hadoop.hbase.client.HBaseAdmin
import
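The snippet above is cut off mid-import. For context, a minimal end-to-end sketch of the HFile bulk-load approach might look like the following. This is an assumption-laden illustration, not the original poster's code: the table name `test_table`, column family `cf`, qualifier `col`, and staging path `/tmp/hfiles` are all hypothetical, and it assumes the HBase 1.x client API (where `LoadIncrementalHFiles` lives in `org.apache.hadoop.hbase.mapreduce`).

```scala
import org.apache.hadoop.fs.Path
import org.apache.hadoop.hbase.{HBaseConfiguration, KeyValue, TableName}
import org.apache.hadoop.hbase.client.ConnectionFactory
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.{HFileOutputFormat2, LoadIncrementalHFiles}
import org.apache.hadoop.hbase.util.Bytes
import org.apache.spark.sql.SparkSession

object HBaseBulkLoadSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("HBaseBulkLoad").getOrCreate()
    val conf = HBaseConfiguration.create()
    // Hypothetical table/family/path names for illustration only.
    val tableName = TableName.valueOf("test_table")
    val stagingDir = "/tmp/hfiles"

    // Build (rowkey, KeyValue) pairs. HFileOutputFormat2 requires the
    // records to arrive sorted by row key, hence the sortByKey().
    val kvRdd = spark.sparkContext
      .parallelize(Seq(("row1", "v1"), ("row2", "v2")))
      .map { case (rowKey, value) =>
        val kv = new KeyValue(Bytes.toBytes(rowKey), Bytes.toBytes("cf"),
          Bytes.toBytes("col"), Bytes.toBytes(value))
        (new ImmutableBytesWritable(Bytes.toBytes(rowKey)), kv)
      }
      .sortByKey()

    // Step 1: write HFiles to a staging directory on HDFS.
    kvRdd.saveAsNewAPIHadoopFile(stagingDir,
      classOf[ImmutableBytesWritable], classOf[KeyValue],
      classOf[HFileOutputFormat2], conf)

    // Step 2: hand the HFiles to HBase with an incremental bulk load,
    // which moves the files into the region directories directly.
    val connection = ConnectionFactory.createConnection(conf)
    try {
      val table = connection.getTable(tableName)
      val regionLocator = connection.getRegionLocator(tableName)
      new LoadIncrementalHFiles(conf).doBulkLoad(
        new Path(stagingDir), connection.getAdmin, table, regionLocator)
    } finally {
      connection.close()
    }
  }
}
```

Note that for a table with more than one region you would normally also call `HFileOutputFormat2.configureIncrementalLoad` on a Hadoop `Job` so the output is partitioned to match region boundaries; the sketch above only covers the simple single-region case.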