We are on Hortonworks 2.5 and will very soon be upgrading to 2.6. Our Spark version is 1.6.2.

We have a large volume of data that we bulk load into HBase using ImportTsv. The
MapReduce job is very slow, and we are looking into whether Spark could improve
performance. Please let me know if this can be optimized with Spark and which
packages or libraries could be used. A rough sketch of what we have in mind is below.
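
For reference, this is roughly the approach we are considering: use Spark to parse the
TSV, generate HFiles with HFileOutputFormat2, and hand them to HBase with
LoadIncrementalHFiles instead of running the ImportTsv MapReduce job. The table name,
paths, column family, and qualifier below are placeholders, not our real schema, and
this is only a sketch, not something we have validated:

    import org.apache.hadoop.fs.Path
    import org.apache.hadoop.hbase.{HBaseConfiguration, KeyValue, TableName}
    import org.apache.hadoop.hbase.client.ConnectionFactory
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable
    import org.apache.hadoop.hbase.mapreduce.{HFileOutputFormat2, LoadIncrementalHFiles}
    import org.apache.hadoop.hbase.util.Bytes
    import org.apache.hadoop.mapreduce.Job
    import org.apache.spark.{SparkConf, SparkContext}

    object SparkHBaseBulkLoad {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("SparkHBaseBulkLoad"))

        val hbaseConf = HBaseConfiguration.create()
        val tableName = TableName.valueOf("my_table")   // placeholder table name
        val conn = ConnectionFactory.createConnection(hbaseConf)
        val table = conn.getTable(tableName)
        val regionLocator = conn.getRegionLocator(tableName)

        // Pick up the table's compression/bloom/region settings for the HFiles.
        val job = Job.getInstance(hbaseConf)
        HFileOutputFormat2.configureIncrementalLoad(job, table, regionLocator)

        // Parse the TSV and sort by row key; sorting on the plain String key keeps
        // the shuffle to serializable types. This assumes ASCII row keys, so String
        // ordering matches HBase's byte ordering.
        val hfileRdd = sc.textFile("hdfs:///data/input.tsv")   // placeholder path
          .map(_.split("\t"))
          .map(f => (f(0), f(1)))
          .sortByKey()
          .map { case (row, value) =>
            val rowKey = Bytes.toBytes(row)
            val kv = new KeyValue(rowKey, Bytes.toBytes("cf"),
              Bytes.toBytes("col1"), Bytes.toBytes(value))
            (new ImmutableBytesWritable(rowKey), kv)
          }

        // Write HFiles to a staging directory, then load them into the table.
        val stagingDir = "hdfs:///tmp/hfile-staging"            // placeholder path
        hfileRdd.saveAsNewAPIHadoopFile(
          stagingDir,
          classOf[ImmutableBytesWritable],
          classOf[KeyValue],
          classOf[HFileOutputFormat2],
          job.getConfiguration)

        new LoadIncrementalHFiles(hbaseConf)
          .doBulkLoad(new Path(stagingDir), conn.getAdmin, table, regionLocator)

        conn.close()
        sc.stop()
      }
    }

We are also aware that the hbase-spark module shipped with HDP has bulk-load helpers
on HBaseContext, which might be a cleaner alternative to hand-rolling the HFile
generation; guidance on whether that is the recommended route on HDP 2.5/2.6 with
Spark 1.6.2 would be appreciated.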

PM
