Hi, have you checked this thread?
http://mail-archives.apache.org/mod_mbox/spark-user/201311.mbox/%3ccacyzca3askwd-tujhqi1805bn7sctguaoruhd5xtxcsul1a...@mail.gmail.com%3E
// maropu

On Wed, May 18, 2016 at 1:14 PM, Mohanraj Ragupathiraj <mohanaug...@gmail.com> wrote:
> I have 100 million records to be inserted into an HBase table (Phoenix) as
> the result of a Spark job. I would like to know: if I convert it to a
> DataFrame and save it, will that do a bulk load, or is it not an efficient
> way to write data to an HBase table?
>
> --
> Thanks and Regards
> Mohan

--
---
Takeshi Yamamuro
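
For reference, a minimal sketch of what "convert to a DataFrame and save it" usually looks like with the phoenix-spark connector. Note that this path issues UPSERTs through the Phoenix client rather than generating HFiles, so it is not a bulk load in the HBase sense; for a true bulk load Phoenix's MapReduce-based CsvBulkLoadTool (or writing HFiles and handing them to HBase's incremental-load tool) is the usual route. The table name, columns, and ZooKeeper URL below are placeholders, and the snippet assumes the phoenix-spark jar is on the classpath:

```scala
// Sketch only (assumed names throughout): saving a DataFrame to a
// Phoenix-managed HBase table via the phoenix-spark connector.
import org.apache.spark.sql.{SaveMode, SparkSession}

object PhoenixWriteSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("phoenix-write-sketch")
      .getOrCreate()
    import spark.implicits._

    // Toy stand-in for the 100M-record dataset.
    val df = Seq((1L, "a"), (2L, "b")).toDF("ID", "COL1")

    df.write
      .format("org.apache.phoenix.spark")
      .mode(SaveMode.Overwrite)        // the connector expects Overwrite
      .option("table", "MY_TABLE")     // assumed Phoenix table name
      .option("zkUrl", "zkhost:2181")  // assumed ZooKeeper quorum
      .save()                          // executes as parallel UPSERTs,
                                       // not an HFile bulk load

    spark.stop()
  }
}
```

This requires a running HBase/Phoenix cluster, so it is illustrative rather than directly runnable here.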