You can simply use a custom InputFormat (AccumuloInputFormat) with the
Hadoop RDD APIs (sc.newAPIHadoopRDD etc.); all you need to do is pass in
the job configuration. Here's a pretty clean discussion:
http://stackoverflow.com/questions/29244530/how-do-i-create-a-spark-rdd-from-accumulo-1-6-in-spark-notebook#answers-header
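For reference, here's a minimal sketch in Scala along the lines of that
answer, assuming Accumulo 1.6's mapreduce API and an existing
SparkContext sc; the instance name, zookeepers, table name, and
credentials below are placeholders you'd swap for your own:

import org.apache.accumulo.core.client.ClientConfiguration
import org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat
import org.apache.accumulo.core.client.security.tokens.PasswordToken
import org.apache.accumulo.core.data.{Key, Value}
import org.apache.accumulo.core.security.Authorizations
import org.apache.hadoop.mapreduce.Job

// Build a Hadoop Job just to hold the InputFormat configuration.
val job = Job.getInstance(sc.hadoopConfiguration)

// Placeholder connection details -- replace with your own.
AccumuloInputFormat.setConnectorInfo(job, "root", new PasswordToken("secret"))
AccumuloInputFormat.setZooKeeperInstance(job,
  new ClientConfiguration()
    .withInstance("myInstance")
    .withZkHosts("zkHost1:2181,zkHost2:2181"))
AccumuloInputFormat.setInputTableName(job, "myTable")
AccumuloInputFormat.setScanAuthorizations(job, new Authorizations())

// Create an RDD of (Key, Value) pairs backed by AccumuloInputFormat.
val rdd = sc.newAPIHadoopRDD(
  job.getConfiguration,
  classOf[AccumuloInputFormat],
  classOf[Key],
  classOf[Value])

// e.g. pull a few rows out as plain strings
rdd.map { case (k, v) => (k.getRow.toString, v.toString) }.take(5)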

Thanks
Best Regards

On Tue, Apr 21, 2015 at 9:55 AM, madhvi <madhvi.gu...@orkash.com> wrote:

> Hi all,
>
> Is there anything to integrate Spark with Accumulo or to make Spark
> process Accumulo data?
>
> Thanks
> Madhvi Gupta
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
>
>
