Hello Madhvi,

@pomadchin has done some work on this using Spark Notebook; maybe you
should come on https://gitter.im/andypetrella/spark-notebook and ping him?
He made some discoveries along the way that might be helpful to know.

You can also ping @lossyrob from Azavea, who did this for GeoTrellis.
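For reference, here's a rough sketch in Scala of the approach Akhil describes below (configure AccumuloInputFormat, then hand the job conf to Spark). The instance name, ZooKeeper hosts, credentials, and table name are placeholders, and this assumes the Accumulo 1.6 client API:

```scala
import org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat
import org.apache.accumulo.core.client.security.tokens.PasswordToken
import org.apache.accumulo.core.data.{Key, Value}
import org.apache.accumulo.core.security.Authorizations
import org.apache.hadoop.mapreduce.Job

// Configure the Accumulo input format on a Hadoop Job; the Job is
// only used here as a carrier for the configuration.
val job = Job.getInstance()
AccumuloInputFormat.setConnectorInfo(job, "user", new PasswordToken("pass"))
AccumuloInputFormat.setZooKeeperInstance(job, "myInstance", "zk1:2181,zk2:2181")
AccumuloInputFormat.setInputTableName(job, "myTable")
AccumuloInputFormat.setScanAuthorizations(job, new Authorizations())

// Hand the configured conf to Spark; each Accumulo entry becomes a
// (Key, Value) pair in the resulting RDD.
val rdd = sc.newAPIHadoopRDD(
  job.getConfiguration,
  classOf[AccumuloInputFormat],
  classOf[Key],
  classOf[Value])

println(rdd.count())
```

(`sc` is the SparkContext, e.g. the one a notebook already provides; this needs a running Accumulo instance to actually execute.)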

my 0.2c
andy


On Tue, Apr 21, 2015 at 9:25 AM Akhil Das <ak...@sigmoidanalytics.com>
wrote:

> You can simply use a custom InputFormat (AccumuloInputFormat) with the
> Hadoop RDDs (sc.newAPIHadoopFile etc.); all you need to do is pass the
> job confs. Here's a pretty clean discussion:
> http://stackoverflow.com/questions/29244530/how-do-i-create-a-spark-rdd-from-accumulo-1-6-in-spark-notebook#answers-header
>
> Thanks
> Best Regards
>
> On Tue, Apr 21, 2015 at 9:55 AM, madhvi <madhvi.gu...@orkash.com> wrote:
>
>> Hi all,
>>
>> Is there anything to integrate Spark with Accumulo, or to make Spark
>> process data stored in Accumulo?
>>
>> Thanks
>> Madhvi Gupta
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>> For additional commands, e-mail: user-h...@spark.apache.org
>>
>>
>