What you call "sub-categories" are packages pre-built to run on specific
Hadoop environments, so it really depends on where you want to run Spark.
As far as I know, the main difference is the included HDFS binding; if you
just want to play around with Spark, any of the pre-built packages should
be fine. I wouldn't use the source package though, because you'd have to
compile it yourself.
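
For example, if you just want to try it locally, you can grab any of the
pre-built packages and start a shell right away (the exact file name and
URL below are just an example; use whatever the download page offers):

    # download and unpack a pre-built package (file name is an example)
    wget https://archive.apache.org/dist/spark/spark-1.3.0/spark-1.3.0-bin-hadoop2.4.tgz
    tar xzf spark-1.3.0-bin-hadoop2.4.tgz
    cd spark-1.3.0-bin-hadoop2.4
    ./bin/spark-shell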

PS: Make sure to use "Reply to all". If you don't include the mailing
list in your response, I'm the only one who will get your message.
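
PPS: The Kafka improvement mentioned below is the new "direct" stream API
that came with 1.3.0. Here is a minimal sketch of how you'd wire it up
(broker address and topic name are placeholders, and it assumes the
spark-streaming-kafka artifact is on your classpath):

    import kafka.serializer.StringDecoder
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.kafka.KafkaUtils
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object KafkaStreamSketch {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("kafka-sketch").setMaster("local[2]")
        val ssc = new StreamingContext(conf, Seconds(2))

        // placeholders: point these at your actual broker and topic
        val kafkaParams = Map("metadata.broker.list" -> "localhost:9092")
        val topics = Set("my-topic")

        // the direct stream tracks Kafka offsets itself instead of running
        // a separate receiver, which is the main improvement over the old
        // receiver-based approach
        val messages = KafkaUtils.createDirectStream[
          String, String, StringDecoder, StringDecoder](ssc, kafkaParams, topics)

        messages.map(_._2).print() // print the message values
        ssc.start()
        ssc.awaitTermination()
      }
    }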

Regards,
Jeff

2015-03-18 10:49 GMT+01:00 James King <jakwebin...@gmail.com>:

> Any sub-category recommendations: Hadoop, MapR, CDH?
>
> On Wed, Mar 18, 2015 at 10:48 AM, James King <jakwebin...@gmail.com>
> wrote:
>
>> Many thanks Jeff will give it a go.
>>
>> On Wed, Mar 18, 2015 at 10:47 AM, Jeffrey Jedele <
>> jeffrey.jed...@gmail.com> wrote:
>>
>>> Probably 1.3.0 - it has some improvements in the included Kafka receiver
>>> for streaming.
>>>
>>> https://spark.apache.org/releases/spark-release-1-3-0.html
>>>
>>> Regards,
>>> Jeff
>>>
>>> 2015-03-18 10:38 GMT+01:00 James King <jakwebin...@gmail.com>:
>>>
>>>> Hi All,
>>>>
>>>> Which build of Spark is best when using Kafka?
>>>>
>>>> Regards
>>>> jk
