Actually, Hadoop InputFormats can still be used to read from and write to
"file://", "s3n://", and similar schemes. You just won't be able to
read/write to HDFS without installing Hadoop and setting up an HDFS
cluster.
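
For example, here's a minimal PySpark sketch (the paths are purely
illustrative) that reads from and writes to the local filesystem with no
HDFS involved:

    from pyspark import SparkContext

    # Local mode; no Hadoop cluster is needed for the "file://" scheme.
    sc = SparkContext("local[2]", "local-fs-demo")

    # Reads a local text file (Hadoop's TextInputFormat under the hood).
    lines = sc.textFile("file:///tmp/input.txt")

    # Writes the result back to local disk as a directory of part files.
    lines.map(lambda l: l.upper()).saveAsTextFile("file:///tmp/output")

    sc.stop()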

To summarize: Sourav, you can use any of the prebuilt packages (i.e.
anything other than "source code").
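
As a quick sanity check from inside an unpacked prebuilt package (the
version name below is only an example), you can launch ./bin/pyspark and
run:

    # `sc` is the SparkContext the pyspark shell creates for you, e.g.
    # in a package like spark-1.4.0-bin-hadoop2.6 (name is illustrative).
    rdd = sc.parallelize(range(1000))
    print(rdd.filter(lambda x: x % 7 == 0).count())  # 143 multiples of 7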

Hope that helps,
-Jey

On Mon, Jun 29, 2015 at 7:33 AM, ayan guha <guha.a...@gmail.com> wrote:

> Hi
>
> You really do not need a Hadoop installation. You can download a
> pre-built version for any Hadoop version, unzip it, and you are good to
> go. Yes, it may complain while launching the master and workers; you can
> safely ignore those warnings. The only problem is when writing to a
> directory. Of course, you will not be able to use any Hadoop InputFormat
> etc. out of the box.
>
> ** I am assuming it's a learning question :) For production, I would
> suggest building it from source.
>
> If you are using Python and need some help, please drop me a note offline.
>
> Best
> Ayan
>
> On Tue, Jun 30, 2015 at 12:24 AM, Sourav Mazumder <
> sourav.mazumde...@gmail.com> wrote:
>
>> Hi,
>>
>> I'm trying to run Spark without Hadoop, where the data would be read
>> from and written to local disk.
>>
>> For this I have a few questions:
>>
>> 1. Which download do I need to use? In the download options I don't see
>> any binary download that does not need Hadoop. Is the only way to do
>> this to download the source code version and compile it myself?
>>
>> 2. Which installation/quick start guide should I use for this? So far I
>> haven't seen any documentation that specifically addresses installing
>> and setting up Spark without Hadoop, unless I'm missing something.
>>
>> Regards,
>> Sourav
>>
>
>
>
> --
> Best Regards,
> Ayan Guha
>
