You can plug native Hadoop InputFormats into Spark with
sc.newAPIHadoopFile (and sc.newAPIHadoopRDD), which take the InputFormat
class as a parameter.
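To tie the two suggestions together, here's a rough Java sketch (untested, class and path names are illustrative): a whole-file InputFormat that overrides isSplitable to return false, plugged into sc.newAPIHadoopFile. The record reader follows the same whole-file-as-one-record approach as the StackOverflow answer below.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

// Never split a file: each file in the input dir becomes exactly one
// partition, so no executor ever sees half a file.
public class WholeFileInputFormat
        extends FileInputFormat<NullWritable, BytesWritable> {

    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        return false;
    }

    @Override
    public RecordReader<NullWritable, BytesWritable> createRecordReader(
            InputSplit split, TaskAttemptContext context) {
        return new WholeFileRecordReader();
    }

    // Reads the entire file as a single (NullWritable, BytesWritable) record.
    public static class WholeFileRecordReader
            extends RecordReader<NullWritable, BytesWritable> {

        private FileSplit split;
        private Configuration conf;
        private final BytesWritable value = new BytesWritable();
        private boolean processed = false;

        @Override
        public void initialize(InputSplit split, TaskAttemptContext context) {
            this.split = (FileSplit) split;
            this.conf = context.getConfiguration();
        }

        @Override
        public boolean nextKeyValue() throws IOException {
            if (processed) return false;
            byte[] contents = new byte[(int) split.getLength()];
            Path file = split.getPath();
            FileSystem fs = file.getFileSystem(conf);
            try (FSDataInputStream in = fs.open(file)) {
                IOUtils.readFully(in, contents, 0, contents.length);
            }
            value.set(contents, 0, contents.length);
            processed = true;
            return true;
        }

        @Override public NullWritable getCurrentKey() { return NullWritable.get(); }
        @Override public BytesWritable getCurrentValue() { return value; }
        @Override public float getProgress() { return processed ? 1.0f : 0.0f; }
        @Override public void close() { }
    }

    // Driver side: plug the custom format into newAPIHadoopFile.
    public static JavaPairRDD<NullWritable, BytesWritable> read(
            JavaSparkContext sc, String path) {
        return sc.newAPIHadoopFile(
                path,                       // e.g. "hdfs:///path/to/input"
                WholeFileInputFormat.class,
                NullWritable.class,
                BytesWritable.class,
                new Configuration());
    }
}
```

Replace the BytesWritable value and the record reader body with whatever your custom format actually produces; the key point is just isSplitable returning false.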

Thanks
Best Regards

On Thu, Apr 16, 2015 at 10:15 PM, Shushant Arora <shushantaror...@gmail.com>
wrote:

> Is it for spark?
>
> On Thu, Apr 16, 2015 at 10:05 PM, Akhil Das <ak...@sigmoidanalytics.com>
> wrote:
>
>> You can simply override the isSplitable method in your custom InputFormat
>> class and make it return false.
>>
>> Here's a sample code snippet:
>>
>>
>> http://stackoverflow.com/questions/17875277/reading-file-as-single-record-in-hadoop#answers-header
>>
>>
>>
>> Thanks
>> Best Regards
>>
>> On Thu, Apr 16, 2015 at 4:18 PM, Shushant Arora <
>> shushantaror...@gmail.com> wrote:
>>
>>> Hi
>>>
>>> How do I specify a custom input format in Spark and control isSplitable
>>> for a file?
>>> I need to read a file from HDFS in a custom format, and the file should
>>> not be split in between when an executor node gets that partition of the
>>> input dir.
>>>
>>> Can anyone share a sample in java.
>>>
>>> Thanks
>>> Shushant
>>>
>>
>>
>
