Yes, I have faced the above-mentioned problems after following that. Please
reread my question.
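For context, the setup I am describing looks roughly like the sketch below. This is a minimal, untested sketch based on the `storm-hdfs` module's fluent `HdfsSpout` API; the HDFS URI, directory paths, and component names are placeholders, and the exact setter names should be checked against the Storm release in use.

```java
// Sketch of an HdfsSpout-based topology (paths and values are placeholders).
import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.hdfs.spout.HdfsSpout;
import org.apache.storm.hdfs.spout.TextFileReader;
import org.apache.storm.topology.TopologyBuilder;

public class HdfsSpoutTopology {
    public static void main(String[] args) throws Exception {
        HdfsSpout spout = new HdfsSpout()
                .setReaderType("text")                        // read files line by line
                .withOutputFields(TextFileReader.defaultFields)
                .setHdfsUri("hdfs://namenode:8020")           // placeholder URI
                .setSourceDir("/data/in")                     // directory the spout watches
                .setArchiveDir("/data/done")                  // fully read files should move here
                .setBadFilesDir("/data/bad")
                .setCommitFrequencySec(30);                   // corresponds to hdfsspout.commit.sec

        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("hdfs-spout", spout, 1);
        // ... attach bolts here ...

        StormSubmitter.submitTopology("hdfs-spout-topology", new Config(),
                builder.createTopology());
    }
}
```

Note that, as far as I can tell, `hdfsspout.commit.sec` only controls how often the spout commits its read offset (checkpointing), not how often tuples are emitted, which may be part of the confusion in question 1.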

On Wed, Feb 10, 2016 at 1:37 AM, Artem Ervits <artemerv...@gmail.com> wrote:

> does this help?
> https://github.com/apache/storm/tree/master/external/storm-hdfs
>
> On Tue, Feb 9, 2016 at 1:44 AM, K Zharas <kgzha...@gmail.com> wrote:
>
>> That is all about "HdfsBolt", not "HdfsSpout"
>>
>> On Tue, Feb 9, 2016 at 7:09 AM, Artem Ervits <artemerv...@gmail.com>
>> wrote:
>>
>>> Here is some info
>>> http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.4/bk_storm-user-guide/content/writing-data-with-storm-hdfs-connector.html
>>> On Feb 8, 2016 7:53 AM, "K Zharas" <kgzha...@gmail.com> wrote:
>>>
>>>> Hi.
>>>>
>>>> 1) How can I set "HdfsSpout" so that it will emit a tuple every X
>>>> seconds?
>>>>           Is it done by "hdfsspout.commit.sec = 30"?
>>>> 2) I have only one file, which has 300+ lines. After processing all the
>>>> lines, the spout does not move the file to the "done" directory; the
>>>> ".lock" & ".inprogress" files are still there. It also gives an error
>>>> like "couldn't find the next file".
>>>>           What I want is to move the file into the "done" directory
>>>> once "HdfsSpout" has emitted all the lines, and then to kill the
>>>> topology (stop the program).
>>>>
>>>> Thank you.
>>>>
>>>
>>
>>
>> --
>> Best regards,
>> Zharas
>>
>
>


-- 
Best regards,
Zharas
