Hi Tathagata,

Yes, I'm pretty sure there are no errors in the driver or worker logs.
The streaming UI shows the job running fine: I can see tasks being
completed, and the receiver is picking up new messages (in the UI). I'm
running the same job twice, with each instance reading a different type of
message (in JSON form). However, this problem only occurs with one message
type; the other has never hit this issue, so I suspect the problem is
related to the data (malformed JSON). However, the parsing is wrapped in
error handling, so bad records should be caught and logged, yet I don't see
anything of that sort. Have you ever come across a case where the receiver
picks up suspicious data before parsing and the job ends up stuck in a
loop? Again, I would expect to see errors but I don't, which is what makes
this difficult to debug.
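
For reference, the parsing is wrapped roughly along the lines of the sketch
below (a minimal illustration rather than the actual job; Record, parseJson
and the "JsonParse" logger name are just placeholders):

import scala.util.{Try, Success, Failure}
import org.apache.log4j.Logger
import org.apache.spark.streaming.dstream.DStream

object ParsingSketch {

  // Placeholder record type and parser standing in for the real ones.
  case class Record(raw: String)
  def parseJson(body: String): Record =
    if (body.trim.startsWith("{")) Record(body) else sys.error("not JSON: " + body)

  // Every record goes through Try; failures are logged at WARN so malformed
  // JSON shows up in the executor logs instead of being dropped silently.
  def parseEvents(lines: DStream[String]): DStream[Record] =
    lines.flatMap { body =>
      Try(parseJson(body)) match {
        case Success(record) => Some(record)
        case Failure(e) =>
          // Logger fetched inside the closure so the task stays serializable.
          Logger.getLogger("JsonParse").warn("Dropping malformed record: " + body, e)
          None
      }
    }
}

With something like this in place I would expect malformed records to show
up as warnings in the executor logs, yet nothing of the kind appears.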

Cheers,

Uthay.

On 12 October 2015 at 21:02, Tathagata Das <t...@databricks.com> wrote:

> Are you sure that there are no log4j errors in the driver logs? What if
> you try enabling debug level? And what does the streaming UI say?
>
>
> On Mon, Oct 12, 2015 at 12:50 PM, Uthayan Suthakar <
> uthayan.sutha...@gmail.com> wrote:
>
>> Any suggestions? Is there any way I could debug this issue?
>>
>> Cheers,
>>
>> Uthay
>>
>> On 11 October 2015 at 18:39, Uthayan Suthakar <uthayan.sutha...@gmail.com
>> > wrote:
>>
>>> Hello all,
>>>
>>> I have a Spark Streaming job that runs and produces results successfully.
>>> However, after a few days the job stops producing any output. I can see the
>>> job is still running (polling data from Flume, completing jobs and their
>>> subtasks), yet it fails to produce any output. I have to restart the
>>> streaming job to start processing again. I've checked the log file, and
>>> there are no errors or exceptions. Everything appears to be running
>>> smoothly except that no results are produced. Has anyone come across this issue?
>>>
>>> Spark version is: 1.3.0
>>>
>>> Cheers,
>>>
>>> Uthay
>>>
>>
>>
>
