Hi,
I haven't seen this error before, and searching for it on Google didn't
turn up anything helpful either.

Did you also check the GC times for Flink? Is your Flink job doing any
heavy tasks (like maintaining large windows, or other operations involving
a lot of heap space)?
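
If GC logging is not enabled yet, one way to get the numbers (just a
sketch, assuming you can edit flink-conf.yaml and restart the
TaskManagers; the log path is only an example) is to pass GC logging
flags to the JVM via env.java.opts:

  env.java.opts: "-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xloggc:/tmp/taskmanager-gc.log"

Long GC pauses would also stall the DFSClient's pipeline threads, which
could produce exactly these warnings.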

Regards,
Robert


On Tue, Oct 11, 2016 at 10:51 AM, static-max <flasha...@googlemail.com>
wrote:

> Hi,
>
> I have a low-throughput job (approx. 1000 messages per minute) that
> consumes from Kafka and writes directly to HDFS. After an hour or so, I get
> the following warnings in the Task Manager log:
>
> 2016-10-10 01:59:44,635 WARN  org.apache.hadoop.hdfs.DFSClient
>                    - Slow ReadProcessor read fields took 30001ms
> (threshold=30000ms); ack: seqno: 66 reply: SUCCESS reply: SUCCESS reply:
> SUCCESS downstreamAckTimeNanos: 1599276 flag: 0 flag: 0 flag: 0, targets:
> [DatanodeInfoWithStorage[Node1, Node2, Node3]]
> 2016-10-10 02:04:44,635 WARN  org.apache.hadoop.hdfs.DFSClient
>                    - Slow ReadProcessor read fields took 30002ms
> (threshold=30000ms); ack: seqno: 13 reply: SUCCESS reply: SUCCESS reply:
> SUCCESS downstreamAckTimeNanos: 2394027 flag: 0 flag: 0 flag: 0, targets:
> [DatanodeInfoWithStorage[Node1, Node2, Node3]]
> 2016-10-10 02:05:14,635 WARN  org.apache.hadoop.hdfs.DFSClient
>                    - Slow ReadProcessor read fields took 30001ms
> (threshold=30000ms); ack: seqno: 17 reply: SUCCESS reply: SUCCESS reply:
> SUCCESS downstreamAckTimeNanos: 2547467 flag: 0 flag: 0 flag: 0, targets:
> [DatanodeInfoWithStorage[Node1, Node2, Node3]]
>
> I have not found any errors or warnings at the datanodes or the namenode.
> Every other application using HDFS performs fine. The load is very low and
> network latency is fine as well. I also checked GC and disk I/O.
>
> The files written are very small (only a few MB), so writing the blocks
> should be fast.
>
> The threshold is exceeded by only 1 or 2 ms, which makes me wonder.
>
> Does anyone have an idea where to look next or how to fix these warnings?
>
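
A side note on the 30000ms threshold in those warnings: as far as I know it
comes from the HDFS client setting dfs.client.slow.io.warning.threshold.ms
(default 30000). If everything else really is healthy, one option (clearly a
workaround, not a fix) would be to raise it in the hdfs-site.xml that the
Flink client picks up, for example:

  <property>
    <name>dfs.client.slow.io.warning.threshold.ms</name>
    <value>60000</value>
  </property>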
