>> I haven't thought that through, but maybe you should implement your
>> record shedding on a level of RecordWriterOutput (or all implementations of
>> the org.apache.flink.streaming.api.operators.Output?), because it's
>> easier there to differentiate between normal records and LatencyMarkers.
>>
>> Piotrek
>>
>> On 28 Mar 2018, at 11:44, Luis Alves wrote:
>>>
>>> Hi,
>>>
>>> As part of a project that I'm developing, I'm extending Flink 1.2 to
>>> support load shedding. I'm doing some performance tests to check the
>>> performance impact of my changes compared to Flink 1.2 release.
>>>
>>> From the results that I'm getting, I can see that load shedding
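The suggestion above, to shed records at the level of the Output interface so that normal records can be told apart from LatencyMarkers, could be sketched roughly as below. This is a self-contained simplification, not the poster's actual implementation: `Output`, `StreamRecord`, and `LatencyMarker` here are minimal stand-ins for the real Flink classes, and `SheddingOutput` with its drop-probability parameter is a hypothetical name chosen for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class SheddingOutputSketch {

    // Minimal stand-ins for Flink's StreamRecord and LatencyMarker.
    static class StreamRecord<T> { final T value; StreamRecord(T v) { value = v; } }
    static class LatencyMarker { final long markedTime; LatencyMarker(long t) { markedTime = t; } }

    // Simplified stand-in for org.apache.flink.streaming.api.operators.Output.
    interface Output<T> {
        void collect(StreamRecord<T> record);
        void emitLatencyMarker(LatencyMarker marker);
    }

    // Hypothetical shedding wrapper: drops a fraction of normal records,
    // but always forwards LatencyMarkers so latency tracking stays intact.
    static class SheddingOutput<T> implements Output<T> {
        private final Output<T> inner;
        private final double dropProbability;
        private final Random random = new Random(42); // fixed seed for a reproducible demo

        SheddingOutput(Output<T> inner, double dropProbability) {
            this.inner = inner;
            this.dropProbability = dropProbability;
        }

        @Override
        public void collect(StreamRecord<T> record) {
            if (random.nextDouble() >= dropProbability) {
                inner.collect(record); // keep the record
            }                          // otherwise shed it silently
        }

        @Override
        public void emitLatencyMarker(LatencyMarker marker) {
            inner.emitLatencyMarker(marker); // never shed latency markers
        }
    }

    public static void main(String[] args) {
        List<Integer> kept = new ArrayList<>();
        List<LatencyMarker> markers = new ArrayList<>();

        // A sink that just counts what reaches it.
        Output<Integer> sink = new Output<Integer>() {
            public void collect(StreamRecord<Integer> r) { kept.add(r.value); }
            public void emitLatencyMarker(LatencyMarker m) { markers.add(m); }
        };

        Output<Integer> shedding = new SheddingOutput<>(sink, 0.5);
        for (int i = 0; i < 1000; i++) {
            shedding.collect(new StreamRecord<>(i));
        }
        shedding.emitLatencyMarker(new LatencyMarker(System.currentTimeMillis()));

        System.out.println("kept=" + kept.size() + " markers=" + markers.size());
    }
}
```

Wrapping at this level means latency measurement is unaffected by shedding, which also matters for the kind of before/after performance comparison described in the original message.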
Robert Metzger created FLINK-3264:
-------------------------------------
Summary: Add load shedding policy into Kafka Consumers
Key: FLINK-3264
URL: https://issues.apache.org/jira/browse/FLINK-3264
Project: Flink
Issue Type