I could, but the objective is to have minimal changes at the user level.
Luís Alves
2018-03-28 16:45 GMT+01:00 Piotr Nowojski :
> Yes, they are using randomEmit, so if you didn’t add this randomised
> records dropping in it, my remark would be invalid (and consistent with
> what Chesnay wrote).
Yes, they are using randomEmit, so if you didn’t add this randomised records
dropping in it, my remark would be invalid (and consistent with what Chesnay
wrote).
Besides the questions asked by Chesnay, wouldn’t it be safer to implement record
shedding at the user level, in the form of randomly filtering
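A minimal sketch of the user-level shedding Piotr suggests: drop each record with some probability before the expensive operator. In Flink this would typically be a FilterFunction; the class and method names here are illustrative, not Flink API, so the sketch runs standalone.

```java
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical user-level load shedder: each record is independently
// dropped with the configured probability. In a real job this predicate
// would live inside a FilterFunction applied upstream of the hot operator.
public class RandomShedder {
    private final double dropProbability;

    public RandomShedder(double dropProbability) {
        this.dropProbability = dropProbability;
    }

    /** Returns true if the record should be kept (i.e. not shed). */
    public boolean keep() {
        return ThreadLocalRandom.current().nextDouble() >= dropProbability;
    }
}
```

Because the shedding happens in user code, the runtime (and its latency markers) is untouched, which sidesteps the randomEmit question entirely.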
Edward Rojas created FLINK-9103:
---
Summary: SSL verification on TaskManager when parallelism > 1
Key: FLINK-9103
URL: https://issues.apache.org/jira/browse/FLINK-9103
Project: Flink
Issue Type:
@Chesnay I tried both approaches, using the latency metric and manually by
adding a timestamp to each record.
@Piotr I can try to do the random drops in the RecordWriterOutput, but
don't the latency markers use the randomEmit method instead of emit?
2018-03-28 14:26 GMT+01:00 Chesnay Schepler :
>
My first instinct was latency markers as well, but AFAIK latency
markers are self-contained; they contain the start timestamp from the
source and we just measure the diff at each task. Thus, if the marker is
dropped it shouldn't be visible in increased latency metrics, but they
should just not
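The "self-contained" point above can be shown in a tiny sketch: the marker carries the source timestamp, and each task only computes a difference against its own clock. The class name is illustrative, not Flink's actual LatencyMarker implementation.

```java
// Illustrative sketch of a self-contained latency marker: it carries the
// timestamp set at the source, and each task measures latency as a simple
// difference. No per-hop state exists, so a dropped marker produces no
// measurement at all rather than an inflated one.
public class LatencyMarkerSketch {
    final long markedTime; // wall-clock timestamp assigned at the source

    LatencyMarkerSketch(long markedTime) {
        this.markedTime = markedTime;
    }

    long latencyAt(long now) {
        return now - markedTime;
    }
}
```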
Hi,
If you have modified RecordWriter#randomEmit, then maybe (probably?) the reason
is that you are accidentally skipping LatencyMarkers alongside records. You
can track the code path of emitting LatencyMarkers from
Output#emitLatencyMarker.
I haven’t thought that through, but maybe you should
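If the problem is indeed markers being shed along with records, one fix is to make the random drop apply only to data records. A rough sketch under that assumption (the types and the emitter are illustrative, not Flink's real RecordWriter):

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.ThreadLocalRandom;

// Illustrative emitter: random shedding applies only to data records,
// so latency markers are always forwarded and the latency metric stays honest.
public class SheddingEmitter {
    interface Element {}
    static class Record implements Element {}
    static class LatencyMarker implements Element {}

    private final double dropProbability;
    final Queue<Element> emitted = new ArrayDeque<>();

    SheddingEmitter(double dropProbability) {
        this.dropProbability = dropProbability;
    }

    void randomEmit(Element e) {
        // Never shed latency markers; only data records are candidates for dropping.
        if (e instanceof LatencyMarker
                || ThreadLocalRandom.current().nextDouble() >= dropProbability) {
            emitted.add(e);
        }
    }
}
```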
Sihua Zhou created FLINK-9102:
-
Summary: Make the JobGraph disable queued scheduling for
Flip6LocalStreamEnvironment
Key: FLINK-9102
URL: https://issues.apache.org/jira/browse/FLINK-9102
Project: Flink
Hi,
The SequenceFileWriter and the AvroKeyValueSinkWriter both support
compressed outputs. Apart from that, I'm not aware of any other Writers
which support compression. Maybe you could use these two Writers as a
guiding example. Alternatively, you could try to extend the
StreamWriterBase and wrap
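The "wrap" idea above can be sketched with plain JDK streams: pipe the writer's output through a compressing stream such as GZIPOutputStream. The Writer shape below is illustrative; Flink's StreamWriterBase has its own lifecycle methods, and this is only a sketch of the wrapping technique.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.util.zip.GZIPOutputStream;

// Sketch of a writer that compresses by wrapping the target stream.
// Class and method names are hypothetical, not Flink's Writer API.
public class CompressingWriterSketch {
    private final OutputStream out;

    CompressingWriterSketch(OutputStream target) throws IOException {
        // All bytes written through this writer are gzip-compressed.
        this.out = new GZIPOutputStream(target);
    }

    void write(byte[] record) throws IOException {
        out.write(record);
    }

    void close() throws IOException {
        out.close(); // flushes buffered data and writes the gzip trailer
    }
}
```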
Hi,
As part of a project that I'm developing, I'm extending Flink 1.2 to
support load shedding. I'm doing some performance tests to check the
performance impact of my changes compared to Flink 1.2 release.
From the results that I'm getting, I can see that load shedding is working
and that incoming
Chesnay Schepler created FLINK-9101:
---
Summary: HAQueryableStateRocksDBBackendITCase failed on travis
Key: FLINK-9101
URL: https://issues.apache.org/jira/browse/FLINK-9101
Project: Flink
Issue Type: