Hi,
I'm fetching data from Kafka topics, converting it into chunks of <= 1 MB, and
sinking them to a Kinesis data stream.
The streaming job is functional, but I see bursts of data in the Kinesis
stream with intermittent dips where the data received drops to 0. I'm attaching the
configuration parameters for Kinesis.
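For reference, the chunking step described above can be sketched as follows. This is a minimal illustration, not the poster's actual code; the class and method names (Chunker, chunk) and the use of a raw byte[] payload are assumptions.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch: split a record's payload into pieces of at most
// 1 MB each, matching the <= 1 MB chunk size described in the mail,
// before handing them to the Kinesis sink.
public class Chunker {
    static final int MAX_CHUNK_BYTES = 1024 * 1024; // 1 MB per chunk

    static List<byte[]> chunk(byte[] payload) {
        List<byte[]> chunks = new ArrayList<>();
        for (int offset = 0; offset < payload.length; offset += MAX_CHUNK_BYTES) {
            int end = Math.min(offset + MAX_CHUNK_BYTES, payload.length);
            chunks.add(Arrays.copyOfRange(payload, offset, end));
        }
        return chunks;
    }
}
```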
Hi,
I have been running a streaming job that prints data to .out files. The
files have grown very large and are exhausting the root disk on
my VM. Is it OK to delete the .out files? Would that affect any other
operation or functionality?
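One common approach here is to truncate the files in place rather than delete them: the running process keeps the file handle open, so an unlinked file would keep occupying disk space until the process restarts. The log path below is illustrative, not from the original mail.

```shell
# Truncate the .out files to zero bytes without removing them.
# Deleting an open file only unlinks it; the space is not freed
# until the process holding the handle exits. Truncation frees
# the space immediately and keeps the handle valid.
truncate -s 0 "$FLINK_HOME"/log/*.out
```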
Hi, Tejas
This exception is caused by the Rule class adding fields that cannot be recovered
from the historical checkpoint. You can try starting the job without recovering from
the checkpoint/savepoint.
I also double-checked that Rule, as you wrote it, is recognized as a POJO type.
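For context, the conditions under which Flink treats a class like Rule as a POJO can be sketched as below: the class must be public with a public no-argument constructor, and every field must be either public or reachable through public getters and setters. The field names here are illustrative, not taken from the original Rule class.

```java
// Illustrative POJO shaped like the Rule class discussed above.
// Flink's POJO recognition requires: a public class, a public
// no-arg constructor, and fields that are public or exposed via
// public getters/setters.
public class Rule {
    public String name;        // public field: usable directly

    private int threshold;     // private field: needs the accessors below

    public Rule() {}           // public no-arg constructor is required

    public int getThreshold() { return threshold; }
    public void setThreshold(int t) { this.threshold = t; }
}
```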
Best,
Weihua
> On May 13, 2022, at 7:31 AM,