Hi Forideal,
luckily these problems will belong to the past in Flink 1.12, when UDAFs
are updated to the new type system [1]. Lists will be supported natively,
and registering custom KryoSerializers will work consistently as well.
Until then, another workaround is to override getAccumulatorType() and
define the accumulator type explicitly.
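
For reference, a minimal sketch of that workaround against the 1.11
AggregateFunction API could look like the following (the result type and
the accumulate/getValue bodies are assumptions for illustration, not your
actual code; only the getAccumulatorType() override is the point):

import java.util.Collections;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.table.functions.AggregateFunction;

public class RedConcat extends AggregateFunction<String, ConcatString> {

    @Override
    public ConcatString createAccumulator() {
        return new ConcatString();
    }

    public void accumulate(ConcatString acc, String value) {
        acc.add(value);
    }

    @Override
    public String getValue(ConcatString acc) {
        return String.join(",", acc.list);
    }

    @Override
    public TypeInformation<ConcatString> getAccumulatorType() {
        // Declare the accumulator type explicitly, mapping the 'list'
        // field to a ListTypeInfo so the planner does not fall back to
        // a generic Kryo serializer for it.
        return Types.POJO(
            ConcatString.class,
            Collections.singletonMap("list", Types.LIST(Types.STRING)));
    }
}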
Hi Robert Metzger,
I am very happy to share my code:
import java.util.ArrayList;
import java.util.List;

public class ConcatString {

    // Accumulator state: the strings collected so far.
    public List<String> list = new ArrayList<>();

    public void add(String toString) {
        // Cap the accumulator at 100 entries.
        if (list != null && list.size() < 100) {
            list.add(toString);
        }
    }
}
> Are you registering your custom types in the ExecutionConfig?
Hi Forideal,
When using RocksDB, we need to serialize the data (to store it on disk),
whereas when using the memory backend, the data (in this
case the RedConcat.ConcatString instances) stays on the heap, so we don't
run into this issue.
Are you registering your custom types in the ExecutionConfig?
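
Something like this (a minimal sketch; only the getConfig() registration
calls are the point, the surrounding setup is assumed):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RegisterTypes {
    public static void main(String[] args) {
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();
        // Register the accumulator class with Kryo so the RocksDB backend
        // serializes it with a stable registration.
        env.getConfig().registerKryoType(ConcatString.class);
        // Alternatively, register it as a POJO type if it follows the POJO rules.
        env.getConfig().registerPojoType(ConcatString.class);
    }
}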
Hi
I wrote a UDAF referring to this article
https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/functions/udfs.html#aggregation-functions,
when using in-memory state, the task runs normally. However, when I chose
RocksDB as the state backend, I encountered an error.
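
Roughly, my setup looks like this (a minimal sketch; the checkpoint path,
table and column names, and the registered function name are placeholders,
and RedConcat is my UDAF class):

import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class Job {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();
        // Switching from the default heap backend to RocksDB is where
        // the problem shows up.
        env.setStateBackend(new RocksDBStateBackend("file:///tmp/checkpoints"));

        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
        // Register the UDAF (1.11 still uses the legacy registration for
        // aggregate functions) and use it in a grouped aggregation.
        tEnv.registerFunction("red_concat", new RedConcat());
        tEnv.executeSql(
            "SELECT user_id, red_concat(message) FROM logs GROUP BY user_id")
            .print();
    }
}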