Fwd: FLINK TABLE API UDAF QUESTION, For heap backends, the new state serializer must not be incompatible

2019-08-26 Thread orlando qi
-- Forwarded message - From: orlando qi Date: Fri, Aug 23, 2019, 10:44 AM Subject: FLINK TABLE API UDAF QUESTION, For heap backends, the new state serializer must not be incompatible To: Hello everyone: I defined a UDAF function when I am using the FLINK TABLE API to achieve

Re: FLINK TABLE API custom aggregate function (UDAF) reports a state-serializer incompatibility when restoring a job from a checkpoint (For heap backends, the new state serializer must not be incompatible)

2019-08-26 Thread orlando qi
"stream_deducting_reason" -> x.deductingReason, "stream_repair_method" -> x.repairMethod, "stream_evaluation_time_millis" -> x.evaluationTimeMillis.toString) }) Jark Wu wrote on Mon, Aug 26, 2019 at 12:18 PM: > Hi Qi, > > Did you modify the ACC (accumulator) structure of the UDAF before restoring the job from the checkpoint? > > Also, if possible, it would be best to include the UDAF implementation and the SQL as well. > > Best, > Jark > > > On Aug 23, 2019, at 11:08, orlando qi wrote: > > > > > > at > > -- 祁豪兵
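Jark's question gets at the root cause: for heap state backends, Flink derives the accumulator's state serializer from the ACC class itself, so adding, removing, or reordering fields changes the serialized layout and makes old checkpoints unreadable. The names below (`SumAcc`, the hand-rolled `serialize`/`deserialize`) are hypothetical stand-ins for Flink's generated serializer, used only to sketch the mechanism:

```scala
import java.nio.ByteBuffer

// Hypothetical UDAF accumulator; Flink derives a state serializer from
// this structure, so its field layout is effectively the checkpoint format.
case class SumAcc(sum: Long, count: Long)

// Minimal stand-in for the derived serializer: a fixed 16-byte layout.
def serialize(acc: SumAcc): Array[Byte] = {
  val buf = ByteBuffer.allocate(16)
  buf.putLong(acc.sum).putLong(acc.count)
  buf.array()
}

def deserialize(bytes: Array[Byte]): SumAcc = {
  val buf = ByteBuffer.wrap(bytes)
  SumAcc(buf.getLong(), buf.getLong())
}

val snapshot = serialize(SumAcc(42L, 7L))
// Restoring with the SAME accumulator layout works:
assert(deserialize(snapshot) == SumAcc(42L, 7L))
// If SumAcc gained a field (say `max: Long`), the new serializer would
// expect 24 bytes per record and refuse the 16-byte snapshot -- the
// analogue of "the new state serializer must not be incompatible".
```

This is why restoring works with built-in functions (their accumulator layouts are stable) but fails after the custom ACC class changes between the checkpoint and the restart.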

FLINK TABLE API custom aggregate function (UDAF) reports a state-serializer incompatibility when restoring a job from a checkpoint (For heap backends, the new state serializer must not be incompatible)

2019-08-22 Thread orlando qi
Hi all: While implementing a GROUP BY aggregation with the Flink Table API, I defined a custom UDAF. It runs correctly the first time the job is started on the cluster, but restoring the job from a checkpoint reports the error below; built-in functions have no such problem. How can I solve this? java.lang.RuntimeException: Error while getting state at org.apache.flink.runtime.state.DefaultKeyedStateStore.getState(DefaultKeyedStateStore.java:62) at

FLINK TABLE API UDAF QUESTION, For heap backends, the new state serializer must not be incompatible

2019-08-22 Thread orlando qi
Hello everyone: I defined a UDAF function while using the FLINK TABLE API to implement an aggregation operation. The task runs without problems when started from the beginning on the cluster, but it throws an exception when the task is restarted from a checkpoint. How can I resolve it?
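A common way out of this class of problem (sketched here in plain Scala, not Flink's actual TypeSerializerSnapshot API; `SumAcc`, the version byte, and the `Long.MinValue` default are all illustrative assumptions) is to version the accumulator's serialized layout, so a serializer for the new ACC can still read snapshots written for the old one:

```scala
import java.nio.ByteBuffer

// Hypothetical accumulator AFTER evolution; `max` is the newly added field.
case class SumAcc(sum: Long, count: Long, max: Long)

// V2 layout: one version byte followed by three longs.
def serializeV2(acc: SumAcc): Array[Byte] = {
  val buf = ByteBuffer.allocate(1 + 24)
  buf.put(2.toByte).putLong(acc.sum).putLong(acc.count).putLong(acc.max)
  buf.array()
}

// The reader dispatches on the version byte: V1 snapshots (written before
// `max` existed) get a default value instead of failing the restore.
def deserialize(bytes: Array[Byte]): SumAcc = {
  val buf = ByteBuffer.wrap(bytes)
  buf.get().toInt match {
    case 1 => SumAcc(buf.getLong(), buf.getLong(), max = Long.MinValue)
    case 2 => SumAcc(buf.getLong(), buf.getLong(), buf.getLong())
    case v => sys.error(s"unknown snapshot version $v")
  }
}
```

Without such versioning (or a restore from a fresh savepoint), the practical options are usually to keep the ACC class byte-identical to what the checkpoint was written with, or to restart the job without state.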