Can anyone share some advice on the cause of this exception? When I switch to 
the old planner, the same SQL runs fine. 
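
For reference, the planner switch amounts to roughly the following (a minimal 
sketch, assuming the job builds its table environment programmatically with the 
Java API; class and variable names are illustrative):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.java.StreamTableEnvironment;

public class PlannerSwitchSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Blink planner: the Flink 1.10 setup under which the TableException is thrown.
        EnvironmentSettings blinkSettings = EnvironmentSettings.newInstance()
                .useBlinkPlanner()
                .inStreamingMode()
                .build();
        StreamTableEnvironment blinkTableEnv = StreamTableEnvironment.create(env, blinkSettings);

        // Old/legacy planner: the setup under which the same SQL runs fine.
        EnvironmentSettings oldSettings = EnvironmentSettings.newInstance()
                .useOldPlanner()
                .inStreamingMode()
                .build();
        StreamTableEnvironment oldTableEnv = StreamTableEnvironment.create(env, oldSettings);
    }
}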

At 2020-02-13 16:07:18, "sunfulin" <sunfulin0...@163.com> wrote:

Hi, guys
When running the Flink SQL below, I hit the following exception: 
"org.apache.flink.table.api.TableException: UpsertStreamTableSink requires that 
Table has a full primary keys if it is updated". I am using the latest Flink 
1.10 release with the blink planner enabled. The same logic runs fine with the 
old planner in Flink 1.8.2. Is there a problem with the SQL usage, or could 
this be a bug? 

INSERT INTO ES6_ZHANGLE_OUTPUT(aggId, pageId, ts, expoCnt, clkCnt)
  SELECT aggId, pageId, ts_min AS ts,
    COUNT(CASE WHEN eventId = 'exposure' THEN 1 ELSE NULL END) AS expoCnt,
    COUNT(CASE WHEN eventId = 'click' THEN 1 ELSE NULL END) AS clkCnt
  FROM
  (
    SELECT
        'ZL_001' AS aggId,
        pageId,
        eventId,
        recvTime,
        ts2Date(recvTime) AS ts_min
    FROM kafka_zl_etrack_event_stream
    WHERE eventId IN ('exposure', 'click')
  ) AS t1
  GROUP BY aggId, pageId, ts_min
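
A rough sketch of how such an upsert Elasticsearch sink would be declared and 
used with the blink planner in Flink 1.10 (the field types, hosts, index, 
document type, and other connector properties below are illustrative 
assumptions, not the actual job configuration; the Kafka source table and the 
ts2Date UDF are assumed to be registered elsewhere):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.java.StreamTableEnvironment;

public class EsUpsertSinkSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .useBlinkPlanner()
                .inStreamingMode()
                .build();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);

        // Illustrative DDL for an Elasticsearch 6 sink in upsert mode
        // (schema and connector properties are placeholders).
        tEnv.sqlUpdate(
            "CREATE TABLE ES6_ZHANGLE_OUTPUT (\n" +
            "  aggId VARCHAR,\n" +
            "  pageId VARCHAR,\n" +
            "  ts VARCHAR,\n" +
            "  expoCnt BIGINT,\n" +
            "  clkCnt BIGINT\n" +
            ") WITH (\n" +
            "  'connector.type' = 'elasticsearch',\n" +
            "  'connector.version' = '6',\n" +
            "  'connector.hosts' = 'http://localhost:9200',\n" +
            "  'connector.index' = 'zhangle_output',\n" +
            "  'connector.document-type' = '_doc',\n" +
            "  'update-mode' = 'upsert',\n" +
            "  'format.type' = 'json'\n" +
            ")");

        // The INSERT INTO ... GROUP BY query above would then be submitted against this
        // sink, e.g.:
        //   tEnv.sqlUpdate(insertQuery);
        //   tEnv.execute("zhangle-aggregation");
        // For an upsert sink the planner has to derive a unique key for the updating
        // result (which would normally come from the GROUP BY columns aggId, pageId,
        // ts_min); the TableException quoted above is raised when it cannot.
    }
}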