Github user suez1224 commented on the issue:
https://github.com/apache/flink/pull/6201
@twalthr @fhueske sounds good to me. We can do that in a follow-up issue
for `from-source`, and we will not support `from-source` in this PR.
---
Github user twalthr commented on the issue:
https://github.com/apache/flink/pull/6201
I agree with @fhueske. Let's do `from-source` in a follow-up issue. I
will open a PR soon for FLINK-8558, which separates connector and format. For
this I also introduced a method `KafkaTableSource`
Github user fhueske commented on the issue:
https://github.com/apache/flink/pull/6201
Hi @suez1224, that sounds good overall. :-)
A few comments:
- I would not add a user-facing property `connector.support-timestamp`,
because a user chooses that by choosing the connector
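For illustration, rowtime handling is already expressed through the schema's rowtime descriptor rather than a connector-level flag. A sketch of what that could look like in a SQL Client YAML table definition (property layout assumed from the Flink 1.5-era descriptor format; exact keys may differ by version):

```yaml
tables:
  - name: MyKafkaTable
    type: source
    connector:
      type: kafka
      version: "0.11"
    schema:
      - name: ts
        type: TIMESTAMP
        rowtime:
          timestamps:
            type: from-field   # or from-source, once supported
            from: ts
          watermarks:
            type: periodic-bounded
            delay: 2000
```

The connector never needs a separate `support-timestamp` switch; whether `from-source` is valid follows from the chosen connector type and version.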
Github user suez1224 commented on the issue:
https://github.com/apache/flink/pull/6201
@fhueske @twalthr thanks for the comments. For `from-source`, the only
systems I know of are Kafka 0.10 and 0.11, which support writing a record along
with its timestamp. To support `from-source` in table sink
Github user fhueske commented on the issue:
https://github.com/apache/flink/pull/6201
Hi, I think timestamp fields of source-sink tables should be handled as
follows when emitting the table:
- `proc-time`: ignore
- `from-field`: simply write out the timestamp as part of the row
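The emission rule above can be sketched in plain Java (this is a hypothetical simplification, not Flink API: rows are modeled as `Object[]` and the timestamp kinds as an enum):

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of the sink-side rule discussed above: when emitting a table,
 * a proc-time attribute is ignored (dropped), while a from-field rowtime
 * is written out like any other column. Names and the row model are
 * hypothetical, not Flink classes.
 */
public class SinkTimestampHandling {

    enum TimestampKind { NONE, PROC_TIME, FROM_FIELD }

    /** Projects away proc-time attributes; keeps from-field rowtimes in place. */
    static Object[] projectForSink(Object[] row, TimestampKind[] kinds) {
        List<Object> out = new ArrayList<>();
        for (int i = 0; i < row.length; i++) {
            if (kinds[i] != TimestampKind.PROC_TIME) {
                out.add(row[i]); // a from-field timestamp is just a regular value
            }
        }
        return out.toArray();
    }

    public static void main(String[] args) {
        Object[] row = {"user-1", 1_530_000_000_000L, null /* proc-time slot */};
        TimestampKind[] kinds = {
            TimestampKind.NONE, TimestampKind.FROM_FIELD, TimestampKind.PROC_TIME
        };
        // proc-time column is dropped; the rowtime value survives as payload
        System.out.println(projectForSink(row, kinds).length); // prints 2
    }
}
```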
Github user twalthr commented on the issue:
https://github.com/apache/flink/pull/6201
@suez1224 Yes, sounds good to me. Only `from-field` timestamps matter right
now.
We should also think of the opposite of a timestamp extractor (a timestamp
inserter) for cases where the rowtime
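A minimal sketch of the "timestamp inserter" idea, i.e. the inverse of a timestamp extractor. The interface and names below are hypothetical, not Flink API: given a row and its event-time timestamp, the inserter decides where the timestamp goes on emission (back into a field for `from-field`, or into e.g. Kafka's record timestamp for a future `from-source`):

```java
import java.util.Arrays;

/**
 * Hypothetical counterpart to a timestamp extractor: instead of reading the
 * rowtime out of a record, it writes the rowtime back into the outgoing row.
 */
public class TimestampInserterSketch {

    interface TimestampInserter {
        Object[] insert(Object[] row, long timestampMillis);
    }

    /** Writes the timestamp back into a given field position (from-field case). */
    static TimestampInserter intoField(int pos) {
        return (row, ts) -> {
            Object[] copy = Arrays.copyOf(row, row.length);
            copy[pos] = ts;
            return copy;
        };
    }

    public static void main(String[] args) {
        TimestampInserter inserter = intoField(1);
        Object[] out = inserter.insert(new Object[]{"key", null, "value"}, 42L);
        System.out.println(out[1]); // prints 42
    }
}
```

A `from-source` variant would instead attach the timestamp to the transport-level record (e.g. the Kafka record timestamp) and leave the row untouched.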
Github user suez1224 commented on the issue:
https://github.com/apache/flink/pull/6201
@twalthr, for a sink-only table, I don't think the user needs to define any
rowtime on it, since it will never be used as a source. For a table that is both
a source and a sink, when registering it as a sink, I think