Hi, guys!

I think the integration should cover two aspects: write and read.
1. Write data from Flink to carbondata.
2. Read data from carbondata into a Flink table.

If you only care about writing to carbondata, you just need to
implement the write path.

A custom Flink source and sink need to be implemented, calling the
carbondata SDK API underneath:
http://carbondata.apache.org/sdk-guide.html
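
For reference, a minimal write sketch with the SDK (builder method names
taken from the sdk-guide above; the /tmp path, column names and the
"writtenBy" tag are just placeholders):

import org.apache.carbondata.core.metadata.datatype.DataTypes;
import org.apache.carbondata.sdk.file.CarbonWriter;
import org.apache.carbondata.sdk.file.Field;
import org.apache.carbondata.sdk.file.Schema;

public class SdkWriteSketch {
  public static void main(String[] args) throws Exception {
    // Schema of the target carbon files: one string and one int column.
    Field[] fields = new Field[] {
        new Field("name", DataTypes.STRING),
        new Field("age", DataTypes.INT)
    };

    // Build a writer that accepts CSV-style String[] rows.
    CarbonWriter writer = CarbonWriter.builder()
        .outputPath("/tmp/carbon-out")      // placeholder output path
        .withCsvInput(new Schema(fields))
        .writtenBy("flink-carbon-sketch")
        .build();

    writer.write(new String[] {"alice", "30"});
    writer.write(new String[] {"bob", "25"});
    writer.close();  // flushes and commits the carbon files
  }
}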

Reference for implementing custom Flink sources and sinks:
https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sourceSinks.html
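
One possible way to glue the two together is a RichSinkFunction that
owns a CarbonWriter. This is only a sketch, not a design proposal:
checkpointing/exactly-once is ignored, every column is written as
STRING, and in practice each parallel subtask should get its own
output path (e.g. suffixed with the subtask index).

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
import org.apache.flink.types.Row;

import org.apache.carbondata.core.metadata.datatype.DataTypes;
import org.apache.carbondata.sdk.file.CarbonWriter;
import org.apache.carbondata.sdk.file.Field;
import org.apache.carbondata.sdk.file.Schema;

// Sketch: each sink instance owns one CarbonWriter and converts Flink
// Rows into the String[] rows the SDK's CSV input expects.
public class CarbonSinkFunction extends RichSinkFunction<Row> {

  private final String outputPath;
  private final String[] columnNames;
  private transient CarbonWriter writer;

  public CarbonSinkFunction(String outputPath, String[] columnNames) {
    this.outputPath = outputPath;
    this.columnNames = columnNames;
  }

  @Override
  public void open(Configuration parameters) throws Exception {
    // All STRING to keep the sketch small; a real sink would apply
    // a proper type mapping (see the mapping sketch further down).
    Field[] fields = new Field[columnNames.length];
    for (int i = 0; i < columnNames.length; i++) {
      fields[i] = new Field(columnNames[i], DataTypes.STRING);
    }
    writer = CarbonWriter.builder()
        .outputPath(outputPath)
        .withCsvInput(new Schema(fields))
        .writtenBy("flink-carbon-sink")
        .build();
  }

  @Override
  public void invoke(Row row, Context context) throws Exception {
    String[] values = new String[row.getArity()];
    for (int i = 0; i < values.length; i++) {
      Object v = row.getField(i);
      values[i] = v == null ? null : v.toString();
    }
    writer.write(values);
  }

  @Override
  public void close() throws Exception {
    if (writer != null) {
      writer.close();  // flush and commit the carbon files
    }
  }
}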

Type mapping between Flink table types and carbondata types also needs
to be considered.
Flink table data types:
https://ci.apache.org/projects/flink/flink-docs-master/dev/table/tableApi.html#data-types

carbondata data types:
http://carbondata.apache.org/supported-data-types-in-carbondata.html
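
A sketch of one direction of that mapping (Flink TypeInformation ->
carbondata DataType; the listed pairs follow the two pages above,
anything beyond them is an assumption):

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;

import org.apache.carbondata.core.metadata.datatype.DataType;
import org.apache.carbondata.core.metadata.datatype.DataTypes;

// Sketch: map a handful of Flink table types onto carbondata types.
// Unsupported types fail fast so mismatches surface early.
public final class TypeMapping {

  public static DataType toCarbonType(TypeInformation<?> flinkType) {
    if (flinkType.equals(Types.STRING)) {
      return DataTypes.STRING;
    } else if (flinkType.equals(Types.BOOLEAN)) {
      return DataTypes.BOOLEAN;
    } else if (flinkType.equals(Types.INT)) {
      return DataTypes.INT;
    } else if (flinkType.equals(Types.LONG)) {
      return DataTypes.LONG;
    } else if (flinkType.equals(Types.DOUBLE)) {
      return DataTypes.DOUBLE;
    } else if (flinkType.equals(Types.SQL_TIMESTAMP)) {
      return DataTypes.TIMESTAMP;
    } else if (flinkType.equals(Types.SQL_DATE)) {
      return DataTypes.DATE;
    }
    throw new IllegalArgumentException(
        "No carbondata mapping for " + flinkType);
  }

  private TypeMapping() {}
}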

Table format support
carbondata table formats (from the streaming guide): CSV, JSON, text, Parquet, Kafka, socket
http://carbondata.apache.org/streaming-guide.html
Flink table formats: CSV, JSON, Avro
https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connect.html#table-formats
The overlap is CSV and JSON, so those are the natural formats to support first.
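
Putting the pieces together, a hedged end-to-end sketch using the plain
DataStream API (CSV parsed by hand to avoid pinning down a format
descriptor API; CarbonSinkFunction is the sketch from earlier in this
mail, and the paths are placeholders):

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.types.Row;

public class CsvToCarbonJob {
  public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment();

    // Read CSV lines and split them into two-column Rows.
    env.readTextFile("/tmp/input.csv")      // placeholder input path
        .map(line -> {
          String[] parts = line.split(",", -1);
          return Row.of(parts[0], parts[1]);
        })
        .returns(Types.ROW(Types.STRING, Types.STRING))
        .addSink(new CarbonSinkFunction("/tmp/carbon-out",
            new String[] {"name", "age"}));

    env.execute("csv-to-carbondata");
  }
}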


