Hi

Good idea, thank you for starting this discussion.

I agree with Ravi's comments; we need to double-check some limitations after
introducing the feature.

Flink and Kafka integration can be discussed later. 
For using the SDK to write new data to an existing CarbonData table, some
questions:
1. How do we ensure the new data uses the same index, dictionary, and other
policies as the existing table?
2. Can you please help me understand this proposal further: what valuable
scenarios require this feature?

------------------------------------------------------------------------------------------------
After having online segments, one can use this feature to implement
Apache Flink-CarbonData integration, or Apache
Kafka Streams-CarbonData integration, or just use the SDK to write new data
to an existing CarbonData table; the integration level can be the same as
the current Spark-CarbonData integration.

Regards
Liang


