> On Dec 7, 2018, at 11:05 PM, ravipesala <ravi.pes...@gmail.com> wrote:
> 
> Hi Jacky,
> 
> It's a good idea to support writing transactional tables from the SDK. But we need
> to add the following limitations as well:
> 1. It can only work on file systems which can take an append lock, like HDFS.
Likun: Yes, since we need to overwrite the table status file, we need file locking.
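
A minimal sketch of the locking idea, using only the plain Hadoop FileSystem API (the real code would go through CarbonData's lock framework; the lock file name and table path below are made up):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class TableStatusLockSketch {
      // Take an exclusive lock file before rewriting tablestatus, release it afterwards.
      static void updateTableStatus(Path tablePath) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path lockFile = new Path(tablePath, "Metadata/tablestatus.lock"); // hypothetical name
        // create(path, overwrite = false) fails if the file already exists -> exclusive lock
        try (FSDataOutputStream lock = fs.create(lockFile, false)) {
          // read tablestatus, add the new SDK segment entry, overwrite tablestatus
        } finally {
          fs.delete(lockFile, false); // release the lock
        }
      }
    }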

> 2. Compaction and delete segment cannot be done on online segments until they
> are converted to transactional segments.
Likun: Compaction and other data management work will still be done by the
CarbonSession application in a standard Spark cluster.
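
To illustrate, the driver application keeps issuing the usual commands (table name "sdk_target" is made up; this assumes a SparkSession with the CarbonData extensions enabled):

    import org.apache.spark.sql.SparkSession;

    public class DataManagementSketch {
      // Compaction and delete-segment stay with the Spark/Carbon driver application;
      // the SDK writer never triggers them.
      static void manage(SparkSession session) {
        session.sql("ALTER TABLE sdk_target COMPACT 'MINOR'");               // compaction
        session.sql("DELETE FROM TABLE sdk_target WHERE SEGMENT.ID IN (3)"); // delete a segment
      }
    }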

> 3. The SDK writer should be responsible for adding the complete carbondata file to
> the online segment once the writing is done; it should not add any half-cooked
> data.
Likun: Yes, I have mentioned this in the design doc.
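
Roughly, the SDK application writes a whole segment and only after close() succeeds should the new entry be committed to tablestatus, so readers never see half-written files. A sketch based on the existing SDK writer API (schema, names and paths are made up; exact builder calls may differ by version):

    import org.apache.carbondata.core.metadata.datatype.DataTypes;
    import org.apache.carbondata.sdk.file.CarbonWriter;
    import org.apache.carbondata.sdk.file.Field;
    import org.apache.carbondata.sdk.file.Schema;

    public class SdkSegmentWriteSketch {
      static void writeOneSegment(String segmentPath) throws Exception {
        Schema schema = new Schema(new Field[] {
            new Field("name", DataTypes.STRING),
            new Field("age", DataTypes.INT)});
        CarbonWriter writer = CarbonWriter.builder()
            .outputPath(segmentPath)
            .withCsvInput(schema)
            .writtenBy("SdkSegmentWriteSketch")
            .build();
        writer.write(new String[] {"robot", "1"});  // complete rows only
        writer.close();  // commit the segment to tablestatus only after this returns
      }
    }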

> 
> Also, as we are trying to update the tablestatus from other modules
> like the SDK, we had better consider the segment interface first. Please go through
> the JIRA:
> https://issues.apache.org/jira/projects/CARBONDATA/issues/CARBONDATA-2827
> 
> 
> Regards,
> Ravindra
> 
