Hi Jacky,
Yes, that is exactly my purpose.
If there is one big segment with large delete and update delta files, and users
want to eliminate those delta files, they currently have to find another segment
to compact with the big one. That operation is more time consuming than the
operation I am proposing.
I guess your intention is to rewrite a single segment by merging its base file
and delta files, to improve the query performance of that segment, right? I think
this is doable, but note that the operation may be time consuming since it
rewrites the whole segment.
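The single-segment rewrite discussed above can be sketched with a toy model. This is not CarbonData's actual implementation; the row IDs, values, and the `rewrite` helper are all made up for illustration. The idea is simply: apply the update delta, then the delete delta, to the base rows, producing one merged segment with no remaining delta files.

```java
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;

public class SegmentRewriteSketch {
    // Conceptual model only: merge a segment's base rows with its update and
    // delete deltas to produce a rewritten segment that carries no delta files.
    static Map<Integer, String> rewrite(Map<Integer, String> base,
                                        Map<Integer, String> updateDelta,
                                        Set<Integer> deleteDelta) {
        Map<Integer, String> merged = new TreeMap<>(base);
        merged.putAll(updateDelta);             // apply row updates
        merged.keySet().removeAll(deleteDelta); // apply row deletes
        return merged;
    }

    public static void main(String[] args) {
        Map<Integer, String> base = new TreeMap<>(Map.of(1, "a", 2, "b", 3, "c"));
        // Row 2 was updated, row 3 was deleted since the segment was written.
        System.out.println(rewrite(base, Map.of(2, "B"), Set.of(3)));
        // prints {1=a, 2=B}
    }
}
```

Compacting one big segment with another segment would do the same merge but over the combined row sets of both segments, which is why it costs more than rewriting the single segment in place.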
Regards,
Jacky
> On 30 Mar 2019, at
Hi Jocean,
The timestamp format that you specify via:
CarbonProperties.getInstance()
    .addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT, "/MM/dd HH:mm:ss")
only controls how the input is parsed in that format. The output displayed when
you query will always be in the default format.
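This input-versus-output distinction can be illustrated with plain `SimpleDateFormat`, standing in for CarbonData itself. The input pattern and the output pattern `yyyy-MM-dd HH:mm:ss` (assumed here to be the default display format) are illustrative only:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class TimestampFormatDemo {
    public static void main(String[] args) throws ParseException {
        // Pattern used to parse incoming data (what CARBON_TIMESTAMP_FORMAT
        // would control; the pattern here is an example, not the real config).
        SimpleDateFormat input = new SimpleDateFormat("yyyy/MM/dd HH:mm:ss");
        // Assumed default pattern used when displaying query results.
        SimpleDateFormat output = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");

        Date parsed = input.parse("2019/03/30 09:30:00");
        System.out.println(output.format(parsed));
        // prints 2019-03-30 09:30:00
    }
}
```

So even though the value was loaded with slashes, it is rendered back in the default dashed format at query time.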
Hi Akash R,
If the MV datamap had to support Spark 2.1, we would have to modify the MV
internal code. But the MV datamap framework is independent; it is decoupled
from Spark 2.1. If we decide to remove 2.1 support, MV will still work with
Spark 2.2 or above without modification.