Hi Akash,
It is good to have this feature, and I expect to get a good understanding of the design and solution from the design document. Please clarify the points below.
(1) How are we planning to support lazy loads? Is there a one-to-one mapping between the segments of the main table and the datamap?
Hi Akash, please note that if the index datamap supports lazy build, there could be a chance that only some segments have corresponding index data while others do not, which means that carbondata should handle this situation during query.
We cannot just discard the existing index, otherwise the index data that has already been built would be wasted.
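To make that concrete, here is a minimal sketch of the kind of per-segment handling I mean: prune with the index where it exists, fall back to a full scan where it does not. Segment, IndexStore, and the method names are invented for illustration, not the actual carbondata API:

  // Hypothetical sketch: use the index for segments that have it,
  // fall back to a full scan for segments not yet built (lazy build).
  case class Segment(id: String, allBlocks: Seq[String])

  trait IndexStore {
    // Some(matching blocks) if index data exists for this segment,
    // None if the index has not been built for it yet.
    def pruneIfBuilt(segment: Segment, filter: String): Option[Seq[String]]
  }

  def selectBlocks(segments: Seq[Segment],
                   index: IndexStore,
                   filter: String): Seq[String] =
    segments.flatMap { seg =>
      index.pruneIfBuilt(seg, filter) match {
        case Some(pruned) => pruned        // indexed: scan only pruned blocks
        case None         => seg.allBlocks // no index yet: scan the whole segment
      }
    }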
Hi,
Currently in carbondata we have datamaps like preaggregate, lucene, bloom, and mv, and we have lazy and non-lazy methods to load data into datamaps. Lazy load is not allowed for datamaps like preaggregate, lucene, and bloom, but it is allowed for the mv datamap. In lazy load of the mv datamap, for every rebuild (triggered by the REBUILD DATAMAP command), data is loaded into the datamap.
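For reference, deferred (lazy) build for an mv datamap looks roughly like the following; the exact DDL syntax can differ between versions, and the table and datamap names here are invented:

  // Create an mv datamap with lazy build: data is loaded into it only
  // when REBUILD DATAMAP runs, not on every load to the main table.
  spark.sql(
    """CREATE DATAMAP sales_agg
      |USING 'mv'
      |WITH DEFERRED REBUILD
      |AS SELECT country, sum(amount) FROM sales GROUP BY country
    """.stripMargin)

  // Trigger the lazy load explicitly; until this runs, queries are not
  // rewritten to use the datamap.
  spark.sql("REBUILD DATAMAP sales_agg")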
Hi litao,
The sparkSql function calls the withProfiler method, and whenever the QueryExecution object and SQLStart are created, it calls the generateDF function, which creates a new Dataset object. So once the QueryExecution object is made from the logical plan, we call assertAnalyzed(), which executes the analysis of the logical plan (and fails fast if the plan cannot be resolved).
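A condensed sketch of that flow, simplified for illustration; generateDF's real signature and the surrounding profiler plumbing are omitted, so treat this as a sketch rather than the actual source:

  package org.apache.spark.sql // Dataset's constructor is private[sql]

  import org.apache.spark.sql.catalyst.encoders.RowEncoder
  import org.apache.spark.sql.execution.QueryExecution

  object SqlFlowSketch {
    // Parse the SQL text into a logical plan, build a QueryExecution
    // from it, force analysis, then wrap the result in a Dataset.
    def generateDF(spark: SparkSession, sqlText: String): DataFrame = {
      val plan = spark.sessionState.sqlParser.parsePlan(sqlText)
      val qe: QueryExecution = spark.sessionState.executePlan(plan)
      qe.assertAnalyzed() // runs the analyzer; throws on unresolved plans
      new Dataset[Row](spark, qe, RowEncoder(qe.analyzed.schema))
    }
  }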
Nice feature. I still have some questions:
1. What's the impact on the SET carbon.input.segments command (see the sketch below)? The Index Cache Server may make such a query slower.
2. What's your plan for this feature?
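For reference, the command in question pins a query to specific segments, e.g. (database, table, and segment ids invented):

  // Query only segments 1 and 3 of default.sales, then reset to all.
  spark.sql("SET carbon.input.segments.default.sales = 1,3")
  spark.sql("SELECT count(*) FROM default.sales").show()
  spark.sql("SET carbon.input.segments.default.sales = *")

The concern is whether going through the Index Cache Server adds overhead even for such an explicitly restricted query.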