Re: [proposal] Parallelize block pruning of default datamap in driver for filter query processing.

2018-11-23 Thread xubo245
+1. Will parallelizing block pruning affect the SDK/CSDK reader? Please check. SDK and CSDK need to keep the carbon files' sequence/order. -- Sent from: http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/
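The ordering concern above can be sketched as follows: prune blocks in parallel, but record survival by index so the pruned list is emitted in the original file order. This is a minimal illustration only; `OrderPreservingPrune` and `survives` are hypothetical names, not CarbonData APIs.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.stream.IntStream;

public class OrderPreservingPrune {
    // Placeholder for a real filter-based pruning check.
    static boolean survives(String file) {
        return !file.contains("skip");
    }

    // Prune in parallel, but emit survivors in the original file order
    // so downstream readers (e.g. SDK/CSDK) see a stable sequence.
    static List<String> prune(List<String> files) {
        boolean[] keep = new boolean[files.size()];
        // Each index is written by exactly one task, so this is race-free.
        IntStream.range(0, files.size()).parallel()
                 .forEach(i -> keep[i] = survives(files.get(i)));
        List<String> out = new ArrayList<>();
        for (int i = 0; i < files.size(); i++) {
            if (keep[i]) out.add(files.get(i));
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> files = Arrays.asList("part-0", "part-skip-1", "part-2", "part-3");
        System.out.println(prune(files)); // [part-0, part-2, part-3]
    }
}
```

The parallel phase only computes a per-index verdict; the sequential collection pass is what guarantees the output order.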

Re: [proposal] Parallelize block pruning of default datamap in driver for filter query processing.

2018-11-22 Thread Ravindra Pesala
+1. It will be helpful for pruning millions of data files in less time. Please try to generalize it for all datamaps. Thanks & Regards, Ravindra. On Fri, 23 Nov 2018 at 10:24, Ajantha Bhat wrote: > @xuchuanyin > Yes, I will be handling this for all types of datamap pruning in the same > flow when I

Re: [proposal] Parallelize block pruning of default datamap in driver for filter query processing.

2018-11-22 Thread Ajantha Bhat
@xuchuanyin Yes, I will be handling this for all types of datamap pruning in the same flow once I am done with the default datamap's implementation and testing. Thanks, Ajantha. On Fri, Nov 23, 2018 at 6:36 AM xuchuanyin wrote: > 'Parallelize pruning' is in my plan long time ago, nice to see your

Re: [proposal] Parallelize block pruning of default datamap in driver for filter query processing.

2018-11-22 Thread xuchuanyin
'Parallelize pruning' has been in my plan for a long time; nice to see your proposal here. While implementing this, I'd like you to make it common, that is to say, not only the default datamap but also the other index datamaps can use parallelized pruning. -- Sent from: http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/

[proposal] Parallelize block pruning of default datamap in driver for filter query processing.

2018-11-20 Thread Ajantha Bhat
Hi all, I want to propose *"Parallelize block pruning of default datamap in driver for filter query processing"*. *Background:* We do block pruning for filter queries at the driver side. In real-world big data scenarios, one carbon table can have millions of carbon files. It is currently o
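The proposal could be sketched as follows: split the block list into chunks and prune each chunk on a driver-side thread pool, so millions of files are checked concurrently instead of one by one. This is only an illustrative sketch; `ParallelPruneSketch` and `survivesFilter` are hypothetical names (a real implementation would evaluate min/max indexes per block), not the actual CarbonData pruning flow.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Collectors;

public class ParallelPruneSketch {
    // Hypothetical stand-in for a filter-based block pruning check.
    static boolean survivesFilter(String blockFile) {
        return !blockFile.contains("drop");
    }

    // Chunk the block list and prune each chunk on a thread pool.
    // Futures are collected in submission order, so the pruned list
    // keeps the original file order.
    static List<String> prune(List<String> blockFiles, int parallelism) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(parallelism);
        try {
            int chunk = (blockFiles.size() + parallelism - 1) / parallelism;
            List<Future<List<String>>> futures = new ArrayList<>();
            for (int start = 0; start < blockFiles.size(); start += chunk) {
                List<String> slice =
                    blockFiles.subList(start, Math.min(start + chunk, blockFiles.size()));
                futures.add(pool.submit(() ->
                    slice.stream().filter(ParallelPruneSketch::survivesFilter)
                         .collect(Collectors.toList())));
            }
            List<String> pruned = new ArrayList<>();
            for (Future<List<String>> f : futures) {
                pruned.addAll(f.get());
            }
            return pruned;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        List<String> blocks =
            Arrays.asList("blk-0", "blk-drop-1", "blk-2", "blk-3", "blk-drop-4", "blk-5");
        System.out.println(prune(blocks, 3)); // [blk-0, blk-2, blk-3, blk-5]
    }
}
```

Chunking (rather than one task per file) keeps the task-submission overhead small when the table has millions of files, while still using all configured driver threads.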