Re: [Improvement] Carbon query gc problem

2016-12-20 Thread Kumar Vishal
> > …be used to store runtime temp data.

Re: [Improvement] Carbon query gc problem

2016-12-19 Thread An Lan
…com>:
> +1 Heap should not store data; it should be used to store runtime temp data.

Re: [Improvement] Carbon query gc problem

2016-12-19 Thread ZhuWilliam
+1 Heap should not store data; it should be used to store runtime temp data.

Re: [Improvement] Carbon query gc problem

2016-12-19 Thread Liang Chen
> …in heap, we can store this data in offheap and will clear it when scanning is finished for that query. Please vote and comment on the above proposal.
> -Regards
> Kumar Vishal

Re: [Improvement] Carbon query gc problem

2016-12-13 Thread Raghunandan S
+1 Good idea to avoid GC overhead. We need to be careful in clearing memory after use.

On Tue, 13 Dec 2016 at 2:17 PM, Kumar Vishal wrote:
> There is a lot of GC activity when Carbon processes a large number of records during a query, which impacts Carbon query…
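The caution about clearing memory is the crux of going off-heap: memory allocated outside the heap is never reclaimed by the garbage collector, so a missed release is a permanent leak for the process. A hypothetical usage pattern (not actual CarbonData code, using the OffHeapBlock sketch shown under the original proposal below) would tie the block's lifetime to the scan with try/finally:

// Hypothetical usage: guarantee release of the off-heap scratch area,
// even when the scan throws. runBlockletScan is a made-up stand-in.
OffHeapBlock block = new OffHeapBlock(64L * 1024 * 1024); // 64 MB scratch
try {
    runBlockletScan(block);  // fill and read intermediate scan results
} finally {
    block.free();            // deterministic release when scanning finishes
}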

[Improvement] Carbon query gc problem

2016-12-13 Thread Kumar Vishal
There is a lot of GC activity when Carbon processes a large number of records during a query, which impacts Carbon query performance. To solve this GC problem, which occurs when the query output is very large or when many records are processed, I would like to propose the solution below. Currently we are…
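For illustration only, here is a minimal Java sketch of what such an off-heap result holder could look like, assuming raw memory managed through sun.misc.Unsafe; this is a hypothetical class, not CarbonData's actual implementation:

import java.lang.reflect.Field;
import sun.misc.Unsafe;

// Hypothetical off-heap block: the payload lives outside the Java heap,
// so the garbage collector never traces, copies, or pauses on it.
public final class OffHeapBlock {
  private static final Unsafe UNSAFE = getUnsafe();
  private long address;

  public OffHeapBlock(long bytes) {
    // Raw allocation, invisible to GC: no young-gen churn, no old-gen copies.
    this.address = UNSAFE.allocateMemory(bytes);
  }

  public void putLong(long offset, long value) {
    UNSAFE.putLong(address + offset, value);
  }

  public long getLong(long offset) {
    return UNSAFE.getLong(address + offset);
  }

  // Must be called when scanning finishes for the query;
  // the collector will never reclaim this memory on its own.
  public void free() {
    if (address != 0) {
      UNSAFE.freeMemory(address);
      address = 0;
    }
  }

  private static Unsafe getUnsafe() {
    try {
      Field f = Unsafe.class.getDeclaredField("theUnsafe");
      f.setAccessible(true);
      return (Unsafe) f.get(null);
    } catch (ReflectiveOperationException e) {
      throw new AssertionError(e);
    }
  }
}

java.nio.ByteBuffer.allocateDirect would also keep the payload off-heap, but it leaves the release to the collection of the small wrapper object, so an explicit allocate/free pair matches the "clear when scanning is finished" requirement more directly.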