Ajantha

On Tue, Apr 14, 2020 at 6:06 PM Liang Chen wrote:

> OK, thank you for the feedback on this issue, let us look into it.
>
> Regards
> Liang
>
>
> Manhua Jiang wrote:
> > Hi All,
> > Recently, I found carbon over-uses cluster resources. Generally
Hi All,
Recently, I found that carbon over-uses cluster resources. Generally, the design of
the carbon workflow does not act like a common Spark task, which does only one small
piece of work in one thread; instead, the task has its own mind/logic.
For example:
1. launch carbon with --num-executors=1 but set
carbon.number.of.cores
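To illustrate the over-use being described, here is a toy calculation (the numbers are hypothetical, not from the original mail): if a task spawns its own internal thread pool of `carbon.number.of.cores` threads, the actual CPU demand can exceed what Spark allocated via --num-executors and executor cores.

```python
# Toy arithmetic sketch (hypothetical numbers): Spark accounts for the
# cores it allocated, but a task that spawns its own thread pool can
# run many more threads than that allocation.
spark_executors = 1
spark_cores_per_executor = 1
carbon_number_of_cores = 8  # illustrative per-task internal thread pool size

cores_allocated = spark_executors * spark_cores_per_executor
threads_actually_running = spark_executors * carbon_number_of_cores

oversubscription = threads_actually_running / cores_allocated
print(oversubscription)  # threads per allocated core
```

With these illustrative numbers the cluster sees 8 threads competing for 1 allocated core, which is the mismatch between Spark's resource accounting and carbon's internal parallelism.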
Congratulations Kunal!
Regards,
Manhua
On 2020/03/30 09:31:33, Indhumathi wrote:
> Congratulations Kunal!
>
> Regards,
> Indhumathi
>
+1
The Issues tab is easier to reach than JIRA, too.
On 2019/12/19 03:06:58, "恩爸" <441586...@qq.com> wrote:
> Hi community:
> I suggest the community open the 'Issues' tab in the carbondata GitHub page; we can
> use this feature to collect information from carbondata users, like this:
> https://github.com
Hi Jacky,
If we create the bloom filter at the blocklet level, it may be too similar to the bloom
datamap and would face the same problems the bloom datamap is facing, except that the
pruning runs on the executor side.
Page level is preferred since the page size is KNOWN, which lets us get rid of
considering how many
Hi Community,
The bloom datamap has been implemented for a while at the blocklet level.
One problem of the bloom datamap is that the pruning is
done on the driver side, and caching the bloom index data is expensive.
So here we are proposing to build the bloom filter inside the carbon
data file at the page level
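To make the page-level idea concrete, here is a minimal sketch (not CarbonData's actual implementation; all names are illustrative) of keeping one small bloom filter per page and consulting it at scan time, so pages that cannot contain the queried value are skipped on the executor side without any driver-side cache:

```python
# Sketch of per-page bloom filters for scan-time pruning.
# A bloom filter may give false positives (scan a page needlessly)
# but never false negatives (skip a page that holds the value).
import hashlib

class PageBloom:
    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits
        self.k = num_hashes
        self.bits = 0  # bit array packed into one int

    def _positions(self, value):
        # Derive k bit positions from k salted hashes of the value.
        for i in range(self.k):
            h = hashlib.md5(f"{i}:{value}".encode()).hexdigest()
            yield int(h, 16) % self.size

    def add(self, value):
        for p in self._positions(value):
            self.bits |= (1 << p)

    def might_contain(self, value):
        return all((self.bits >> p) & 1 for p in self._positions(value))

# Writer side: build one filter per page alongside the page data.
pages = [[1, 5, 9], [12, 20, 33], [40, 41, 55]]
blooms = []
for page in pages:
    b = PageBloom()
    for v in page:
        b.add(v)
    blooms.append(b)

# Reader side: prune pages before decoding them.
def pages_to_scan(value):
    return [i for i, b in enumerate(blooms) if b.might_contain(value)]
```

Since the filters live next to the pages in the data file, no index needs to be loaded and cached on the driver; the trade-off is a small per-page storage overhead and possible false positives, tunable via the filter size and hash count.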