Spark 2.1.1, CarbonData 1.1.1
scala> cc.sql("select count(*) from public.prod_offer_inst_cab").show;
18/04/12 11:47:17 AUDIT CarbonMetaStoreFactory: [hdd340][ip_crm][Thread-1]File based carbon metastore is enabled
java.lang.NullPointerException
  at org.apache.carbondata.core.mutate.Carbo
Hi,
After I load data from a Hive table in Parquet format and insert it into a CarbonData table, executing a simple query through the Spark SQL Thrift server shows the following error:
java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: java.io
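
For reference, a minimal sketch of the workflow being described, assuming a CarbonData session named cc as in the first post; the table and column names here are hypothetical, not taken from the report:

// Hypothetical table and column names, for illustration only.
cc.sql("""
  CREATE TABLE IF NOT EXISTS carbon_target (
    id     BIGINT,
    name   STRING,
    amount DOUBLE
  ) STORED BY 'carbondata'
""")

// Insert from an existing Hive table stored as Parquet.
cc.sql("INSERT INTO TABLE carbon_target SELECT id, name, amount FROM hive_parquet_source")

// A simple query such as this, issued through the Spark SQL Thrift
// server, then fails with the ExecutionException above.
cc.sql("SELECT count(*) FROM carbon_target").show()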
Hi community,
I am using CarbonData 1.3 + Spark 2.1, and I have found a potential bottleneck when using CarbonData. As far as I know, CarbonData loads all of the carbonindex files and turns them into DataMap entries (or SegmentIndex in earlier versions), which contain the start key, end key, and min/max values of each column. If I h
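
To illustrate the mechanism being described, here is a toy Scala sketch of min/max based segment pruning. This is not CarbonData's actual API; all class and function names are invented. The idea is that a filter value can skip any segment whose recorded min/max range cannot contain it:

// Toy model of per-segment min/max statistics; not CarbonData's real classes.
case class ColumnStats(min: Long, max: Long)
case class SegmentIndex(segmentId: String, stats: Map[String, ColumnStats])

// Keep only segments whose recorded range could contain `value` for `col`.
// Segments with no stats for the column cannot be pruned safely, so they stay.
def pruneSegments(indexes: Seq[SegmentIndex], col: String, value: Long): Seq[SegmentIndex] =
  indexes.filter { idx =>
    idx.stats.get(col).forall(s => value >= s.min && value <= s.max)
  }

Building such an index requires reading every carbonindex file up front, which is the cost the post identifies as a potential bottleneck.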