hello mohdshahidkhan,
sorry for the writing mistake.
Step 3: xx is the database, and I ran:
val cc = SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession("hdfs://ns1/user")
Step 4: the two tables' data directories are:
/user/xx/prod_inst_cab_backup
/user/xx/prod_inst_cab
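The two directories in step 4 come from a plain HDFS copy of the table folder. A minimal sketch, assuming the ns1 namenode and the xx database paths from this thread (the hdfs command itself is left as a comment since it needs the cluster):

```shell
#!/bin/sh
# Derive the backup path used in this thread from the table path.
TABLE_DIR="/user/xx/prod_inst_cab"
BACKUP_DIR="${TABLE_DIR}_backup"
echo "$BACKUP_DIR"   # /user/xx/prod_inst_cab_backup

# On the cluster, the actual copy would be something like:
# hdfs dfs -cp "hdfs://ns1${TABLE_DIR}" "hdfs://ns1${BACKUP_DIR}"
```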
Hi Sunerhan,
If xx is the database, step 3 requires a minor change:
val cc = SparkSession.builder().config(sc.getConf).
getOrCreateCarbonSession("hdfs://ns1/user/")
Step 4:
The backup table data should be copied to the database location.
For more details, please refer below.
*https:
Hi Michael,
I hope the details below help you.
1. How should I configure Carbon to get performance?
Please refer to the link below to optimize data loading performance in Carbon.
https://github.com/apache/carbondata/blob/master/docs/useful-tips-on-carbondata.md#configuration-for-optimizing-data-loading
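As a starting point, a few of the load-related settings that the linked guide discusses go into carbon.properties. This is only a sketch; the values below are illustrative, not tuned recommendations:

```properties
# carbon.properties - illustrative values; tune per the linked guide
# Number of cores used while loading data
carbon.number.of.cores.while.loading=6
# Record count to sort at once and spill to an intermediate sort temp file
carbon.sort.size=100000
# Use unsafe (off-heap) memory during sort to reduce GC pressure
enable.unsafe.sort=true
# Use YARN local directories for intermediate sort temp files
carbon.use.local.dir=true
```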
Hi Liang,
Many thanks for your answer!
It has worked in this way.
I am wondering now how I should configure Carbon to get performance
comparable with Parquet.
Now I am using default properties, actually no properties at all.
I have tried saving one table to carbon, and it took ages comparable to
hello,
I have a table created and loaded under carbon 1.3.0, and I'm upgrading
to carbon 1.3.1 using refresh table.
Following are my steps:
1. Copy the old table's HDFS location to a new directory:
/user/xx/prod_inst_cab --> /user/xx/prod_inst_cab_backup
2. hive -e "drop table xx.pro