Hi 

1. Using your current test environment (CarbonData 1.0 + Spark 1.6), please
divide the 2 billion rows of data into 4 pieces (0.5 billion each) and load
the data again, one piece at a time (a sketch follows below).
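
A minimal sketch of loading in 4 pieces from a Spark 1.6 driver with
CarbonContext. The store path, HDFS paths and the table name test_table are
assumptions here; substitute the ones from your environment, and adjust the
split file names to however you divide the data.

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.CarbonContext

    val sc = new SparkContext(new SparkConf().setAppName("CarbonLoadInPieces"))
    // Second argument is the CarbonData store path (assumed location).
    val cc = new CarbonContext(sc, "hdfs://namenode:9000/carbon/store")

    // Load the 2 billion rows as 4 pieces of ~0.5 billion each,
    // issuing one LOAD DATA statement per piece.
    (1 to 4).foreach { i =>
      cc.sql(s"LOAD DATA INPATH 'hdfs://namenode:9000/data/part$i.csv' " +
             s"INTO TABLE test_table")
    }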

2. For CarbonData 1.0 + Spark 1.6 with Kettle-based data loading, please
configure the 3 parameters below in carbon.properties (note: please copy the
latest carbon.properties to all nodes).

carbon.graph.rowset.size=10000   (the default is 100000; setting it to 1/10
reduces the rowset size exchanged between the data load graph steps)
carbon.number.of.cores.while.loading=5   (because your machine has 5 cores)
carbon.sort.size=50000   (the default is 500000; setting it to 1/10 reduces
the number of temporary intermediate files)
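
As a rough illustration, the same three values can also be set
programmatically through the CarbonProperties utility before triggering the
load. This is only a sketch and only affects the JVM it runs in; for
Kettle-based loading, the carbon.properties file copied to all nodes (as
described above) is still what takes effect cluster-wide.

    import org.apache.carbondata.core.util.CarbonProperties

    val props = CarbonProperties.getInstance()
    props.addProperty("carbon.graph.rowset.size", "10000")          // 1/10 of the default 100000
    props.addProperty("carbon.number.of.cores.while.loading", "5")  // matches the 5 cores per machine
    props.addProperty("carbon.sort.size", "50000")                  // 1/10 of the default 500000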


Regards
Liang


