Hi all,
    I have loaded 10 million records in CarbonData format with the
carbon-spark plugin. I have a few questions:

Q1: I see there is only one XXX.carbondata file, containing 92 blocklets in
a single block. How can I split these blocklets into several blocks when
generating this file? Is there a config property for this?

Q2: There are always Segment_0 and Part0 in a table. How can I optimize
concurrency for reads? Are there any guidelines?

--
View this message in context:
http://apache-carbondata-mailing-list-archive.1130556.n5.nabble.com/Logic-about-file-storage-tp6458.html
Sent from the Apache CarbonData Mailing List archive at Nabble.com.
