Hi xuchuanyin, thanks for your reply.

The syntax for DataLoad using JSON is to support the LOAD DDL with .json
files.
Example:
LOAD DATA INPATH 'data.json' INTO TABLE tablename;

As per your suggestion, if we read the input files (.json) using a Spark
DataFrame, then we cannot handle bad records.
I tried loading a JSON file that has a bad record in one column using a
DataFrame, and the DataFrame returned null values for all the columns.
So Carbon does not know which column actually contains the bad record
while loading. Hence, this case cannot be handled through a DataFrame.
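
For reference, below is a minimal sketch (plain Spark in Scala, not
Carbon-specific) of the behaviour I observed. The file name, schema and
bad value are made up for illustration, and the exact behaviour can vary
between Spark versions:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types._

val spark = SparkSession.builder().appName("json-bad-record").getOrCreate()

// Suppose data.json contains a record whose "age" value is not an int:
//   {"name": "a", "age": "notANumber"}
val schema = StructType(Seq(
  StructField("name", StringType),
  StructField("age", IntegerType)))

// In the default PERMISSIVE mode the reader does not tell us which
// column failed; the whole row comes back as nulls.
val df = spark.read.schema(schema).json("data.json")
df.show()
// +----+----+
// |name| age|
// +----+----+
// |null|null|
// +----+----+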
