We are loading Parquet data as temp tables, but we are wondering whether
there is a way to add a partition to the data without going through Hive
(we still want to use Spark's Parquet serde rather than Hive's). The data
looks like:

/date1/file1, /date1/file2, ..., /date2/file1, /date2/file2, ..., /daten/filem

and we are loading it like:

val parquetFileRDD = sqlContext.parquetFile(<comma-separated parquet file paths>)

but it would be nice to be able to add a partition and provide the date as
a query parameter.
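One possible approach (a sketch, not from the original post): if the directories can be renamed to a Hive-style key=value layout, e.g. /data/date=2015-06-01/file1, then Spark SQL's partition discovery (available in recent 1.x releases) exposes the directory key as a virtual column without involving Hive at all, and filters on that column prune the directories that are read. The paths and table name below are hypothetical.

    import org.apache.spark.sql.SQLContext

    // Assumes data laid out as /data/date=<value>/part-*.parquet (hypothetical path).
    val sqlContext = new SQLContext(sc)

    // Partition discovery turns the directory key into a virtual column "date".
    val df = sqlContext.read.parquet("/data")
    df.registerTempTable("events")

    // A filter on "date" prunes partitions, so only matching directories are scanned.
    sqlContext.sql("SELECT * FROM events WHERE date = '2015-06-01'").show()

This still uses Spark's native Parquet support end to end; Hive is not in the read path.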
