You can use the map data type in the Hive table for the columns that are
uncertain:
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types#LanguageManualTypes-ComplexTypes
However, perhaps you can share more concrete details, because there could
also be other solutions.
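As a minimal sketch of the map-column idea: keep the known day-1 columns as fixed fields and fold any extra fields into a `map<string,string>` column, e.g. a Hive DDL like `CREATE TABLE feed (id string, ..., extras map<string,string>)`. The Scala below shows the parsing side only; the column names and keys (`extra_0` etc.) are hypothetical placeholders, not from the original thread.

```scala
object VariableColumns {
  // Assumed day-1 schema; replace with your real column names.
  val knownCols: Seq[String] = Seq("id", "name", "age", "city", "load_date")

  // Parse one delimited line: the first five fields map to the known
  // columns, and any additional fields land in an "extras" map that can
  // be loaded into a Hive map<string,string> column.
  def parseLine(line: String, delim: String = ","): (Map[String, String], Map[String, String]) = {
    val fields = line.split(delim, -1)
    val fixed  = knownCols.zip(fields).toMap
    val extras = fields.drop(knownCols.size).zipWithIndex
      .map { case (v, i) => s"extra_$i" -> v }
      .toMap
    (fixed, extras)
  }
}
```

With this layout, a day-2 file with three extra fields needs no table change: the new fields simply appear as entries in the `extras` map.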
> On 07.08.2
Hi All,
I have a scenario in Spark (Scala)/Hive:
Day 1:
I have a file with 5 columns that needs to be processed and loaded into
Hive tables.
Day 2:
The next day, the same feed (file) has 8 columns (additional fields) that
need to be processed and loaded into Hive tables.
How do we approach this problem?