Re: Spark scala/Hive scenario

2019-08-07 Thread Jörn Franke
You can use the map data type on the Hive table for the columns that are uncertain: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types#LanguageManualTypes-ComplexTypes However, perhaps you can share more concrete details, because there could also be other solutions.
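A minimal sketch of that suggestion, assuming a hypothetical table name (`feed_data`) and column names — the fixed fields are declared normally, and the uncertain ones go into a `MAP<STRING, STRING>` column:

```sql
-- Sketch only: table and column names are hypothetical.
CREATE TABLE feed_data (
  id          STRING,
  event_ts    TIMESTAMP,
  -- ... other well-known, stable columns ...
  extra_attrs MAP<STRING, STRING>   -- variable/uncertain fields land here
)
STORED AS ORC;

-- Individual values can then be read out of the map by key:
SELECT id, extra_attrs['some_new_field'] FROM feed_data;
```

The trade-off is that map values share one type (here all strings) and queries must know the keys, but the table schema never has to change when the feed adds fields.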

Spark scala/Hive scenario

2019-08-07 Thread anbutech
Hi All, I have a scenario in Spark Scala/Hive:

Day 1: I have a file with 5 columns which needs to be processed and loaded into Hive tables.
Day 2: The next day, the same feed (file) has 8 columns (additional fields) which need to be processed and loaded into Hive tables.

How do we approach this problem?
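One common way to handle the scenario above, sketched with hypothetical table and column names, is to evolve the Hive table in place when the feed gains fields — existing rows simply read back NULL for the newly added columns:

```sql
-- Sketch only: names are hypothetical.
-- Day 1: table matching the 5-column feed.
CREATE TABLE daily_feed (
  col1 STRING, col2 STRING, col3 STRING, col4 STRING, col5 STRING
)
STORED AS ORC;

-- Day 2: the feed now carries 3 extra fields; add them to the table.
-- Rows loaded on day 1 will return NULL for col6-col8.
ALTER TABLE daily_feed ADD COLUMNS (col6 STRING, col7 STRING, col8 STRING);
```

This works when new columns are only ever appended; if the incoming fields are genuinely unpredictable, the map-column approach suggested in the reply above avoids repeated DDL changes.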