@Wenchen Fan Got your explanation, thanks!

My understanding is that even if we create Spark tables with Spark's
native data sources, the metadata for those tables is, by default,
still stored in the Hive metastore. As a consequence, a Hive upgrade
can potentially affect Spark tables. For example, depending on the
severity of the changes, the Hive metastore schema might change, which
could require Spark code to be updated to handle the new
representation of table metadata. Is this assertion correct?

Thanks

Mich Talebzadeh,

Technologist | Architect | Data Engineer | Generative AI | FinCrime

London
United Kingdom




 https://en.everybodywiki.com/Mich_Talebzadeh



Disclaimer: The information provided is correct to the best of my
knowledge but of course cannot be guaranteed. It is essential to note
that, as with any advice, "one test result is worth one-thousand
expert opinions" (Werner Von Braun).
