Yes, Spark has a shim layer to support all Hive versions. It shouldn't be
an issue, as many users already create native Spark data source tables
today by explicitly putting the `USING` clause in their CREATE TABLE
statements.
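
For example, a table created like this (table and column names are just
illustrative) uses a native Spark data source rather than a Hive SerDe:

  CREATE TABLE events (id BIGINT, ts TIMESTAMP) USING parquet;

Without a USING (or STORED AS) clause, whether you get a native or a
Hive SerDe table depends on spark.sql.legacy.createHiveTableByDefault.
The shim layer is what lets Spark talk to different metastore versions,
selected via spark.sql.hive.metastore.version and
spark.sql.hive.metastore.jars.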

On Wed, May 1, 2024 at 12:56 AM Mich Talebzadeh <mich.talebza...@gmail.com>
wrote:

> @Wenchen Fan Got your explanation, thanks!
>
> My understanding is that even if we create Spark tables using Spark's
> native data sources, by default, the metadata about these tables will
> be stored in the Hive metastore. As a consequence, a Hive upgrade can
> potentially affect Spark tables. For example, depending on the
> severity of the changes, the Hive metastore schema might change, which
> could require Spark code to be updated to handle these changes in how
> table metadata is represented. Is this assertion correct?
>
> Thanks
>
> Mich Talebzadeh,
>
> Technologist | Architect | Data Engineer | Generative AI | FinCrime
>
> London
> United Kingdom
>
>
>  https://en.everybodywiki.com/Mich_Talebzadeh
>
> Disclaimer: The information provided is correct to the best of my
> knowledge but of course cannot be guaranteed. It is essential to note
> that, as with any advice, "one test result is worth one-thousand
> expert opinions" (Wernher von Braun).
>
