Glad to hear that!
And I hope it helps anyone else facing the same problem.
---------- Forwarded message ---------
From: Bansal, Jaimita
Date: Wed, Feb 1, 2023, 03:15
Subject: RE: [Spark Standalone Mode] How to read from kerberised HDFS in spark standalone mode
To: Wei Yan
Cc: Chittajallu, Rajiv
> this one, but today, compatible schemas must only add/remove
> columns and cannot change types.
>
> You could try creating different dataframes and unionAll them. Coercions
> should be inserted automatically in that case.
>
> On Mon, May 11, 2015 at 3:37 PM, Wei Yan wrote:
Thanks for the reply, Michael.
The problem is, if I set "spark.sql.parquet.useDataSourceApi" to true,
Spark cannot create a DataFrame. The exception shows it "failed to merge
incompatible schemas". I think it means that the "int" schema cannot
be merged with the "long" one.
Does it mean that
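For illustration, here is a toy model in plain Python (not Spark's actual implementation) of the merge rule described above: columns may be added or removed, but a type conflict such as int vs long raises a merge error like the one in the exception.

```python
def merge_schemas(left, right):
    """Toy schema merge over {column: type} dicts: union of columns,
    but same-named columns must have identical types (no widening)."""
    merged = dict(left)
    for column, typ in right.items():
        if column in merged and merged[column] != typ:
            raise ValueError(
                f"Failed to merge incompatible schemas: column '{column}' "
                f"is '{merged[column]}' in one file and '{typ}' in the other"
            )
        merged[column] = typ
    return merged

# Adding a column merges fine:
print(merge_schemas({"id": "int"}, {"id": "int", "name": "string"}))
# A type change on the same column does not, as in the exception above:
try:
    merge_schemas({"id": "int"}, {"id": "long"})
except ValueError as e:
    print(e)
```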
Hi, devs,
I met a problem when using Spark to read two parquet files with two
different versions of the schema. For example, the first file has one field
with "int" type, while the same field in the second file is a "long". I
thought Spark would automatically generate a merged schema "long", and use
t