A view is essentially a stored SQL query. Sharing views between Spark and
Hive is fragile because the two systems speak different SQL dialects: they
may interpret the same view query differently and produce unexpected
behavior.

In this case, Spark infers a decimal type for gender * 0.3 - 0.1 while Hive
infers double. The view schema was fixed by Hive at creation time, so it no
longer matches what the view's SQL query produces when Spark reads the
view. You need to re-create this view using Spark. In fact, I think the
same applies to every Hive view that we want to use from Spark.
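To make the divergence concrete, here is a minimal sketch in plain Python (not Spark or Hive); the `gender` value is a hypothetical stand-in for the integer column. Hive evaluates the literals 0.3 and 0.1 as DOUBLE (binary floating point), while Spark parses them as exact DECIMAL literals, so the same expression can yield both a different type and a different value:

```python
from decimal import Decimal

# Hypothetical value of the integer `gender` column from the view's query.
gender = 1

# Hive-style: the literals are doubles, so the arithmetic is binary
# floating point and picks up the usual rounding error.
as_double = gender * 0.3 - 0.1

# Spark-style: the literals are exact decimals, so the result is an
# exact decimal value (Spark tracks precision/scale, e.g. decimal(13,1)).
as_decimal = gender * Decimal("0.3") - Decimal("0.1")

print(as_double)   # 0.19999999999999998
print(as_decimal)  # 0.2
```

This is why the Hive-recorded schema (double) and the Spark-inferred type (decimal) cannot be reconciled by an implicit cast, and why re-creating the view in Spark, or adding an explicit CAST in the view query, is needed.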

On Wed, May 18, 2022 at 7:03 PM beliefer <belie...@163.com> wrote:

> During a migration from Hive to Spark, we hit a problem with the SQL used
> to create views in Hive: SQL that legally creates a view in Hive raises an
> error when executed in Spark SQL.
>
> The SQL is as follows:
>
> CREATE VIEW test_db.my_view AS
> SELECT
>   CASE
>     WHEN age > 12 THEN gender * 0.3 - 0.1
>   END AS TT,
>   gender,
>   age,
>   careers,
>   education
> FROM
>   test_db.my_table;
>
> The error message is as follows:
>
> Cannot up cast TT from decimal(13, 1) to double.
> The type path of the target object is:
>
> You can either add an explicit cast to the input data or choose a higher
> precision type of the field in the target object.
>
> *How should we solve this problem?*
>
