zhangdove commented on issue #1831:
URL: https://github.com/apache/iceberg/issues/1831#issuecomment-734767996
@pvary Thank you for the information. Based on it, I did some investigation and found the following.
The Iceberg table is created using the Hive Catalog, and the table's SerDe properties initially look normal:
```
| SerDe Library: | org.apache.iceberg.mr.hive.HiveIcebergSerDe | NULL |
| InputFormat:   | null                                        | NULL |
| OutputFormat:  | null                                        | NULL |
```
However, when I write data to the table from Spark, as follows, the SerDe properties of the table get modified:
```scala
df.writeTo(s"hive_prod.db.tb").overwrite(functions.lit(true))
df.writeTo(s"hive_prod.db.tb").overwritePartitions()
df.writeTo(s"hive_prod.db.tb").append()
```
I am still learning this part of the Hive integration, so I am not very familiar with it. I have two rough ideas:
a) Preserve the value of `TableProperties.ENGINE_HIVE_ENABLED`. I am not sure whether the property is reloaded when Iceberg is written to from Spark.
b) Set the `TableProperties.ENGINE_HIVE_ENABLED` property to true when building the Hive Catalog and add it to the conf (a rough sketch follows the link below):
https://github.com/apache/iceberg/blob/master/spark3/src/main/java/org/apache/iceberg/spark/SparkCatalog.java#L114
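For idea (b), here is a minimal sketch of what I have in mind, assuming the Hadoop `Configuration` built in `SparkCatalog` is the one that eventually reaches the Hive table operations. The config key `iceberg.engine.hive.enabled` and the helper class/method names are my own assumptions for illustration, not the actual Iceberg implementation:
```java
import org.apache.hadoop.conf.Configuration;

public class HiveEngineConf {

  // Assumed Hadoop config key mirroring the intent of
  // TableProperties.ENGINE_HIVE_ENABLED; the real key name may differ.
  private static final String ENGINE_HIVE_ENABLED = "iceberg.engine.hive.enabled";

  /**
   * Returns a copy of the given Hadoop conf with the Hive-engine flag set,
   * so the Hive-specific SerDe/InputFormat/OutputFormat information is kept
   * when Spark commits to the table through the Hive catalog.
   */
  public static Configuration withHiveEngineEnabled(Configuration base) {
    Configuration conf = new Configuration(base);
    conf.setBoolean(ENGINE_HIVE_ENABLED, true);
    return conf;
  }
}
```
The idea would be to apply something like this to the conf before the `HiveCatalog` is constructed in `SparkCatalog`, so a write from Spark does not strip the Hive storage descriptor settings.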
I will run some tests next, but I would appreciate feedback from everyone if possible.