TangYan-1 commented on issue #2927:
URL: https://github.com/apache/iceberg/issues/2927#issuecomment-893288369
@pvary Thanks for your guidance. I'm now trying to use Iceberg with a Hadoop
catalog. I created an Iceberg table with a Spark job and then created an
overlay table for it in Hive. When I query that table with Presto/Trino, it
fails with the error below. Do you have any ideas? My steps and the error
follow, together with a couple of checks and a workaround idea after the
error output.
1. spark3-shell --jars iceberg-spark3-runtime-0.11.1.jar \
     --conf spark.sql.catalog.hadoop_catalog=org.apache.iceberg.spark.SparkCatalog \
     --conf spark.sql.catalog.hadoop_catalog.type=hadoop \
     --conf spark.sql.catalog.hadoop_catalog.warehouse=/iceberg/warehouse

   spark.sql("CREATE TABLE hadoop_catalog.testdb.iceberg_table_in_hdfs (report string, reporttime string, reporttype string, reportid string, reportversion bigint, day string) USING iceberg PARTITIONED BY (day)")
2. beeline> CREATE EXTERNAL TABLE testdb.iceberg_table_in_hdfs (report string, reporttime string, reporttype string, reportid string, reportversion bigint, day string)
   STORED BY 'org.apache.iceberg.mr.hive.HiveIcebergStorageHandler'
   LOCATION '/iceberg/warehouse/testdb/iceberg_table_in_hdfs'
   TBLPROPERTIES (
     'iceberg.mr.catalog'='hadoop',
     'iceberg.mr.catalog.hadoop.warehouse.location'='/iceberg/warehouse');
3. presto:testdb> select * from iceberg_table_in_hdfs;
   Query 20210805_085251_00012_vs7zn failed: Table is missing [metadata_location] property: testdb.iceberg_table_in_hdfs
   io.prestosql.spi.PrestoException: Table is missing [metadata_location] property: testdb.iceberg_table_in_hdfs
       at io.prestosql.plugin.iceberg.HiveTableOperations.refresh(HiveTableOperations.java:151)
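
As a sanity check for step 1, my understanding is that the Hadoop catalog keeps the table metadata next to the data under the warehouse path, so the metadata directory should contain vN.metadata.json files and a version-hint.text file (the exact version number depends on how many commits the table has had):

   # check that step 1 actually produced Iceberg metadata under the Hadoop catalog warehouse
   hdfs dfs -ls /iceberg/warehouse/testdb/iceberg_table_in_hdfs/metadata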
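
For step 2, my understanding (please correct me if I'm wrong) is that an overlay created with iceberg.mr.catalog=hadoop delegates everything to the Hadoop catalog and does not store a metadata_location property in the Hive Metastore, which looks like exactly what the Presto/Trino Iceberg connector is complaining about. A way to confirm from beeline:

   -- if metadata_location is absent from the output, that would match the Presto error above
   beeline> SHOW TBLPROPERTIES testdb.iceberg_table_in_hdfs;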
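
In case it helps, one workaround I'm considering is to create the table through an Iceberg Hive catalog instead of the Hadoop catalog, so that the HiveCatalog itself registers metadata_location in the Metastore and Presto/Trino can resolve it. A rough sketch of what I mean (the metastore URI is a placeholder for my environment, the table name is a new one to avoid clashing with the overlay above, and I haven't verified this end to end yet):

   spark3-shell --jars iceberg-spark3-runtime-0.11.1.jar \
     --conf spark.sql.catalog.hive_catalog=org.apache.iceberg.spark.SparkCatalog \
     --conf spark.sql.catalog.hive_catalog.type=hive \
     --conf spark.sql.catalog.hive_catalog.uri=thrift://<metastore-host>:9083

   spark.sql("CREATE TABLE hive_catalog.testdb.iceberg_table_in_hdfs_hive (report string, reporttime string, reporttype string, reportid string, reportversion bigint, day string) USING iceberg PARTITIONED BY (day)")

Would that be the recommended way to make the table visible to Presto/Trino, or is there a way to keep the Hadoop catalog and still expose metadata_location?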