pvary commented on a change in pull request #2243:
URL: https://github.com/apache/hive/pull/2243#discussion_r633483880
##########
File path: iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergSerDe.java
##########
@@ -138,6 +148,17 @@ public void initialize(@Nullable Configuration configuration, Properties serDePr
      }
    }
+  private void createTableForCTAS(Configuration configuration, Properties serDeProperties) {
+    serDeProperties.setProperty(TableProperties.ENGINE_HIVE_ENABLED, "true");
+    serDeProperties.setProperty(InputFormatConfig.TABLE_SCHEMA, SchemaParser.toJson(tableSchema));
+    Catalogs.createTable(configuration, serDeProperties);
+    // set these in the global conf so that we can rollback the table in the lifecycle hook in case of failures
Review comment:
This looks like a good candidate to put into `QueryInfo`, or some other query-scoped place, instead of the global conf.
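To illustrate, a rough sketch only: `setCtasRollbackTable` is a hypothetical accessor (neither `QueryInfo` nor the SerDe has anything like it today), so this is just the shape of the idea, not a working patch:

```java
private void createTableForCTAS(Configuration configuration, Properties serDeProperties) {
  serDeProperties.setProperty(TableProperties.ENGINE_HIVE_ENABLED, "true");
  serDeProperties.setProperty(InputFormatConfig.TABLE_SCHEMA, SchemaParser.toJson(tableSchema));
  Catalogs.createTable(configuration, serDeProperties);
  // record the created table on a query-scoped holder instead of the global conf,
  // so concurrent queries cannot overwrite each other's rollback information
  queryInfo.setCtasRollbackTable(serDeProperties.getProperty(Catalogs.NAME));
}
```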
##########
File path: iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergStorageHandlerWithEngine.java
##########
@@ -514,6 +519,75 @@ public void testInsertOverwritePartitionedTable() throws IOException {
    HiveIcebergTestUtils.validateData(table, expected, 0);
  }
+  @Test
+  public void testCTASFromHiveTable() {
+    Assume.assumeTrue("CTAS target table is supported only for HiveCatalog tables",
+        testTableType == TestTables.TestTableType.HIVE_CATALOG);
Review comment:
Why? What is blocking us?
##########
File path: iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveIcebergStorageHandlerWithEngine.java
##########
@@ -514,6 +519,75 @@ public void testInsertOverwritePartitionedTable() throws IOException {
    HiveIcebergTestUtils.validateData(table, expected, 0);
  }
+  @Test
+  public void testCTASFromHiveTable() {
+    Assume.assumeTrue("CTAS target table is supported only for HiveCatalog tables",
+        testTableType == TestTables.TestTableType.HIVE_CATALOG);
+
+    shell.executeStatement("CREATE TABLE source (id bigint, name string) PARTITIONED BY (dept string) STORED AS ORC");
+    shell.executeStatement("INSERT INTO source VALUES (1, 'Mike', 'HR'), (2, 'Linda', 'Finance')");
+
+    shell.executeStatement(String.format(
+        "CREATE TABLE target STORED BY '%s' %s TBLPROPERTIES ('%s'='%s') AS SELECT * FROM source",
+        HiveIcebergStorageHandler.class.getName(),
+        testTables.locationForCreateTableSQL(TableIdentifier.of("default", "target")),
+        TableProperties.DEFAULT_FILE_FORMAT, fileFormat));
+
+    List<Object[]> objects = shell.executeStatement("SELECT * FROM target ORDER BY id");
+    Assert.assertEquals(2, objects.size());
+    Assert.assertArrayEquals(new Object[]{1L, "Mike", "HR"}, objects.get(0));
+    Assert.assertArrayEquals(new Object[]{2L, "Linda", "Finance"}, objects.get(1));
+  }
+
+  @Test
+  public void testCTASFromDifferentIcebergCatalog() {
Review comment:
Would it be better placed in `TestHiveIcebergStorageHandlerWithMultipleCatalogs`?
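Roughly what I have in mind there; the `testTables2` field and the exact helper names are my assumptions about how that two-catalog suite is wired up, so treat this as a skeleton only:

```java
@Test
public void testCTASFromDifferentIcebergCatalog() throws IOException {
  // source table lives in the non-Hive catalog registered by the suite
  testTables2.createTable(shell, "source", HiveIcebergStorageHandlerTestUtils.CUSTOMER_SCHEMA,
      fileFormat, HiveIcebergStorageHandlerTestUtils.CUSTOMER_RECORDS);
  // CTAS target goes to the HiveCatalog
  shell.executeStatement(String.format(
      "CREATE TABLE target STORED BY '%s' AS SELECT * FROM source",
      HiveIcebergStorageHandler.class.getName()));
  List<Object[]> rows = shell.executeStatement("SELECT * FROM target");
  Assert.assertEquals(HiveIcebergStorageHandlerTestUtils.CUSTOMER_RECORDS.size(), rows.size());
}
```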
##########
File path: iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveShell.java
##########
@@ -216,6 +216,9 @@ private HiveConf initializeConf() {
    // enables vectorization on Tez
    hiveConf.set("tez.mrreader.config.update.properties", "hive.io.file.readcolumn.names,hive.io.file.readcolumn.ids");
+    // set lifecycle hooks
+    hiveConf.setVar(HiveConf.ConfVars.HIVE_QUERY_LIFETIME_HOOKS, HiveIcebergCTASHook.class.getName());
Review comment:
I have seen several hooks already. Maybe we would like to keep them as a
single class?
##########
File path: iceberg/iceberg-handler/src/test/java/org/apache/iceberg/mr/hive/TestHiveShell.java
##########
@@ -216,6 +216,9 @@ private HiveConf initializeConf() {
    // enables vectorization on Tez
    hiveConf.set("tez.mrreader.config.update.properties", "hive.io.file.readcolumn.names,hive.io.file.readcolumn.ids");
+    // set lifecycle hooks
+    hiveConf.setVar(HiveConf.ConfVars.HIVE_QUERY_LIFETIME_HOOKS, HiveIcebergCTASHook.class.getName());
Review comment:
I have seen another Iceberg hook already. Maybe we would like to keep them as a single class, even if they implement different interfaces? Just an idea I am playing around with.
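To show what I mean, a sketch of one class implementing both hook interfaces, registered via `hive.query.lifetime.hooks` and `hive.exec.pre/post.hooks` respectively. The class name and method bodies are illustrative only, not something from this PR:

```java
import org.apache.hadoop.hive.ql.hooks.ExecuteWithHookContext;
import org.apache.hadoop.hive.ql.hooks.HookContext;
import org.apache.hadoop.hive.ql.hooks.QueryLifeTimeHook;
import org.apache.hadoop.hive.ql.hooks.QueryLifeTimeHookContext;

// One entry point for all Iceberg hook logic, usable both as a query
// lifetime hook and as a pre/post execution hook.
public class HiveIcebergHook implements QueryLifeTimeHook, ExecuteWithHookContext {

  @Override
  public void beforeCompile(QueryLifeTimeHookContext ctx) {
    // no-op for now
  }

  @Override
  public void afterCompile(QueryLifeTimeHookContext ctx, boolean hasError) {
    // no-op for now
  }

  @Override
  public void beforeExecution(QueryLifeTimeHookContext ctx) {
    // no-op for now
  }

  @Override
  public void afterExecution(QueryLifeTimeHookContext ctx, boolean hasError) {
    // the CTAS cleanup would move here: drop the pre-created Iceberg table on failure
  }

  @Override
  public void run(HookContext hookContext) throws Exception {
    // logic from the other Iceberg execution hook would move here
  }
}
```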
##########
File path: ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezTask.java
##########
@@ -361,7 +361,7 @@ private void collectCommitInformation(TezWork work) throws IOException, TezExcep
          .filter(name -> name.endsWith("HiveIcebergNoJobCommitter")).isPresent();
      // we should only consider jobs with Iceberg output committer and a data sink
      if (hasIcebergCommitter && !vertex.getDataSinks().isEmpty()) {
-       String tableLocationRoot = jobConf.get("location");
+       String tableLocationRoot = jobConf.get("iceberg.mr.table.location");
Review comment:
What is this change?
Shall we use a constant?
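If `ql` cannot reference iceberg-mr's `InputFormatConfig` directly (I assume that is why the literal is here), a local constant would at least name the magic string. A minimal sketch:

```java
// Must stay in sync with the key the Iceberg storage handler uses to publish
// the table root location (InputFormatConfig.TABLE_LOCATION in iceberg-mr,
// if I am not mistaken).
private static final String ICEBERG_TABLE_LOCATION = "iceberg.mr.table.location";
```

and then at the call site:

```java
String tableLocationRoot = jobConf.get(ICEBERG_TABLE_LOCATION);
```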
##########
File path: ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
##########
@@ -7759,6 +7759,18 @@ protected Operator genFileSinkPlan(String dest, QB qb, Operator input)
            .getMsg(destinationPath.toUri().toString()));
      }
    }
+    // handle direct insert CTAS case
+    // for direct insert CTAS, the table creation DDL is not added to the task plan in TaskCompiler,
+    // therefore we need to add the InsertHook here manually so that HiveMetaHook#commitInsertTable is called
Review comment:
Where do we add the InsertHook here? I do not really grok the comment.
##########
File path: ql/src/java/org/apache/hadoop/hive/ql/parse/TaskCompiler.java
##########
@@ -134,6 +135,16 @@ public void compile(final ParseContext pCtx,
    boolean isCStats = pCtx.getQueryProperties().isAnalyzeRewrite();
    int outerQueryLimit = pCtx.getQueryProperties().getOuterQueryLimit();
+    boolean directInsertCtas = false;
+    if (pCtx.getCreateTable() != null && pCtx.getCreateTable().getStorageHandler() != null) {
+      try {
+        directInsertCtas =
+            HiveUtils.getStorageHandler(conf, pCtx.getCreateTable().getStorageHandler()).directInsertCTAS();
+      } catch (HiveException e) {
Review comment:
When do we get `HiveException`?
##########
File path: ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
##########
@@ -7759,6 +7759,18 @@ protected Operator genFileSinkPlan(String dest, QB qb, Operator input)
            .getMsg(destinationPath.toUri().toString()));
      }
    }
+    // handle direct insert CTAS case
+    // for direct insert CTAS, the table creation DDL is not added to the task plan in TaskCompiler,
+    // therefore we need to add the InsertHook here manually so that HiveMetaHook#commitInsertTable is called
+    if (qb.isCTAS() && tableDesc != null && tableDesc.getStorageHandler() != null) {
+      try {
+        if (HiveUtils.getStorageHandler(conf, tableDesc.getStorageHandler()).directInsertCTAS()) {
+          createPreInsertDesc(destinationTable, false);
+        }
+      } catch (HiveException e) {
Review comment:
When do we get `HiveException`? Why swallow it?
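For context, `HiveUtils.getStorageHandler` throws `HiveException` when the handler class cannot be loaded or instantiated, and that seems worth failing the compilation for rather than ignoring. A sketch of what I would expect (untested):

```java
if (qb.isCTAS() && tableDesc != null && tableDesc.getStorageHandler() != null) {
  try {
    if (HiveUtils.getStorageHandler(conf, tableDesc.getStorageHandler()).directInsertCTAS()) {
      createPreInsertDesc(destinationTable, false);
    }
  } catch (HiveException e) {
    // surface the misconfigured storage handler instead of swallowing it
    throw new SemanticException(e);
  }
}
```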
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]