[GitHub] [hive] kishendas commented on a change in pull request #1217: Hive 23767 : Pass ValidWriteIdList in get_* HMS API requests from HMS Client
kishendas commented on a change in pull request #1217:
URL: https://github.com/apache/hive/pull/1217#discussion_r453110112

## File path: standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java

@@ -1980,6 +1989,7 @@ public boolean listPartitionsSpecByExpr(String catName, String dbName, String tb
     if (maxParts >= 0) {
       req.setMaxParts(maxParts);
     }
+    req.setValidWriteIdList(validWriteIdList);

Review comment:
I have added comments in the getValidWriteIdList method in the HMS Client.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at: us...@infra.apache.org
To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org
For additional commands, e-mail: gitbox-h...@hive.apache.org
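The line added in this hunk attaches the caller's ValidWriteIdList to the request before it is sent to the metastore. A minimal, self-contained sketch of the null-safe propagation pattern this PR uses at its call sites (the request type below is a hypothetical stand-in for the Thrift-generated one, not Hive's actual class):

```java
public class Main {
    // Hypothetical stand-in for a Thrift-generated get_* request type; only the
    // field relevant to this PR is modeled.
    public static class PartitionRequest {
        private String validWriteIdList;
        public void setValidWriteIdList(String v) { this.validWriteIdList = v; }
        public String getValidWriteIdList() { return validWriteIdList; }
    }

    // Mirrors the null-safe propagation seen in the diffs of this PR:
    // req.setValidWriteIdList(validWriteIdList != null ? validWriteIdList.toString() : null);
    public static PartitionRequest buildRequest(Object validWriteIdList) {
        PartitionRequest req = new PartitionRequest();
        req.setValidWriteIdList(validWriteIdList != null ? validWriteIdList.toString() : null);
        return req;
    }

    public static void main(String[] args) {
        // An opaque serialized write-id list; non-transactional callers pass null.
        System.out.println(buildRequest("some-serialized-write-id-list").getValidWriteIdList());
        System.out.println(buildRequest(null).getValidWriteIdList());
    }
}
```

Carrying the list as a nullable string field keeps the request backward compatible: older callers that never resolve a write-id list simply leave the field unset.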
kishendas commented on a change in pull request #1217:
URL: https://github.com/apache/hive/pull/1217#discussion_r453104847

## File path: ql/src/test/org/apache/hadoop/hive/ql/parse/TestUpdateDeleteSemanticAnalyzer.java

@@ -232,6 +235,8 @@ public void setup() throws Exception {
     conf.setVar(HiveConf.ConfVars.HIVEMAPREDMODE, "nonstrict");
     conf.setVar(HiveConf.ConfVars.HIVE_TXN_MANAGER, "org.apache.hadoop.hive.ql.lockmgr.DbTxnManager");
     conf.setBoolVar(HiveConf.ConfVars.HIVE_IN_TEST, true);
+    conf.set(ValidTxnList.VALID_TXNS_KEY,

Review comment:
This test doesn't go through the Driver, but invokes SemanticAnalyzer directly for a transactional table, so I have to set the key explicitly. Otherwise it fails.
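The key set here is the serialized valid-transaction list that the Driver would normally publish on the session conf before analysis starts. A self-contained sketch of why a test that bypasses the Driver must set it explicitly (the conf map, key value, and resolver below are simplified stand-ins, not Hive's actual classes):

```java
import java.util.HashMap;
import java.util.Map;

public class Main {
    // Stand-in for ValidTxnList.VALID_TXNS_KEY; the real constant lives in
    // org.apache.hadoop.hive.common.ValidTxnList.
    public static final String VALID_TXNS_KEY = "hive.txn.valid.txns";

    // Stand-in for the write-id resolution path: it fails fast when no valid-txn
    // list was published on the conf, which is what happens when SemanticAnalyzer
    // is invoked directly instead of through the Driver.
    public static String resolveWriteIds(Map<String, String> conf) {
        String txns = conf.get(VALID_TXNS_KEY);
        if (txns == null) {
            throw new IllegalStateException(VALID_TXNS_KEY + " is not set");
        }
        return "resolved-from:" + txns;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        // What the test setup does, in spirit: publish an (empty) valid-txn list
        // up front so later resolution does not fail.
        conf.put(VALID_TXNS_KEY, "empty-txn-list");
        System.out.println(resolveWriteIds(conf));
    }
}
```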
kishendas commented on a change in pull request #1217:
URL: https://github.com/apache/hive/pull/1217#discussion_r453104603

## File path: ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java

@@ -3911,20 +3962,27 @@ public boolean dropPartition(String dbName, String tableName, List parti
  * @param tbl The table containing the partitions.
  * @param expr A serialized expression for partition predicates.
  * @param conf Hive config.
- * @param result the resulting list of partitions
+ * @param partitions the resulting list of partitions
  * @return whether the resulting list contains partitions which may or may not match the expr
  */
 public boolean getPartitionsByExpr(Table tbl, ExprNodeGenericFuncDesc expr, HiveConf conf,
-    List result) throws HiveException, TException {
-  assert result != null;
+    List partitions) throws HiveException, TException {
+
+  assert partitions != null;

Review comment:
Done
kishendas commented on a change in pull request #1217:
URL: https://github.com/apache/hive/pull/1217#discussion_r453104328

## File path: ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java

@@ -1459,6 +1464,22 @@ public Table getTable(final String dbName, final String tableName, boolean throw
     return new Table(tTable);
 }

+ /**
+  * Get ValidWriteIdList for the current transaction.
+  * @param dbName
+  * @param tableName
+  * @return
+  * @throws LockException
+  */
+ private ValidWriteIdList getValidWriteIdList(String dbName, String tableName) throws LockException {
+   ValidWriteIdList validWriteIdList = null;
+   long txnId = SessionState.get().getTxnMgr() != null ? SessionState.get().getTxnMgr().getCurrentTxnId() : 0;
+   if (txnId > 0) {

Review comment:
That's taken care of in the getTableValidWriteIdListWithTxnList() method. It fails if the valid_txns_key is not set. That's why I had to explicitly set

conf.set(ValidTxnList.VALID_TXNS_KEY, new ValidReadTxnList(new long[0], new BitSet(), 1000, Long.MAX_VALUE).writeToString());

in one of the tests, which doesn't go through the Driver flow but invokes SemanticAnalyzer directly for a transactional table.
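The helper in this hunk only resolves a write-id list when a transaction is actually open. A self-contained sketch of that guard (the transaction-manager interface and the placeholder return value are hypothetical stand-ins; the real code calls into SessionState and getTableValidWriteIdListWithTxnList()):

```java
public class Main {
    // Hypothetical stand-in for the session's transaction manager; getCurrentTxnId()
    // returns 0 when no transaction is open, matching the guard in the diff.
    public interface TxnManager {
        long getCurrentTxnId();
    }

    // Mirrors the shape of getValidWriteIdList() in the diff: only resolve a
    // write-id list when a transaction is open (txnId > 0); return null otherwise
    // so non-transactional callers send no list at all.
    public static String getValidWriteIdList(TxnManager txnMgr, String dbName, String tableName) {
        long txnId = txnMgr != null ? txnMgr.getCurrentTxnId() : 0;
        if (txnId > 0) {
            // The real code would call getTableValidWriteIdListWithTxnList() here;
            // a placeholder stands in for the resolved list.
            return dbName + "." + tableName + ":resolved";
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(getValidWriteIdList(null, "db", "t"));      // no txn manager
        System.out.println(getValidWriteIdList(() -> 42L, "db", "t")); // open transaction
    }
}
```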
kishendas commented on a change in pull request #1217:
URL: https://github.com/apache/hive/pull/1217#discussion_r453103330

## File path: ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java

@@ -1459,6 +1464,22 @@ public Table getTable(final String dbName, final String tableName, boolean throw
     return new Table(tTable);
 }

+ /**
+  * Get ValidWriteIdList for the current transaction.

Review comment:
Good idea, added additional details.
kishendas commented on a change in pull request #1217:
URL: https://github.com/apache/hive/pull/1217#discussion_r453103050

## File path: ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java

@@ -3580,7 +3596,17 @@ public boolean dropPartition(String dbName, String tableName, List parti
     List pvals = MetaStoreUtils.getPvals(t.getPartCols(), partSpec);
     try {
-      names = getMSC().listPartitionNames(dbName, tblName, pvals, max);
+      GetPartitionNamesPsRequest req = new GetPartitionNamesPsRequest();

Review comment:
I just scanned for the get_partition methods that are used in common flows, and will pick up the remaining ones in separate patches. This is to avoid dealing with hundreds of test failures, which would just delay the entire patch.
kishendas commented on a change in pull request #1217:
URL: https://github.com/apache/hive/pull/1217#discussion_r451334032

## File path: ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java

@@ -3284,6 +3299,8 @@ public Partition getPartition(Table tbl, Map partSpec,
     }
     org.apache.hadoop.hive.metastore.api.Partition tpart = null;
     try {
+      // TODO: Either create a new getPartitionWithAuthInfo API with request/response format that takes
+      // ValidWriteIdList and tableId or, if this API is no longer required, remove all the references.

Review comment:
Just removed this TODO for now, as we can deal with that later.
kishendas commented on a change in pull request #1217:
URL: https://github.com/apache/hive/pull/1217#discussion_r451058731

## File path: ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java

@@ -3580,7 +3592,19 @@ public boolean dropPartition(String dbName, String tableName, List parti
     List pvals = MetaStoreUtils.getPvals(t.getPartCols(), partSpec);
     try {
-      names = getMSC().listPartitionNames(dbName, tblName, pvals, max);
+      if (AcidUtils.isTransactionalTable(t)) {
+        GetPartitionNamesPsRequest req = new GetPartitionNamesPsRequest();
+        req.setTblName(tblName);
+        req.setDbName(dbName);
+        req.setPartValues(pvals);
+        req.setMaxParts(max);
+        ValidWriteIdList validWriteIdList = getValidWriteIdList(dbName, tblName);
+        req.setValidWriteIdList(validWriteIdList != null ? validWriteIdList.toString() : null);
+        GetPartitionNamesPsResponse res = getMSC().listPartitionNamesRequest(req);
+        names = res.getNames();
+      } else {
+        names = getMSC().listPartitionNames(dbName, tblName, pvals, max);

Review comment:
Makes sense, done.

## File path: ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java

@@ -3602,9 +3626,27 @@ public boolean dropPartition(String dbName, String tableName, List parti
       exprBytes = SerializationUtilities.serializeExpressionToKryo(expr);
     }
     try {
+      String defaultPartitionName = HiveConf.getVar(conf, ConfVars.DEFAULTPARTITIONNAME);
-      names = getMSC().listPartitionNames(tbl.getCatalogName(), tbl.getDbName(),
-          tbl.getTableName(), defaultPartitionName, exprBytes, order, maxParts);
+
+      if (AcidUtils.isTransactionalTable(tbl)) {
+        PartitionsByExprRequest req =
+            new PartitionsByExprRequest(tbl.getDbName(), tbl.getTableName(), ByteBuffer.wrap(exprBytes));
+        req.setDefaultPartitionName(defaultPartitionName);
+        if (maxParts >= 0) {
+          req.setMaxParts(maxParts);
+        }
+        req.setOrder(order);
+        req.setCatName(tbl.getCatalogName());
+        ValidWriteIdList validWriteIdList = getValidWriteIdList(tbl.getDbName(), tbl.getTableName());
+        req.setValidWriteIdList(validWriteIdList != null ? validWriteIdList.toString() : null);
+        names = getMSC().listPartitionNames(req);
+      } else {
+        names = getMSC()
+            .listPartitionNames(tbl.getCatalogName(), tbl.getDbName(), tbl.getTableName(), defaultPartitionName,

Review comment:
Makes sense, done.

## File path: ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java

@@ -3917,12 +3972,26 @@ public boolean dropPartition(String dbName, String tableName, List parti
 public boolean getPartitionsByExpr(Table tbl, ExprNodeGenericFuncDesc expr, HiveConf conf,
     List result) throws HiveException, TException {
     assert result != null;
+    boolean hasUnknownParts;
     byte[] exprBytes = SerializationUtilities.serializeExpressionToKryo(expr);
     String defaultPartitionName = HiveConf.getVar(conf, ConfVars.DEFAULTPARTITIONNAME);
-    List msParts =
-        new ArrayList<>();
-    boolean hasUnknownParts = getMSC().listPartitionsSpecByExpr(tbl.getDbName(),
-        tbl.getTableName(), exprBytes, defaultPartitionName, (short)-1, msParts);
+    List msParts = new ArrayList<>();
+
+    if (AcidUtils.isTransactionalTable(tbl)) {
+      PartitionsByExprRequest req = new PartitionsByExprRequest();
+      req.setDbName(tbl.getDbName());
+      req.setTblName((tbl.getTableName()));
+      req.setDefaultPartitionName(defaultPartitionName);
+      req.setMaxParts((short) -1);
+      req.setExpr(exprBytes);
+      ValidWriteIdList validWriteIdList = getValidWriteIdList(tbl.getDbName(), tbl.getTableName());
+      req.setValidWriteIdList(validWriteIdList != null ? validWriteIdList.toString() : null);
+      hasUnknownParts = getMSC().listPartitionsSpecByExpr(req, msParts);
+    } else {
+      hasUnknownParts = getMSC()
+          .listPartitionsSpecByExpr(tbl.getDbName(), tbl.getTableName(), exprBytes, defaultPartitionName, (short) -1,

Review comment:
Makes sense, done.
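Each of the three hunks above follows the same shape: transactional tables go through the new request/response API carrying the serialized ValidWriteIdList, while all other tables keep the legacy positional call. A self-contained sketch of that branch (the two list methods below are hypothetical stand-ins for the metastore client's listPartitionNamesRequest(req) and listPartitionNames(...) entry points):

```java
import java.util.Arrays;
import java.util.List;

public class Main {
    // Stand-in for the new request-based metastore call; it carries the write-id list.
    static List<String> listViaRequest(String db, String tbl, String writeIdList) {
        return Arrays.asList("via-request", db + "." + tbl, String.valueOf(writeIdList));
    }

    // Stand-in for the legacy positional call, which has no write-id parameter.
    static List<String> listLegacy(String db, String tbl) {
        return Arrays.asList("via-legacy", db + "." + tbl);
    }

    // Mirrors the branch repeated at each call site in the diff: only transactional
    // tables build the request object and resolve the ValidWriteIdList; everything
    // else keeps the legacy path untouched, avoiding churn in unrelated tests.
    public static List<String> listPartitionNames(boolean transactional, String db, String tbl,
            String writeIdList) {
        return transactional ? listViaRequest(db, tbl, writeIdList) : listLegacy(db, tbl);
    }

    public static void main(String[] args) {
        System.out.println(listPartitionNames(true, "db", "t", "serialized-write-ids"));
        System.out.println(listPartitionNames(false, "db", "t", null));
    }
}
```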
kishendas commented on a change in pull request #1217:
URL: https://github.com/apache/hive/pull/1217#discussion_r451057776

## File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/create/CreateTableOperation.java

@@ -62,7 +62,8 @@ public int execute() throws HiveException {
     if (desc.getReplicationSpec().isInReplicationScope()) {
       // If in replication scope, we should check if the object we're looking at exists, and if so,
       // trigger replace-mode semantics.
-      Table existingTable = context.getDb().getTable(tbl.getDbName(), tbl.getTableName(), false);
+      Table existingTable = context.getDb().getTable(tbl.getDbName(),
+          tbl.getTableName(), false, true);

Review comment:
Fixed