Neer393 commented on code in PR #6379:
URL: https://github.com/apache/hive/pull/6379#discussion_r3069743840
##########
ql/src/java/org/apache/hadoop/hive/llap/LlapHiveUtils.java:
##########
@@ -74,18 +75,21 @@ public static PartitionDesc partitionDescForPath(Path path, Map<Path, PartitionD
     return part;
   }

-  public static CacheTag getDbAndTableNameForMetrics(Path path, boolean includeParts,
-      PartitionDesc part) {
-
-    // Fallback to legacy cache tag creation logic.
+  /**
+   * Builds a {@link CacheTag} for the given path and partition descriptor.
+   * The catalog name is derived from the {@link PartitionDesc} when available, falling back
+   * to {@link Warehouse#DEFAULT_CATALOG_NAME} when {@code part} is null.
+   */
+  public static CacheTag getCacheTag(Path path, boolean includeParts, PartitionDesc part) {
     if (part == null) {
-      return CacheTag.build(LlapUtil.getDbAndTableNameForMetrics(path, includeParts));
+      return CacheTag.build(
+          Warehouse.DEFAULT_CATALOG_NAME,
+          LlapUtil.getDbAndTableNameForMetrics(path, includeParts));
Review Comment:
Yes, I understand, but all of the current usages pass a partition.
As for your question about non-partitioned tables: I tried, but apart from the partition descriptor there is no way of determining the catalog. Could you suggest something?
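To make the fallback under discussion concrete, here is a minimal, self-contained sketch of the decision the new `getCacheTag` method makes: prefer catalog information carried by the partition descriptor, and fall back to the default catalog (`"hive"` in the Hive metastore) only when no descriptor is available. The class and method names below are illustrative, not the actual Hive API:

```java
// Hypothetical sketch of the catalog fallback discussed in this review.
// "hive" mirrors Warehouse.DEFAULT_CATALOG_NAME; CatalogFallbackSketch and
// catalogFor are illustrative names, not part of the Hive codebase.
public class CatalogFallbackSketch {
  static final String DEFAULT_CATALOG_NAME = "hive";

  // Prefer the catalog derived from the partition descriptor; for
  // non-partitioned paths (no descriptor), fall back to the default.
  public static String catalogFor(String partCatalog) {
    return partCatalog != null ? partCatalog : DEFAULT_CATALOG_NAME;
  }

  public static void main(String[] args) {
    System.out.println(catalogFor("my_catalog")); // descriptor present
    System.out.println(catalogFor(null));         // fallback case
  }
}
```

This illustrates the reviewer's point: once `part` is null, the default catalog is the only remaining choice, because the path alone does not encode a catalog name.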
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]