Neer393 commented on code in PR #6379:
URL: https://github.com/apache/hive/pull/6379#discussion_r3198950752
##########
ql/src/java/org/apache/hadoop/hive/llap/LlapHiveUtils.java:
##########
@@ -74,18 +75,21 @@ public static PartitionDesc partitionDescForPath(Path path, Map<Path, PartitionD
     return part;
   }
-  public static CacheTag getDbAndTableNameForMetrics(Path path, boolean includeParts,
-      PartitionDesc part) {
-
-    // Fallback to legacy cache tag creation logic.
+  /**
+   * Builds a {@link CacheTag} for the given path and partition descriptor.
+   * The catalog name is derived from the {@link PartitionDesc} when available, falling back
+   * to {@link Warehouse#DEFAULT_CATALOG_NAME} when {@code part} is null.
+   */
+  public static CacheTag getCacheTag(Path path, boolean includeParts, PartitionDesc part) {
     if (part == null) {
-      return CacheTag.build(LlapUtil.getDbAndTableNameForMetrics(path, includeParts));
+      return CacheTag.build(
+          Warehouse.DEFAULT_CATALOG_NAME,
+          LlapUtil.getDbAndTableNameForMetrics(path, includeParts));
Review Comment:
No, I don't think so. The implementation of `LlapUtil::getDbAndTableNameForMetrics` tries to derive the db and table name from the path in storage: given a path such as `hdfs://namenode:8020/warehouse/tablespace/external/db1.db/tb1`, it looks for the segment ending in `.db`, takes `db1` as the db name, and treats the very next segment after the `/` as the table name.
I don't like this way of determining the db and table name, but to answer your question, it still does not solve our problem: we can't find the catalog name from the path because, AFAIK, we don't store the catalog name on the path.
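To illustrate the heuristic described above, here is a minimal sketch (hypothetical class and method names, not the actual `LlapUtil` implementation) of parsing db and table from a storage path. Note that no catalog component appears anywhere in the path, which is why the catalog can't be recovered this way:

```java
// Hypothetical sketch of the ".db" path heuristic: the segment ending in
// ".db" is assumed to be the database directory and the segment right
// after it the table directory. This is NOT the real LlapUtil code.
public class DbTableFromPathSketch {

  /** Returns {db, table} if a ".db" segment is found, otherwise null. */
  public static String[] dbAndTable(String path) {
    String[] segments = path.split("/");
    for (int i = 0; i < segments.length - 1; i++) {
      if (segments[i].endsWith(".db")) {
        // Strip the ".db" suffix to get the database name.
        String db = segments[i].substring(0, segments[i].length() - 3);
        // The next segment is taken to be the table name.
        return new String[] { db, segments[i + 1] };
      }
    }
    return null; // no ".db" segment: db/table cannot be determined
  }

  public static void main(String[] args) {
    String[] result = dbAndTable(
        "hdfs://namenode:8020/warehouse/tablespace/external/db1.db/tb1");
    System.out.println(result[0] + "." + result[1]); // prints "db1.tb1"
  }
}
```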
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]