stoty commented on code in PR #1569:
URL: https://github.com/apache/phoenix/pull/1569#discussion_r1127449923
##########
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java:
##########
@@ -588,6 +588,49 @@ public InternalScanner preCompact(ObserverContext<RegionCoprocessorEnvironment>
         @Override
         public InternalScanner run() throws Exception {
             InternalScanner internalScanner = scanner;
+            boolean isDisabled = false;
+            if (request.isMajor()) {
+                final String fullTableName =
+                        c.getEnvironment().getRegion().getRegionInfo().getTable().getNameAsString();
+                if (!PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME.equals(fullTableName)) {
+                    try (PhoenixConnection conn = QueryUtil.getConnectionOnServer(
+                            compactionConfig).unwrap(PhoenixConnection.class)) {
+                        PTable table = PhoenixRuntime.getTableNoCache(conn, fullTableName);
+                        List<PTable>
Review Comment:
We could also check which coprocessors the data table has, and only apply
this logic if the old indexing is still in use.
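To make the suggestion concrete, here is a minimal self-contained sketch of such a gate. The coprocessor class names are taken from the Phoenix codebase but are unverified here, and `usesOldIndexing` is a hypothetical helper; in the actual observer the names would come from the region's `TableDescriptor` rather than a plain `Set`:

```java
import java.util.Set;

public class OldIndexingCheck {
    // Classic (pre-IndexRegionObserver) Phoenix indexing coprocessor.
    // Class name assumed from the Phoenix codebase; treat as illustrative.
    static final String OLD_INDEXER = "org.apache.phoenix.hbase.index.Indexer";

    // Sketch: only apply the disabled-index logic when the table still
    // loads the old Indexer coprocessor.
    static boolean usesOldIndexing(Set<String> coprocessorClassNames) {
        return coprocessorClassNames.contains(OLD_INDEXER);
    }

    public static void main(String[] args) {
        // Old Indexer loaded: the disabled-index logic would apply.
        System.out.println(usesOldIndexing(Set.of(OLD_INDEXER)));
        // Only the new indexing coprocessor (name also assumed): it would not.
        System.out.println(usesOldIndexing(
                Set.of("org.apache.phoenix.hbase.index.IndexRegionObserver")));
    }
}
```

In the real coprocessor the set would be derived from the table descriptor's coprocessor specifications instead of being passed in.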
##########
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java:
##########
@@ -588,6 +588,49 @@ public InternalScanner preCompact(ObserverContext<RegionCoprocessorEnvironment>
         @Override
         public InternalScanner run() throws Exception {
             InternalScanner internalScanner = scanner;
+            boolean isDisabled = false;
+            if (request.isMajor()) {
+                final String fullTableName =
+                        c.getEnvironment().getRegion().getRegionInfo().getTable().getNameAsString();
+                if (!PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME.equals(fullTableName)) {
+                    try (PhoenixConnection conn = QueryUtil.getConnectionOnServer(
+                            compactionConfig).unwrap(PhoenixConnection.class)) {
+                        PTable table = PhoenixRuntime.getTableNoCache(conn, fullTableName);
+                        List<PTable> indexes = PTableType.INDEX.equals(table.getType())
+                                ? Lists.newArrayList(table)
+                                : table.getIndexes();
+                        // FIXME need to handle views and indexes on views as well
+                        for (PTable index : indexes) {
+                            if (index.getIndexDisableTimestamp() != 0) {
+                                LOGGER.info("Modifying major compaction scanner to retain "
+                                        + "deleted cells for a table with disabled index: "
+                                        + fullTableName);
+                                isDisabled = true;
+                                break;
+                            }
+                        }
+                    } catch (Exception e) {
+                        if (e instanceof TableNotFoundException) {
+                            LOGGER.debug("Ignoring HBase table that is not a Phoenix table: "
+                                    + fullTableName);
+                            // non-Phoenix HBase tables won't be found, do nothing
+                        } else {
+                            LOGGER.error("Unable to modify compaction scanner to retain deleted "
+                                    + "cells for a table with disabled Index; " + fullTableName, e);
+                        }
+                    }
+                }
+            }
+            if (!isDisabled) {
+                internalScanner = new StoreCompactionScanner(c.getEnvironment(), store, scanner,
Review Comment:
IIUC this disables the new scanner for this case and falls back to the default
one. TBH I haven't analyzed the new logic in detail, but wouldn't that defeat
the original purpose of this patch for this case?
Also, the old code sets keepDeletedCells and the TTL, while this one doesn't.
If we use the default scanner, what causes deletes to be kept?
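For reference, the decision the hunk above makes reduces to: flag the compaction if any index (or the index table itself, when compacting an index) carries a non-zero disable timestamp. A self-contained sketch of just that decision, with `PTable` replaced by bare timestamps (`hasDisabledIndex` is a hypothetical stand-in, not Phoenix API):

```java
import java.util.List;

public class DisabledIndexCheck {
    // Mirrors the loop in the diff: any non-zero index disable timestamp
    // means deleted cells must be retained during major compaction.
    static boolean hasDisabledIndex(List<Long> indexDisableTimestamps) {
        for (long ts : indexDisableTimestamps) {
            if (ts != 0) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // All indexes active: the new StoreCompactionScanner path is taken.
        System.out.println(hasDisabledIndex(List.of(0L, 0L)));
        // One disabled index: the patch skips the new scanner instead.
        System.out.println(hasDisabledIndex(List.of(0L, 1617000000L)));
    }
}
```

The review question stands either way: when this check trips and the default scanner is used, something else must still arrange for deleted cells to be kept, which the old keepDeletedCells/TTL handling did explicitly.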
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]