Tanuj Khurana created PHOENIX-7574:
--------------------------------------
Summary: Phoenix Compaction doesn't correctly handle DeleteFamilyVersion markers
Key: PHOENIX-7574
URL: https://issues.apache.org/jira/browse/PHOENIX-7574
Project: Phoenix
Issue Type: Bug
Affects Versions: 5.2.1, 5.2.0
Reporter: Tanuj Khurana
Attachments: delete_family_version_data_loss.txt
In Phoenix, the read repair process on global indexes can insert
DeleteFamilyVersion markers when there is an unverified index row update but no
corresponding data table update. I found that Phoenix compaction purges the
entire row instead of just the marked version, leading to data loss. I created
a test case in TableTTLIT to reproduce the problem.
[^delete_family_version_data_loss.txt]
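For context on why the purge is incorrect: in HBase, a DeleteFamilyVersion marker (the kind placed by {{Delete.addFamilyVersion}}) masks only the cells of a column family at exactly its timestamp, whereas a DeleteFamily marker masks all cells at or below its timestamp. A standalone sketch of that masking rule (illustrative only, not Phoenix or HBase code; the class and method names are mine):

{code:java}
// Minimal model of HBase family delete-marker masking semantics.
// Hypothetical helper for illustration; not part of Phoenix or HBase.
public class DeleteMarkerSemanticsSketch {

    public enum MarkerType { DELETE_FAMILY, DELETE_FAMILY_VERSION }

    // Returns true if a cell with timestamp cellTs is masked by the given marker.
    public static boolean isMasked(MarkerType marker, long markerTs, long cellTs) {
        switch (marker) {
            case DELETE_FAMILY:
                // DeleteFamily masks every cell at or below its timestamp.
                return cellTs <= markerTs;
            case DELETE_FAMILY_VERSION:
                // DeleteFamilyVersion masks only cells at exactly its timestamp.
                // Older and newer versions of the row must survive compaction.
                return cellTs == markerTs;
            default:
                return false;
        }
    }
}
{code}

So when read repair writes a DeleteFamilyVersion marker at the unverified update's timestamp, compaction should remove only that one version; dropping the whole index row is what causes the data loss reproduced below.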
{code:java}
@Test
public void testDeleteFamilyVersion() throws Exception {
    // for the purpose of this test, only consider the case where max lookback is 0
    if (tableLevelMaxLooback == null || tableLevelMaxLooback != 0) {
        return;
    }
    if (multiCF) {
        return;
    }
    try (Connection conn = DriverManager.getConnection(getUrl())) {
        String tableName = "T_" + generateUniqueName();
        createTable(tableName);
        String indexName = "I_" + generateUniqueName();
        String indexDDL = String.format(
            "create index %s on %s (val1) include (val2, val3)",
            indexName, tableName);
        conn.createStatement().execute(indexDDL);
        updateRow(conn, tableName, "a1");
        // make the index row unverified and fail the data table update
        IndexRegionObserver.setFailDataTableUpdatesForTesting(true);
        try {
            updateColumn(conn, tableName, "a1", 2, "col2_xyz");
            conn.commit();
            fail("An exception should have been thrown");
        } catch (Exception ignored) {
            // expected: the data table update was forced to fail
        } finally {
            IndexRegionObserver.setFailDataTableUpdatesForTesting(false);
        }
        TestUtil.dumpTable(conn, TableName.valueOf(indexName));
        // do a read on the index which should trigger a read repair
        String dql = "select count(*) from " + tableName;
        try (ResultSet rs = conn.createStatement().executeQuery(dql)) {
            PhoenixResultSet prs = rs.unwrap(PhoenixResultSet.class);
            String explainPlan = QueryUtil.getExplainPlan(prs.getUnderlyingIterator());
            assertTrue(explainPlan.contains(indexName));
            while (rs.next()) {
                assertEquals(1, rs.getInt(1));
            }
        }
        TestUtil.dumpTable(conn, TableName.valueOf(indexName));
        flush(TableName.valueOf(indexName));
        majorCompact(TableName.valueOf(indexName));
        TestUtil.dumpTable(conn, TableName.valueOf(indexName));
        // run the same query again after compaction; the row should still be counted
        try (ResultSet rs = conn.createStatement().executeQuery(dql)) {
            PhoenixResultSet prs = rs.unwrap(PhoenixResultSet.class);
            String explainPlan = QueryUtil.getExplainPlan(prs.getUnderlyingIterator());
            assertTrue(explainPlan.contains(indexName));
            while (rs.next()) {
                assertEquals(1, rs.getInt(1));
            }
        }
    }
} {code}
--
This message was sent by Atlassian Jira
(v8.20.10#820010)