kevinrr888 commented on code in PR #5881:
URL: https://github.com/apache/accumulo/pull/5881#discussion_r2410830181
##########
test/src/main/java/org/apache/accumulo/test/functional/FunctionalTestUtils.java:
##########
@@ -89,6 +92,20 @@ public static int countRFiles(AccumuloClient c, String tableName) throws Excepti
    }
  }
+  public static List<String> getRFilePaths(ServerContext context, String tableName) {
+    return getStoredTabletFiles(context, tableName).stream().map(StoredTabletFile::getMetadataPath)
+        .collect(Collectors.toList());
+  }
Review Comment:
This appears to be unused, so it can be removed.
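If the metadata paths are ever needed later, a caller could derive them inline from the existing `getStoredTabletFiles` helper instead of keeping a dedicated method around. A rough sketch of such a hypothetical call site (assuming a `ServerContext context` and `String tableName` in scope):
```java
// Hypothetical call site sketch; derives RFile metadata paths on demand
// rather than through a dedicated, otherwise-unused helper.
List<String> rfilePaths = getStoredTabletFiles(context, tableName).stream()
    .map(StoredTabletFile::getMetadataPath).collect(Collectors.toList());
```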
##########
test/src/main/java/org/apache/accumulo/test/ComprehensiveTableOperationsIT.java:
##########
@@ -373,7 +374,63 @@ public void test_merge() throws Exception {
  @Test
  public void test_compact() throws Exception {
-    // TODO see issue#5679
+    // compact for user tables is tested in various ITs. One example is CompactionIT. Ensure test
+    // exists
+    assertDoesNotThrow(() -> Class.forName(CompactionIT.class.getName()));
+    // disable the GC to prevent automatic compactions on METADATA and ROOT tables
+    getCluster().getClusterControl().stopAllServers(ServerType.GARBAGE_COLLECTOR);
+    // set num compactors to 2 to ensure we can compact the system tables while having a slow Fate
+    // operation
+    getCluster().getClusterControl().stopAllServers(ServerType.COMPACTOR);
+    getCluster().getConfig().getClusterServerConfiguration().setNumDefaultCompactors(2);
+    getCluster().getClusterControl().startAllServers(ServerType.COMPACTOR);
+    try {
+      userTable = getUniqueNames(1)[0];
+      ops.create(userTable);
+
+      // create some RFiles for the METADATA and ROOT tables by creating some data in the user
+      // table, flushing that table, then the METADATA table, then the ROOT table
+      for (int i = 0; i < 3; i++) {
+        try (var bw = client.createBatchWriter(userTable)) {
+          var mut = new Mutation("r" + i);
+          mut.put("cf", "cq", "v");
+          bw.addMutation(mut);
+        }
+        ops.flush(userTable, null, null, true);
+        ops.flush(SystemTables.METADATA.tableName(), null, null, true);
+        ops.flush(SystemTables.ROOT.tableName(), null, null, true);
+      }
+
+      // Create a file for the scan ref and Fate tables
+      createScanRefTableRow();
+      ops.flush(SystemTables.SCAN_REF.tableName(), null, null, true);
+      createFateTableRow(userTable);
+      ops.flush(SystemTables.FATE.tableName(), null, null, true);
+
+      for (var sysTable : SystemTables.tableNames()) {
+        Set<StoredTabletFile> stfsBeforeCompact =
+            getStoredTabletFiles(getCluster().getServerContext(), sysTable);
+
+        log.info("Compacting {} with files: {}", sysTable, stfsBeforeCompact);
+        ops.compact(sysTable, null, null, true, true);
+        log.info("Completed compaction for " + sysTable);
+
+        // RFiles resulting from a compaction begin with 'A'. Wait until we see an RFile beginning
+        // with 'A' that was not present before the compaction.
+        Wait.waitFor(() -> {
+          var stfsAfterCompact = getStoredTabletFiles(getCluster().getServerContext(), sysTable);
+          log.info("Completed compaction for {} with new files {}", sysTable, stfsAfterCompact);
+          String regex = "^A.*\\.rf$";
+          var aStfsBeforeCompaction = stfsBeforeCompact.stream()
+              .filter(stf -> stf.getFileName().matches(regex)).collect(Collectors.toSet());
+          var aStfsAfterCompaction = stfsAfterCompact.stream()
+              .filter(stf -> stf.getFileName().matches(regex)).collect(Collectors.toSet());
+          return !Sets.difference(aStfsAfterCompaction, aStfsBeforeCompaction).isEmpty();
+        });
+      }
+    } finally {
+      getCluster().getClusterControl().startAllServers(ServerType.GARBAGE_COLLECTOR);
+    }
Review Comment:
```suggestion
} finally {
      getCluster().getClusterControl().startAllServers(ServerType.GARBAGE_COLLECTOR);
      getCluster().getClusterControl().stopAllServers(ServerType.COMPACTOR);
      getCluster().getConfig().getClusterServerConfiguration().setNumDefaultCompactors(1);
      getCluster().getClusterControl().startAllServers(ServerType.COMPACTOR);
}
```
This could restore the cluster to its original state (GC running, a single default compactor) before subsequent tests run.
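Not required for this PR, but an alternative sketch would be to put the restore in a JUnit 5 `@AfterEach` hook so it also runs if the setup before the `try` block fails partway through (this assumes the pre-test defaults are a running GC and a single compactor, as the suggestion above implies):
```java
// Hypothetical cleanup hook; assumes the pre-test defaults of a running GC
// and one compactor, mirroring the restore in the suggestion above.
@AfterEach
public void restoreClusterState() throws Exception {
  getCluster().getClusterControl().startAllServers(ServerType.GARBAGE_COLLECTOR);
  getCluster().getClusterControl().stopAllServers(ServerType.COMPACTOR);
  getCluster().getConfig().getClusterServerConfiguration().setNumDefaultCompactors(1);
  getCluster().getClusterControl().startAllServers(ServerType.COMPACTOR);
}
```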
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]