DomGarguilo commented on code in PR #2928:
URL: https://github.com/apache/accumulo/pull/2928#discussion_r969980045


##########
test/src/main/java/org/apache/accumulo/test/LargeSplitRowIT.java:
##########
@@ -130,38 +129,35 @@ public void automaticSplitWith250Same() throws Exception {
     // make a table and lower the configure properties
     final String tableName = getUniqueNames(1)[0];
     try (AccumuloClient client = Accumulo.newClient().from(getClientProperties()).build()) {
-      client.tableOperations().create(tableName);
-      client.tableOperations().setProperty(tableName, Property.TABLE_SPLIT_THRESHOLD.getKey(),
-          "10K");
-      client.tableOperations().setProperty(tableName, Property.TABLE_FILE_COMPRESSION_TYPE.getKey(),
-          "none");
-      client.tableOperations().setProperty(tableName,
-          Property.TABLE_FILE_COMPRESSED_BLOCK_SIZE.getKey(), "64");
-      client.tableOperations().setProperty(tableName, Property.TABLE_MAX_END_ROW_SIZE.getKey(),
-          "1000");
-
-      // Create a BatchWriter and key for a table entry that is longer than the allowed size for an
+      Map<String,String> props = new HashMap<>();

Review Comment:
   Yeah, it would work here too. I used `HashMap.put` here so that each property would be on its own line, which makes things more readable, whereas in the spot where I used `Map.of()` above there was only a single property. I could use `Map.of()` here with custom formatting to keep it readable. I like how concise `Map.of()` is, but it does seem to make things less readable when there are multiple keys and values that get pushed onto different lines.
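To illustrate the trade-off being discussed, here is a minimal sketch of the two styles side by side. The property key strings are written as plain literals so the snippet compiles without Accumulo on the classpath; the class name `PropsStyleDemo` and the literal keys are illustrative stand-ins, not part of the PR.

```java
import java.util.HashMap;
import java.util.Map;

public class PropsStyleDemo {
  public static void main(String[] args) {
    // Style 1: HashMap.put -- one property per line, each key/value pair
    // stays visually paired even as the map grows.
    Map<String,String> props = new HashMap<>();
    props.put("table.split.threshold", "10K");
    props.put("table.file.compress.type", "none");
    props.put("table.file.compress.blocksize", "64");
    props.put("table.split.endrow.size.max", "1000");

    // Style 2: Map.of() -- more concise, but the formatter may reflow the
    // alternating key/value arguments across lines, separating pairs.
    Map<String,String> sameProps = Map.of(
        "table.split.threshold", "10K",
        "table.file.compress.type", "none",
        "table.file.compress.blocksize", "64",
        "table.split.endrow.size.max", "1000");

    // Both styles produce equal maps; only readability differs.
    System.out.println(props.equals(sameProps)); // prints "true"
  }
}
```

Note that `Map.of()` also returns an immutable map and rejects null keys/values, which can matter beyond formatting.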



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
