keith-turner commented on code in PR #3980:
URL: https://github.com/apache/accumulo/pull/3980#discussion_r1440993933


##########
server/manager/src/main/java/org/apache/accumulo/manager/tableOps/bulkVer2/PrepBulkImport.java:
##########
@@ -209,7 +210,8 @@ private KeyExtent checkForMerge(final long tid, final Manager manager) throws Ex
         try (TabletsMetadata tabletsMetadata =
             TabletsMetadata.builder(manager.getContext()).forTable(bulkInfo.tableId)
                 .overlapping(startRow, null).checkConsistency().fetch(PREV_ROW).build()) {
-          return tabletsMetadata.stream().map(TabletMetadata::getExtent).iterator();
+          return tabletsMetadata.stream().map(TabletMetadata::getExtent)

Review Comment:
   Would be nice to avoid buffering the data into a list. Could do the following.
   
    * Modify TabletIterFactory to make it closeable.
    * Make the TabletIterFactory impl track everything it needs to close in a list.
    * In TabletIterFactory.close, close everything in the list.
    * Create the TabletIterFactory in the existing try w/ resources in the code.
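   The steps above could be sketched roughly as follows, using a generic stand-in rather than the real `TabletIterFactory`/`TabletsMetadata` types (all names here are illustrative, not the actual Accumulo API):
   
   ```java
   import java.util.ArrayList;
   import java.util.Iterator;
   import java.util.List;
   
   // Sketch of a closeable iterator factory: it tracks every underlying
   // resource it opens in a list, and close() releases them all. This lets
   // callers create the factory in a try w/ resources instead of buffering
   // the iterated data into a list just to outlive the resources.
   public class CloseableIterFactory implements AutoCloseable {
   
     // everything this factory opens, tracked for later cleanup
     private final List<AutoCloseable> toClose = new ArrayList<>();
   
     public Iterator<String> newIter(List<String> data) {
       // stand-in for a scanner/metadata stream the real impl would open
       AutoCloseable resource = () -> System.out.println("closed underlying resource");
       toClose.add(resource);
       return data.iterator();
     }
   
     @Override
     public void close() {
       // close everything this factory opened
       for (AutoCloseable c : toClose) {
         try {
           c.close();
         } catch (Exception e) {
           // a real impl would suppress/collect; the sketch just rethrows
           throw new RuntimeException(e);
         }
       }
       toClose.clear();
     }
   
     public static void main(String[] args) {
       try (CloseableIterFactory factory = new CloseableIterFactory()) {
         Iterator<String> it = factory.newIter(List.of("a", "b"));
         while (it.hasNext()) {
           System.out.println(it.next());
         }
       } // factory.close() runs here, releasing the underlying resources
     }
   }
   ```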



##########
core/src/main/java/org/apache/accumulo/core/util/Merge.java:
##########
@@ -256,7 +257,7 @@ protected Iterator<Size> getSizeIterator(AccumuloClient client, String tablename
       return tablets.stream().map(tm -> {
         long size = tm.getFilesMap().values().stream().mapToLong(DataFileValue::getSize).sum();
         return new Size(tm.getExtent(), size);
-      }).iterator();
+      }).collect(Collectors.toList()).iterator();

Review Comment:
   This getSizeIterator method is only used in one place. Could possibly inline it and create the TabletsMetadata in a try w/ resources in that method. Thinking that could avoid buffering the data into a list.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
