cecemei commented on code in PR #18950:
URL: https://github.com/apache/druid/pull/18950#discussion_r2752771078


##########
indexing-service/src/main/java/org/apache/druid/indexing/common/task/CompactionTask.java:
##########
@@ -440,8 +448,7 @@ public int getPriority()
   @Override
   public boolean isReady(TaskActionClient taskActionClient) throws Exception
   {
-    final List<DataSegment> segments = segmentProvider.findSegments(taskActionClient);
-    return determineLockGranularityAndTryLockWithSegments(taskActionClient, segments, segmentProvider::checkSegments);
+    return determineLockGranularityAndTryLock(taskActionClient, List.of(segmentProvider.interval));

Review Comment:
   Removed `determineLockGranularityAndTryLockWithSegments`.



##########
indexing-service/src/main/java/org/apache/druid/indexing/common/task/CompactionTask.java:
##########
@@ -440,8 +448,7 @@ public int getPriority()
   @Override
   public boolean isReady(TaskActionClient taskActionClient) throws Exception
   {
-    final List<DataSegment> segments = segmentProvider.findSegments(taskActionClient);
-    return determineLockGranularityAndTryLockWithSegments(taskActionClient, segments, segmentProvider::checkSegments);
+    return determineLockGranularityAndTryLock(taskActionClient, List.of(segmentProvider.interval));

Review Comment:
   Yes, this is on both paths. It can make a difference: say the compaction covers a day, but we only have segments for one hour. This actually broke the MSQ runner, since the task lock is not smart enough to figure out that it's effectively the same thing. But I think locking less than the requested interval dates from before the concurrent append & replace era; we should probably move things to the concurrent append & replace world.
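
   To make the difference between the two lock scopes concrete, here is a minimal, self-contained sketch. The `Interval` record below is a hypothetical simplified stand-in, not Druid's actual `Interval` class; the point is only that locking the requested compaction interval can be strictly broader than locking the intervals of the segments that actually exist:

   ```java
   import java.time.Duration;
   import java.time.Instant;

   public class LockIntervalSketch
   {
     // Hypothetical simplified half-open interval [start, end); not Druid's Interval.
     record Interval(Instant start, Instant end)
     {
       boolean encloses(Interval other)
       {
         return !other.start.isBefore(start) && !other.end.isAfter(end);
       }

       Duration length()
       {
         return Duration.between(start, end);
       }
     }

     static final Instant DAY_START = Instant.parse("2024-01-01T00:00:00Z");

     // The compaction spec asks for a whole day...
     static final Interval REQUESTED =
         new Interval(DAY_START, DAY_START.plus(Duration.ofHours(24)));

     // ...but segments only exist for the first hour of it.
     static final Interval EXISTING_SEGMENTS =
         new Interval(DAY_START, DAY_START.plus(Duration.ofHours(1)));

     public static void main(String[] args)
     {
       // Old path: lock what findSegments() returned (1 hour of segments).
       // New path: lock the requested interval (24 hours), which strictly
       // encloses the segment interval, so the resulting lock is broader.
       System.out.println(REQUESTED.encloses(EXISTING_SEGMENTS)); // true
       System.out.println(REQUESTED.length().toHours());          // 24
       System.out.println(EXISTING_SEGMENTS.length().toHours());  // 1
     }
   }
   ```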



##########
multi-stage-query/src/main/java/org/apache/druid/msq/indexing/destination/SegmentGenerationUtils.java:
##########
@@ -95,12 +100,30 @@ public static DataSchema makeDataSchemaForIngestion(
             destination.getDimensionSchemas()
         );
 
+    final TransformSpec transformSpec;
+    if (query.getFilter() != null) {
+      List<Transform> transforms = new ArrayList<>();
+      for (VirtualColumn vc : query.getVirtualColumns().getVirtualColumns()) {

Review Comment:
   I think there are some tests covering the virtual-column case; that's how I discovered this. But yeah, the `DataSchema` is probably only used in `CompactionState`, and it doesn't get a lot of attention.
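
   For context, here is a self-contained sketch of the shape of the new code in `SegmentGenerationUtils`: only when the query has a filter do we build a transform spec, turning each virtual column into a transform so the filter can reference it at ingest time. The `Transform`, `TransformSpec`, and `VirtualColumn` records below are hypothetical simplified stand-ins, not Druid's actual classes:

   ```java
   import java.util.ArrayList;
   import java.util.List;

   public class TransformSpecSketch
   {
     // Hypothetical stand-ins for Druid's transform/virtual-column classes.
     record Transform(String name, String expression) {}

     record TransformSpec(String filter, List<Transform> transforms) {}

     record VirtualColumn(String outputName, String expression) {}

     /**
      * Mirrors the diff's structure: no filter means no transform spec;
      * otherwise each of the query's virtual columns becomes a transform.
      */
     static TransformSpec buildTransformSpec(String filter, List<VirtualColumn> virtualColumns)
     {
       if (filter == null) {
         return null; // no filter -> no transform spec, as in the diff
       }
       List<Transform> transforms = new ArrayList<>();
       for (VirtualColumn vc : virtualColumns) {
         transforms.add(new Transform(vc.outputName(), vc.expression()));
       }
       return new TransformSpec(filter, transforms);
     }

     public static void main(String[] args)
     {
       TransformSpec spec = buildTransformSpec(
           "v0 > 10",
           List.of(new VirtualColumn("v0", "x + y"))
       );
       System.out.println(spec.transforms().size()); // 1
       System.out.println(spec.filter());            // v0 > 10
     }
   }
   ```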



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

