gianm commented on code in PR #18232:
URL: https://github.com/apache/druid/pull/18232#discussion_r2239051909
##########
sql/src/test/java/org/apache/druid/sql/calcite/CalciteArraysQueryTest.java:
##########
@@ -3875,7 +3876,7 @@ public void testArrayAggGroupByArrayContainsSubquery()
}
- @NotYetSupported(Modes.UNNEST_INLINED)
+ @NotYetSupported({Modes.UNNEST_INLINED, Modes.DD_UNNEST_INLINED})
Review Comment:
Any idea what all of these various unnest problems are caused by? Seems
unchill.
##########
sql/src/test/java/org/apache/druid/sql/calcite/CalciteNestedDataQueryTest.java:
##########
@@ -883,6 +886,7 @@ public void testGroupByRootSingleTypeString()
);
}
+ @NotYetSupported(Modes.DART_SECOND_SEGMENT_NOT_SCANNED)
Review Comment:
The `DART_SECOND_SEGMENT_NOT_SCANNED` failures are happening due to a
problem in `TestTimelineServerView`: when building a timeline, it exits after
adding the first segment. If that is fixed, then this `@NotYetSupported` mode
won't be needed.
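To make the suspected bug concrete, here is a hypothetical sketch (this is not the actual `TestTimelineServerView` code, and the class and method names are made up for illustration) of the pattern described above, where a `return` placed inside the segment loop means the built timeline only ever contains the first segment:

```java
import java.util.ArrayList;
import java.util.List;

public class TimelineSketch
{
  // Buggy variant: the return sits inside the loop, so the timeline
  // is built from only the first segment.
  static List<String> buildTimelineBuggy(List<String> segments)
  {
    List<String> timeline = new ArrayList<>();
    for (String segment : segments) {
      timeline.add(segment);
      return timeline; // bug: exits after adding the first segment
    }
    return timeline;
  }

  // Fixed variant: keep looping so every segment lands in the timeline.
  static List<String> buildTimelineFixed(List<String> segments)
  {
    List<String> timeline = new ArrayList<>();
    for (String segment : segments) {
      timeline.add(segment);
    }
    return timeline;
  }
}
```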
##########
extensions-core/multi-stage-query/src/main/java/org/apache/druid/msq/logical/stages/SegmentMapStage.java:
##########
@@ -0,0 +1,51 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.msq.logical.stages;
+
+import org.apache.druid.msq.exec.StageProcessor;
+import org.apache.druid.msq.logical.LogicalInputSpec;
+import org.apache.druid.msq.logical.StageMaker;
+import org.apache.druid.sql.calcite.planner.querygen.DruidQueryGenerator.DruidNodeStack;
+import org.apache.druid.sql.calcite.planner.querygen.SourceDescProducer.SourceDesc;
+
+import java.util.List;
+
+public class SegmentMapStage extends AbstractFrameProcessorStage
Review Comment:
Does the existence of this class mean we are creating a stage specifically
for applying segment map fns? That seems inefficient, because they could be
folded into the next stage. Are you thinking the folding is going to be done
in some future work?
Also, does this stage shuffle? I am not quite seeing where that gets set up.
##########
extensions-core/multi-stage-query/src/main/java/org/apache/druid/msq/logical/StageMaker.java:
##########
@@ -91,13 +95,8 @@ private StageDefinitionBuilder buildStageInternal(LogicalStage stage)
private StageDefinitionBuilder buildFrameProcessorStage(AbstractFrameProcessorStage frameProcessorStage)
{
List<LogicalInputSpec> inputs = frameProcessorStage.getInputSpecs();
- List<InputSpec> inputSpecs = new ArrayList<>();
- for (LogicalInputSpec dagInputSpec : inputs) {
- inputSpecs.add(dagInputSpec.toInputSpec(this));
- }
+ StageDefinitionBuilder sdb = newStageDefinitionBuilder(inputs);
StageProcessor<?, ?> stageProcessor = frameProcessorStage.buildStageProcessor(this);
- StageDefinitionBuilder sdb = newStageDefinitionBuilder();
- sdb.inputs(inputSpecs);
sdb.signature(frameProcessorStage.getLogicalRowSignature());
sdb.processor(stageProcessor);
sdb.shuffleSpec(MixShuffleSpec.instance());
Review Comment:
The Mix spec takes everything down to a single partition, which we want to
avoid if there might be a lot of data. Another option is to have _no_
shuffle spec, which means the output partitions are the same as the input
partitions. In principle, stages with no shuffle spec could be folded into the
next stage, although that doesn't happen yet.
Anyway, maybe that makes more sense for the SegmentMapStage?
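As a toy model of the trade-off (this is deliberately not Druid code; it just models partitions as lists of rows), the two choices behave like this:

```java
import java.util.ArrayList;
import java.util.List;

public class ShuffleChoice
{
  // MixShuffleSpec-style: concatenate all input partitions into a single
  // output partition, concentrating all the data in one place.
  static List<List<String>> mix(List<List<String>> inputPartitions)
  {
    List<String> single = new ArrayList<>();
    for (List<String> partition : inputPartitions) {
      single.addAll(partition);
    }
    return List.of(single);
  }

  // No shuffle spec: output partitions are the same as the input partitions,
  // so downstream parallelism is preserved.
  static List<List<String>> noShuffle(List<List<String>> inputPartitions)
  {
    return inputPartitions;
  }
}
```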
##########
sql/src/test/java/org/apache/druid/sql/calcite/CalciteArraysQueryTest.java:
##########
@@ -2651,6 +2651,7 @@ public void testArrayAggMultiValue()
);
}
+ @NotYetSupported(Modes.DD_RESULT_MISMATCH_FLOAT_DOUBLE)
Review Comment:
The result has a `0.1` in it, which cannot be represented exactly as a float.
I feel like this could be fixed with a results verifier that has some looseness
when verifying floating-point values: round to 6 decimal places for floats, 15
for doubles?
Or, we could include the rounding in the SQL itself, similar to
https://github.com/apache/druid/pull/18183.
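The loose-verifier idea could look something like this hypothetical helper (not existing Druid test code; the class and method names are made up), rounding floats to 6 decimal places and doubles to 15 before comparing:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class ApproxResultComparer
{
  // Round a value to a fixed number of decimal places.
  static double round(double value, int places)
  {
    return BigDecimal.valueOf(value).setScale(places, RoundingMode.HALF_UP).doubleValue();
  }

  // A float result like 0.1f widens to 0.10000000149..., so an exact
  // comparison against the expected 0.1 fails; rounding both sides to
  // 6 places makes them match.
  static boolean matchesFloat(double expected, float actual)
  {
    return round(expected, 6) == round((double) actual, 6);
  }

  static boolean matchesDouble(double expected, double actual)
  {
    return round(expected, 15) == round(actual, 15);
  }
}
```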
##########
sql/src/test/java/org/apache/druid/sql/calcite/CalciteJoinQueryTest.java:
##########
@@ -5697,7 +5723,7 @@ public void testPlanWithInFilterMoreThanInSubQueryThreshold()
);
}
- @NotYetSupported(Modes.SORT_REMOVE_TROUBLE)
+ @NotYetSupported({Modes.SORT_REMOVE_TROUBLE, Modes.DD_SORT_REMOVE_TROUBLE})
Review Comment:
The error message is something like "Calcite assertion violated". Do we have
a buggy rule?
##########
sql/src/test/java/org/apache/druid/sql/calcite/CalciteJoinQueryTest.java:
##########
@@ -5135,6 +5156,7 @@ public void testCountOnSemiJoinSingleColumn(Map<String, Object> queryContext)
);
}
+ @NotYetSupported(Modes.DD_RESTRICTED_DATASOURCE_SUPPORT2)
Review Comment:
What is the problem here? The base test is expecting an error about how the
rhs of a join can't be restricted, but I think the annotation says that in
decoupled-dart, we see "Unauthorized" instead? Why is that?
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]