viirya commented on code in PR #1074:
URL: https://github.com/apache/datafusion-comet/pull/1074#discussion_r1837088965
##########
native/spark-expr/src/cast.rs:
##########
@@ -811,6 +819,40 @@ fn is_datafusion_spark_compatible(
}
}
+/// Cast between struct types based on logic in
+/// `org.apache.spark.sql.catalyst.expressions.Cast#castStruct`.
+///
+/// This can change the types of fields within the struct as well as drop
+/// struct fields. The `from_type` and `to_type` do not need to have the same
+/// number of fields, but the `from_type` must have at least as many fields
+/// as the `to_type`.
+fn cast_struct_to_struct(
+ array: &StructArray,
+ from_type: &DataType,
+ to_type: &DataType,
+ eval_mode: EvalMode,
+ timezone: String,
+ allow_incompat: bool,
+) -> DataFusionResult<ArrayRef> {
+ match (from_type, to_type) {
+ (DataType::Struct(from_fields), DataType::Struct(to_fields)) => {
+ assert!(to_fields.len() <= from_fields.len());
Review Comment:
Hmm, why do we have this assert? In Spark, the `Cast` expression requires the
from_fields length to equal the to_fields length, so we shouldn't encounter a
case where they are unequal in an analyzed query plan.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]