andygrove opened a new issue, #3088:
URL: https://github.com/apache/datafusion-comet/issues/3088

   ## What is the problem the feature request solves?
   
   > **Note:** This issue was generated with AI assistance. The specification 
details have been extracted from Spark documentation and may need verification.
   
   Comet does not currently support Spark's `date_format` function (backed by 
the `DateFormatClass` expression), causing queries that use it to fall back to 
Spark's JVM execution instead of running natively on DataFusion.
   
   The `DateFormatClass` expression backs the SQL function `date_format`: it 
formats timestamp values into strings according to a customizable date/time 
format pattern.
   
   Supporting this expression would allow more Spark workloads to benefit from 
Comet's native acceleration.
   
   ## Describe the potential solution
   
   ### Spark Specification
   
   **Syntax:**
   ```sql
   date_format(timestamp_expr, format_string)
   ```
   
   ```scala
   // DataFrame API
   df.select(date_format(col("timestamp_column"), "yyyy-MM-dd HH:mm:ss"))
   ```
   
   **Arguments:**
   | Argument | Type | Description |
   |----------|------|-------------|
   | left (timestamp_expr) | TimestampType | The timestamp value to be 
formatted |
   | right (format_string) | StringType | The format pattern string (e.g., 
"yyyy-MM-dd", "MM/dd/yyyy HH:mm") |
   | timeZoneId | Option[String] | Optional timezone identifier for formatting 
(internal parameter) |
   
   **Return Type:** StringType - Returns a UTF8String containing the formatted 
timestamp representation.
   
   **Supported Data Types:**
   - **Input**: TimestampType for the timestamp value, StringType with 
collation support for the format pattern
   - **Output**: StringType (UTF8String)
   
   **Edge Cases:**
   - **Null handling**: Returns null if either timestamp or format string is 
null (nullIntolerant = true)
   - **Invalid format patterns**: May throw runtime exceptions for malformed 
format strings
   - **Timezone awareness**: Uses provided timezone or falls back to system 
default
   - **Legacy format support**: Maintains compatibility with SimpleDateFormat 
patterns through LegacyDateFormats
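   
   Since Spark 3.0, `date_format` patterns follow 
`java.time.format.DateTimeFormatter` semantics (the `SimpleDateFormat` behavior 
mentioned above is only kept behind Spark's legacy time-parser policy), so a 
native implementation must match `DateTimeFormatter`, not `strftime`. A minimal 
sketch of the reference behavior in plain Scala (illustrative only, not Comet 
code):
   
   ```scala
   import java.time.{LocalDateTime, ZoneId}
   import java.time.format.DateTimeFormatter
   import java.util.Locale
   
   object DateFormatSemantics {
     // Mirrors Spark 3.0+ pattern semantics for non-null inputs by delegating
     // to java.time.format.DateTimeFormatter. Spark itself returns null when
     // either the timestamp or the pattern is null (nullIntolerant = true),
     // and throws at runtime for malformed patterns, as ofPattern does here.
     def format(ts: LocalDateTime, pattern: String, zone: ZoneId): String =
       ts.atZone(zone).format(DateTimeFormatter.ofPattern(pattern, Locale.US))
   }
   ```
   
   For example, formatting `2024-03-15T09:30` with pattern `MMMM` yields the 
full month name, matching the `month_name` example above.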
   
   **Examples:**
   ```sql
   -- Format timestamp as date string
   SELECT date_format(current_timestamp(), 'yyyy-MM-dd') as formatted_date;
   
   -- Format with custom pattern
   SELECT date_format(timestamp_col, 'MM/dd/yyyy HH:mm:ss') as custom_format
   FROM events_table;
   
   -- Format with different patterns
   SELECT 
     date_format(created_at, 'yyyy') as year,
     date_format(created_at, 'MMMM') as month_name
   FROM transactions;
   ```
   
   ```scala
   // DataFrame API usage
   import org.apache.spark.sql.functions._
   
   // Basic date formatting
   df.select(date_format(col("timestamp"), "yyyy-MM-dd"))
   
   // Multiple format patterns
   df.select(
     date_format(col("created_at"), "yyyy-MM-dd").as("date"),
     date_format(col("created_at"), "HH:mm:ss").as("time")
   )
   ```
   
   ### Implementation Approach
   
   See the [Comet guide on adding new 
expressions](https://datafusion.apache.org/comet/contributor-guide/adding_a_new_expression.html)
 for detailed instructions.
   
   1. **Scala Serde**: Add expression handler in 
`spark/src/main/scala/org/apache/comet/serde/`
   2. **Register**: Add to appropriate map in `QueryPlanSerde.scala`
   3. **Protobuf**: Add message type in `native/proto/src/proto/expr.proto` if 
needed
   4. **Rust**: Implement in `native/spark-expr/src/` (check if DataFusion has 
built-in support first)
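   
   A practical note for step 4: if the Rust side formats timestamps with the 
`chrono` crate, Spark's java.time-style pattern tokens would need translation 
into `chrono`'s strftime-style specifiers, with unsupported tokens declined so 
the expression falls back to Spark. A hypothetical sketch of such a translation 
at the serde layer (token map and names are illustrative, not Comet APIs):
   
   ```scala
   object SparkToChronoPattern {
     // Illustrative subset of java.time pattern tokens and their chrono
     // strftime equivalents. A real implementation would cover many more.
     private val tokenMap: Map[String, String] = Map(
       "yyyy" -> "%Y",
       "MM"   -> "%m",
       "dd"   -> "%d",
       "HH"   -> "%H",
       "mm"   -> "%M",
       "ss"   -> "%S"
     )
   
     // Translate runs of pattern letters as whole tokens; any run not in the
     // map yields None so the caller can decline and fall back to Spark.
     def toStrftime(pattern: String): Option[String] = {
       val out = new StringBuilder
       var i = 0
       while (i < pattern.length) {
         val c = pattern(i)
         if (c.isLetter) {
           var j = i
           while (j < pattern.length && pattern(j) == c) j += 1
           tokenMap.get(pattern.substring(i, j)) match {
             case Some(spec) => out.append(spec)
             case None       => return None // unsupported token, e.g. "MMMM"
           }
           i = j
         } else {
           out.append(c)
           i += 1
         }
       }
       Some(out.toString)
     }
   }
   ```
   
   Here `toStrftime("yyyy-MM-dd")` yields `Some("%Y-%m-%d")`, while a token 
like `MMMM` (full month name) is declined rather than mistranslated.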
   
   
   ## Additional context
   
   **Difficulty:** Medium
   **Spark Expression Class:** 
`org.apache.spark.sql.catalyst.expressions.DateFormatClass`
   
   **Related:**
   - UnixTimestamp - Convert formatted strings back to timestamps
   - FromUnixTime - Format Unix timestamps to strings  
   - DateAdd/DateSub - Date arithmetic operations
   - ToDate/ToTimestamp - Date/timestamp conversion functions
   
   ---
   *This issue was auto-generated from Spark reference documentation.*
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
