andygrove opened a new issue, #3089:
URL: https://github.com/apache/datafusion-comet/issues/3089

   ## What is the problem the feature request solves?
   
   > **Note:** This issue was generated with AI assistance. The specification 
details have been extracted from Spark documentation and may need verification.
   
   Comet does not currently support the Spark `date_from_unix_date` function, so queries that use it fall back to Spark's JVM execution instead of running natively on DataFusion.
   
   The `DateFromUnixDate` expression converts a Unix date, an integer count of days since the Unix epoch (January 1, 1970), to the corresponding SQL DATE value.
   
   Supporting this expression would allow more Spark workloads to benefit from 
Comet's native acceleration.
   
   ## Describe the potential solution
   
   ### Spark Specification
   
   **Syntax:**
   ```sql
   date_from_unix_date(unix_date)
   ```
   
   **Arguments:**
   | Argument | Type | Description |
   |----------|------|-------------|
   | unix_date | INTEGER | Number of days since Unix epoch (1970-01-01) |
   
   **Return Type:** `DATE` - the date that falls the given number of days after (or, for negative inputs, before) the Unix epoch.
   
   **Supported Data Types:**
   - Input: `IntegerType` only
   - Spark applies implicit casts to convert compatible narrower numeric types to `IntegerType`
   
   **Edge Cases:**
   - **Null handling**: The expression is null-intolerant, so a null input produces a null output
   - **Integer overflow**: The input is a 32-bit integer, so the day offset is limited to the `INTEGER` range
   - **Negative values**: Negative integers are accepted and represent dates before the Unix epoch
   - **Out of range dates**: Extremely large positive or negative inputs may produce dates outside the range Spark can process
   
   **Examples:**
   ```sql
   -- Convert Unix date to actual date
   SELECT date_from_unix_date(1);
   -- Result: 1970-01-02
   
   -- Convert multiple Unix dates
   SELECT date_from_unix_date(0), date_from_unix_date(365), date_from_unix_date(-1);
   -- Result: 1970-01-01, 1971-01-01, 1969-12-31
   
   -- Handle null input
   SELECT date_from_unix_date(NULL);
   -- Result: NULL
   ```
   
   ```scala
   // DataFrame API usage
   import org.apache.spark.sql.functions.expr
   
   df.select(expr("date_from_unix_date(unix_day_column)"))
   
   // Using with literal values
   df.select(expr("date_from_unix_date(1)").as("epoch_plus_one"))
   ```
   
   ### Implementation Approach
   
   See the [Comet guide on adding new expressions](https://datafusion.apache.org/comet/contributor-guide/adding_a_new_expression.html) for detailed instructions.
   
   1. **Scala Serde**: Add expression handler in 
`spark/src/main/scala/org/apache/comet/serde/`
   2. **Register**: Add to appropriate map in `QueryPlanSerde.scala`
   3. **Protobuf**: Add message type in `native/proto/src/proto/expr.proto` if 
needed
   4. **Rust**: Implement in `native/spark-expr/src/` (check if DataFusion has built-in support first; see the sketch below)
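
   Arrow's `Date32` type already stores dates as a 32-bit count of days since the Unix epoch, which is exactly the input this expression receives, so the native kernel may reduce to a plain `Int32` to `Date32` cast. The arrow `cast` compute kernel supports that conversion and preserves nulls, matching Spark's null-intolerant behavior. A minimal, unverified sketch (the `date_from_unix_date` helper below is illustrative, not an existing Comet function, and none of the serde/protobuf plumbing is shown):

   ```rust
   use arrow::array::{Array, ArrayRef, Date32Array, Int32Array};
   use arrow::compute::cast;
   use arrow::datatypes::DataType;

   /// Illustrative kernel: Date32 is defined as i32 days since the Unix
   /// epoch, so the conversion is a reinterpreting Int32 -> Date32 cast.
   fn date_from_unix_date(days: &Int32Array) -> arrow::error::Result<ArrayRef> {
       cast(days, &DataType::Date32)
   }

   fn main() -> arrow::error::Result<()> {
       // Mirrors the SQL examples above: 0, 365, -1, and NULL.
       let input = Int32Array::from(vec![Some(0), Some(365), Some(-1), None]);
       let result = date_from_unix_date(&input)?;
       let dates = result.as_any().downcast_ref::<Date32Array>().unwrap();
       for i in 0..dates.len() {
           if dates.is_null(i) {
               println!("NULL");
           } else {
               // value_as_date turns the stored day offset into a calendar date.
               println!("{}", dates.value_as_date(i).unwrap());
           }
       }
       // Prints 1970-01-01, 1971-01-01, 1969-12-31, NULL
       Ok(())
   }
   ```

   If this holds, the Scala serde may be able to map the expression onto an existing cast rather than a new native UDF; the main thing to verify against Spark is behavior at the extremes of the `INTEGER` range.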
   
   
   ## Additional context
   
   **Difficulty:** Medium

   **Spark Expression Class:** `org.apache.spark.sql.catalyst.expressions.DateFromUnixDate`
   
   **Related:**
   - `unix_date()` - Converts date to Unix date integer (inverse operation)
   - `date_add()` - Adds days to a date  
   - `from_unixtime()` - Converts Unix timestamp to timestamp value
   - `to_date()` - Converts string to date value
   
   ---
   *This issue was auto-generated from Spark reference documentation.*
   

