waitingkuo opened a new issue, #3997:
URL: https://github.com/apache/arrow-datafusion/issues/3997

   
   
   `date_part` currently returns `i32`, which can lose information, e.g.
   
   ```bash
   ❯ select date_part('second', timestamp '2000-01-01T00:00:00.1');
   +--------------------------------------------------------+
   | datepart(Utf8("second"),Utf8("2000-01-01T00:00:00.1")) |
   +--------------------------------------------------------+
   | 0                                                      |
   +--------------------------------------------------------+
   1 row in set. Query took 0.000 seconds.
   ```
   
   while PostgreSQL returns double precision:
   ```bash
   willy=# select date_part('second', timestamp '2000-01-01T00:00:00.1');
    date_part 
   -----------
          0.1
   (1 row)
   ```
   
   
   Is it recommended to follow PostgreSQL's return type? If so, perhaps we could handle the change in a separate PR.
   
   **Describe the solution you'd like**
   Change `date_part` to return double precision.
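   
   A minimal sketch of the idea (not DataFusion's actual implementation; the function name and the nanosecond-timestamp handling are assumptions): compute the second field as `f64` so the fraction survives.
   
   ```rust
   /// Sketch only: seconds within the current minute as f64, keeping the fraction.
   fn second_as_f64(ts_ns: i64) -> f64 {
       let ns_in_minute = ts_ns.rem_euclid(60 * 1_000_000_000);
       ns_in_minute as f64 / 1_000_000_000.0
   }
   
   fn main() {
       // 2000-01-01T00:00:00.1 UTC as nanoseconds since the Unix epoch.
       let ts_ns: i64 = 946_684_800_100_000_000;
       println!("second = {}", second_as_f64(ts_ns)); // prints 0.1
   }
   ```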
   
   **Describe alternatives you've considered**
   
   Perhaps we could instead return decimals, to align with `extract`, which the PostgreSQL docs recommend over `date_part`:
   ```
   For historical reasons, the date_part function returns values of type double precision.
   This can result in a loss of precision in certain uses. Using extract is recommended instead.
   ```
   https://www.postgresql.org/docs/current/functions-datetime.html
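   
   A rough sketch of that alternative (purely illustrative; the scale and function name are assumptions): store the second field as an unscaled integer, in the style of Arrow's `Decimal128`, so 0.1 s is represented exactly rather than as a binary float.
   
   ```rust
   /// Sketch only: the second field as an unscaled Decimal128-style integer
   /// with scale 9, so 0.1 s is stored exactly as 100_000_000.
   fn second_as_decimal_unscaled(ts_ns: i64) -> i128 {
       ts_ns.rem_euclid(60 * 1_000_000_000) as i128
   }
   
   fn main() {
       let ts_ns: i64 = 946_684_800_100_000_000; // 2000-01-01T00:00:00.1 UTC
       let unscaled = second_as_decimal_unscaled(ts_ns);
       println!("unscaled = {unscaled}, scale = 9"); // 100000000 @ scale 9 == 0.100000000 s
   }
   ```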
   
   

