xinrong-databricks opened a new pull request, #37168: URL: https://github.com/apache/spark/pull/37168
### What changes were proposed in this pull request?
pandas scalars are intentionally not reimplemented in the pandas API on Spark; users may use pandas scalars directly. However, the error messages are confusing when users mistakenly assume pandas scalars are reimplemented, for example when calling `ps.Timedelta`. This PR improves those error messages.

### Why are the changes needed?
Error messages should be clear and explain how to fix the error. That improves usability and debuggability and, ultimately, user adoption.

### Does this PR introduce _any_ user-facing change?
Yes. Error messages change. For example:

### How was this patch tested?
Unit tests.

--
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
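As a sketch of the intended usage pattern the PR's error messages point toward (plain pandas is shown here; the `pyspark.pandas` lines are commented out because they assume a running Spark session):

```python
import pandas as pd
# import pyspark.pandas as ps  # requires a Spark runtime

# pandas scalars such as Timedelta and Timestamp are used directly;
# they are intentionally NOT reimplemented in the pandas API on Spark.
delta = pd.Timedelta(days=1, hours=2)
print(delta.days)  # the whole-day component: 1

# With the pandas API on Spark, the same pandas scalar is passed in
# directly, e.g. (sketch, requires pyspark):
#   psser = ps.Series([pd.Timestamp("2022-01-01")])
#   psser + pd.Timedelta(days=1)
#
# Calling ps.Timedelta instead raises an error; per this PR, that error
# message should now direct users to pd.Timedelta.
```

The design choice here is to reuse pandas scalars as-is rather than mirror them under the `ps` namespace, so the fix is better guidance in the error message rather than a new implementation.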