[ https://issues.apache.org/jira/browse/SPARK-24673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16526307#comment-16526307 ]
Antonio Murgia commented on SPARK-24673:
----------------------------------------

Looks doable. Should I go with a method overload, resulting in:
{code:java}
functions.from_utc_timestamp(ts: Column, tz: String)
functions.from_utc_timestamp(ts: Column, tz: Column)
{code}
Or is there some limitation I am not aware of? Also, do you think
{code:java}
to_utc_timestamp{code}
should receive the same treatment?

> scala sql function from_utc_timestamp second argument could be Column instead
> of String
> ---------------------------------------------------------------------------------------
>
>                 Key: SPARK-24673
>                 URL: https://issues.apache.org/jira/browse/SPARK-24673
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 2.3.1
>            Reporter: Antonio Murgia
>            Priority: Minor
>
> As of 2.3.1, the Scala API for the built-in function from_utc_timestamp
> (org.apache.spark.sql.functions#from_utc_timestamp) is less powerful than its
> SQL counterpart. In particular, given a dataset/dataframe with the following
> schema:
> {code:java}
> CREATE TABLE MY_TABLE (
>   ts TIMESTAMP,
>   tz STRING
> ){code}
> from the SQL API I can do something like:
> {code:java}
> SELECT FROM_UTC_TIMESTAMP(TS, TZ){code}
> while from the programmatic API I simply cannot, because the second argument of
> {code:java}
> functions.from_utc_timestamp(ts: Column, tz: String){code}
> is a plain String, so the timezone cannot vary per row.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
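To make the difference concrete, here is a minimal plain-JVM sketch (no Spark dependency, just java.time) of the semantics a Column-typed timezone argument would expose: interpreting one UTC timestamp under a zone that differs per row, as `SELECT FROM_UTC_TIMESTAMP(TS, TZ)` already allows in SQL. The object name, helper name, and sample rows below are illustrative, not part of the Spark API.

```scala
import java.time.{LocalDateTime, ZoneId, ZoneOffset}

object FromUtcTimestampSketch {
  // Mirrors FROM_UTC_TIMESTAMP's intent: treat `utc` as a UTC wall-clock
  // time and render the same instant in the zone `tz`.
  def fromUtcTimestamp(utc: LocalDateTime, tz: String): LocalDateTime =
    utc.atOffset(ZoneOffset.UTC)
      .atZoneSameInstant(ZoneId.of(tz))
      .toLocalDateTime

  def main(args: Array[String]): Unit = {
    // Made-up rows: same timestamp, a different timezone on each row --
    // exactly the case a String-typed `tz` parameter cannot express.
    val rows = Seq(
      (LocalDateTime.parse("2018-06-28T12:00:00"), "America/Los_Angeles"),
      (LocalDateTime.parse("2018-06-28T12:00:00"), "Asia/Tokyo")
    )
    rows.foreach { case (ts, tz) =>
      println(s"$tz -> ${fromUtcTimestamp(ts, tz)}")
    }
  }
}
```

With the proposed `from_utc_timestamp(ts: Column, tz: Column)` overload, the per-row lookup above would instead happen inside the Spark expression, e.g. something like `from_utc_timestamp(col("ts"), col("tz"))`.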