[jira] [Assigned] (SPARK-37738) PySpark date_add only accepts an integer as it's second parameter

2021-12-30 Thread Maciej Szymkiewicz (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-37738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maciej Szymkiewicz reassigned SPARK-37738:
--

Assignee: Daniel Davies

> PySpark date_add only accepts an integer as it's second parameter
> -----------------------------------------------------------------
>
> Key: SPARK-37738
> URL: https://issues.apache.org/jira/browse/SPARK-37738
> Project: Spark
>  Issue Type: Improvement
>  Components: PySpark, SQL
>Affects Versions: 3.2.0
>Reporter: Daniel Davies
>Assignee: Daniel Davies
>Priority: Minor
> Fix For: 3.3.0
>
>
> Hello,
> I have a quick question regarding the PySpark date_add function (and its 
> related functions, I guess). Using date_add as an example, the PySpark API 
> takes a [column, and an int as its second parameter|#L2203].
> This feels a bit weird, since the underlying SQL expression can also take a 
> column as the second parameter; in fact, to my limited understanding, the Scala 
> [API 
> itself|https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/functions.scala#L3114]
>  just calls lit on this second parameter anyway. Is there a reason date_add 
> doesn't support a column type as the second parameter in PySpark?
> This isn't a major issue, as the alternative is of course to just use 
> date_add in an expr statement; I just wondered what the usability trade-off 
> is. I'm happy to contribute a PR if this is something that would be 
> worthwhile pursuing.
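The Scala behavior the reporter points at (date_add simply wrapping its second argument in lit) can be sketched in plain Python. Column, lit, and date_add below are toy stand-ins for illustration, not the real PySpark API:

```python
import datetime

class Column:
    """Toy stand-in for a Spark Column holding a literal value."""
    def __init__(self, value):
        self.value = value

def lit(value):
    # Mirror Spark's lit(): wrap a plain value in a Column,
    # and pass an existing Column through unchanged.
    return value if isinstance(value, Column) else Column(value)

def date_add(start: datetime.date, days) -> datetime.date:
    # Accept either an int or a Column, as the Scala API does,
    # by coercing the argument with lit() before evaluating.
    return start + datetime.timedelta(days=lit(days).value)

# Both call styles work once the coercion is in place:
print(date_add(datetime.date(2021, 12, 30), 3))          # 2022-01-02
print(date_add(datetime.date(2021, 12, 30), Column(3)))  # 2022-01-02
```

Because lit is idempotent on Columns, widening the Python signature from int to "int or Column" needs no branching at the call sites, which is presumably why the Scala API can accept both so cheaply.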



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-37738) PySpark date_add only accepts an integer as it's second parameter

2021-12-30 Thread Maciej Szymkiewicz (Jira)



Maciej Szymkiewicz reassigned SPARK-37738:
--

Assignee: (was: Maciej Szymkiewicz)




[jira] [Assigned] (SPARK-37738) PySpark date_add only accepts an integer as it's second parameter

2021-12-30 Thread Maciej Szymkiewicz (Jira)



Maciej Szymkiewicz reassigned SPARK-37738:
--

Assignee: Maciej Szymkiewicz




[jira] [Assigned] (SPARK-37738) PySpark date_add only accepts an integer as it's second parameter

2021-12-27 Thread Apache Spark (Jira)



Apache Spark reassigned SPARK-37738:


Assignee: Apache Spark




[jira] [Assigned] (SPARK-37738) PySpark date_add only accepts an integer as it's second parameter

2021-12-27 Thread Apache Spark (Jira)



Apache Spark reassigned SPARK-37738:


Assignee: (was: Apache Spark)
