[jira] [Commented] (SPARK-41835) Implement `transform_keys` function

2023-01-12 Thread Apache Spark (Jira)


[ https://issues.apache.org/jira/browse/SPARK-41835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17676083#comment-17676083 ]

Apache Spark commented on SPARK-41835:
--

User 'zhengruifeng' has created a pull request for this issue:
https://github.com/apache/spark/pull/39535

> Implement `transform_keys` function
> ---
>
> Key: SPARK-41835
> URL: https://issues.apache.org/jira/browse/SPARK-41835
> Project: Spark
>  Issue Type: Sub-task
>  Components: Connect, PySpark
>Affects Versions: 3.4.0
>Reporter: Sandeep Singh
>Priority: Major
>
> {code:java}
> File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/functions.py", line 1611, in pyspark.sql.connect.functions.transform_keys
> Failed example:
>     df.select(transform_keys(
>         "data", lambda k, _: upper(k)).alias("data_upper")
>     ).show(truncate=False)
> Exception raised:
>     Traceback (most recent call last):
>       File "/usr/local/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/doctest.py", line 1350, in __run
>         exec(compile(example.source, filename, "single",
>       File "", line 1, in 
>         df.select(transform_keys(
>       File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/dataframe.py", line 534, in show
>         print(self._show_string(n, truncate, vertical))
>       File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/dataframe.py", line 423, in _show_string
>         ).toPandas()
>       File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/dataframe.py", line 1031, in toPandas
>         return self._session.client.to_pandas(query)
>       File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/client.py", line 413, in to_pandas
>         return self._execute_and_fetch(req)
>       File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/client.py", line 573, in _execute_and_fetch
>         self._handle_error(rpc_error)
>       File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/client.py", line 619, in _handle_error
>         raise SparkConnectAnalysisException(
>     pyspark.sql.connect.client.SparkConnectAnalysisException: [DATATYPE_MISMATCH.UNEXPECTED_INPUT_TYPE] Cannot resolve "transform_keys(data, lambdafunction(upper(x_11), x_11, y_12))" due to data type mismatch: Parameter 1 requires the "MAP" type, however "data" has the type "STRUCT".
>     Plan: 'Project [transform_keys(data#4493, lambdafunction('upper(lambda 'x_11), lambda 'x_11, lambda 'y_12, false)) AS data_upper#4496]
>     +- Project [0#4488L AS id#4492L, 1#4489 AS data#4493]
>        +- LocalRelation [0#4488L, 1#4489] {code}
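
For reference, the failure above is a pure type issue: `transform_keys` only resolves when its first argument is a MAP column, and here the `data` column is typed as a STRUCT. Below is a minimal, hypothetical sketch of the intended call against an explicitly declared map column; the `spark` session, the schema string, and the sample values are illustrative assumptions, not taken from this ticket.

{code:python}
from pyspark.sql.functions import transform_keys, upper

# Declare `data` explicitly as MAP<STRING, DOUBLE>; the traceback above shows
# what happens when the analyzer sees a STRUCT instead.
df = spark.createDataFrame(
    [(1, {"foo": -2.0, "bar": 2.0})],
    "id INT, data MAP<STRING, DOUBLE>",
)

# Upper-case every key; the values are passed through unchanged.
df.select(
    transform_keys("data", lambda k, _: upper(k)).alias("data_upper")
).show(truncate=False)
# Expected map contents (roughly): {FOO -> -2.0, BAR -> 2.0}
{code}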



[jira] [Commented] (SPARK-41835) Implement `transform_keys` function

2023-01-02 Thread Sandeep Singh (Jira)


[ https://issues.apache.org/jira/browse/SPARK-41835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17653731#comment-17653731 ]

Sandeep Singh commented on SPARK-41835:
---

My bad, the error is about the expected input types.
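
That is, the analyzer rejects the call because parameter 1 of `transform_keys` must be a MAP, while the `data` column here resolves to a STRUCT. A hypothetical sketch of how the same mismatch surfaces in plain PySpark follows; the `spark` session, the schema string, and the sample values are illustrative assumptions, and the Connect client in the traceback below wraps the same condition in `SparkConnectAnalysisException` rather than `AnalysisException`.

{code:python}
from pyspark.sql.functions import transform_keys, upper
from pyspark.sql.utils import AnalysisException

# A struct-typed `data` column, deliberately NOT a map.
struct_df = spark.createDataFrame(
    [(1, {"foo": -2.0, "bar": 2.0})],
    "id INT, data STRUCT<foo: DOUBLE, bar: DOUBLE>",
)

try:
    struct_df.select(
        transform_keys("data", lambda k, _: upper(k)).alias("data_upper")
    ).show(truncate=False)
except AnalysisException as err:
    # Expected to surface DATATYPE_MISMATCH.UNEXPECTED_INPUT_TYPE:
    # parameter 1 requires "MAP", but "data" is a "STRUCT".
    print(err)
{code}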

> Implement `transform_keys` function
> ---
>
> Key: SPARK-41835
> URL: https://issues.apache.org/jira/browse/SPARK-41835
> Project: Spark
>  Issue Type: Sub-task
>  Components: Connect, PySpark
>Affects Versions: 3.4.0
>Reporter: Sandeep Singh
>Priority: Major
> Fix For: 3.4.0
>
>
> {code:java}
> File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/functions.py", line 1611, in pyspark.sql.connect.functions.transform_keys
> Failed example:
>     df.select(transform_keys(
>         "data", lambda k, _: upper(k)).alias("data_upper")
>     ).show(truncate=False)
> Exception raised:
>     Traceback (most recent call last):
>       File "/usr/local/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/doctest.py", line 1350, in __run
>         exec(compile(example.source, filename, "single",
>       File "", line 1, in 
>         df.select(transform_keys(
>       File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/dataframe.py", line 534, in show
>         print(self._show_string(n, truncate, vertical))
>       File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/dataframe.py", line 423, in _show_string
>         ).toPandas()
>       File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/dataframe.py", line 1031, in toPandas
>         return self._session.client.to_pandas(query)
>       File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/client.py", line 413, in to_pandas
>         return self._execute_and_fetch(req)
>       File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/client.py", line 573, in _execute_and_fetch
>         self._handle_error(rpc_error)
>       File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/client.py", line 619, in _handle_error
>         raise SparkConnectAnalysisException(
>     pyspark.sql.connect.client.SparkConnectAnalysisException: [DATATYPE_MISMATCH.UNEXPECTED_INPUT_TYPE] Cannot resolve "transform_keys(data, lambdafunction(upper(x_11), x_11, y_12))" due to data type mismatch: Parameter 1 requires the "MAP" type, however "data" has the type "STRUCT".
>     Plan: 'Project [transform_keys(data#4493, lambdafunction('upper(lambda 'x_11), lambda 'x_11, lambda 'y_12, false)) AS data_upper#4496]
>     +- Project [0#4488L AS id#4492L, 1#4489 AS data#4493]
>        +- LocalRelation [0#4488L, 1#4489] {code}



[jira] [Commented] (SPARK-41835) Implement `transform_keys` function

2023-01-02 Thread Ruifeng Zheng (Jira)


[ https://issues.apache.org/jira/browse/SPARK-41835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17653722#comment-17653722 ]

Ruifeng Zheng commented on SPARK-41835:
---

This function was added.

> Implement `transform_keys` function
> ---
>
> Key: SPARK-41835
> URL: https://issues.apache.org/jira/browse/SPARK-41835
> Project: Spark
>  Issue Type: Sub-task
>  Components: Connect, PySpark
>Affects Versions: 3.4.0
>Reporter: Sandeep Singh
>Priority: Major
> Fix For: 3.4.0
>
>




[jira] [Commented] (SPARK-41835) Implement `transform_keys` function

2023-01-02 Thread Hyukjin Kwon (Jira)


[ https://issues.apache.org/jira/browse/SPARK-41835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17653713#comment-17653713 ]

Hyukjin Kwon commented on SPARK-41835:
--

test output?

> Implement `transform_keys` function
> ---
>
> Key: SPARK-41835
> URL: https://issues.apache.org/jira/browse/SPARK-41835
> Project: Spark
>  Issue Type: Sub-task
>  Components: Connect, PySpark
>Affects Versions: 3.4.0
>Reporter: Sandeep Singh
>Priority: Major
> Fix For: 3.4.0
>
>



