[jira] [Assigned] (SPARK-41811) Implement SparkSession.sql's string formatter
[ https://issues.apache.org/jira/browse/SPARK-41811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon reassigned SPARK-41811:
------------------------------------

    Assignee: Ruifeng Zheng

> Implement SparkSession.sql's string formatter
> ---------------------------------------------
>
>                 Key: SPARK-41811
>                 URL: https://issues.apache.org/jira/browse/SPARK-41811
>             Project: Spark
>          Issue Type: New Feature
>          Components: Connect
>    Affects Versions: 4.0.0
>            Reporter: Hyukjin Kwon
>            Assignee: Ruifeng Zheng
>            Priority: Major
>              Labels: pull-request-available
>
> {code}
> **
> File "/.../spark/python/pyspark/sql/connect/session.py", line 345, in pyspark.sql.connect.session.SparkSession.sql
> Failed example:
>     spark.sql(
>         "SELECT * FROM range(10) WHERE id > {bound1} AND id < {bound2}",
>         bound1=7, bound2=9
>     ).show()
> Exception raised:
>     Traceback (most recent call last):
>       File "/.../miniconda3/envs/python3.9/lib/python3.9/doctest.py", line 1336, in __run
>         exec(compile(example.source, filename, "single",
>       File "", line 1, in <module>
>         spark.sql(
>     TypeError: sql() got an unexpected keyword argument 'bound1'
> **
> File "/.../spark/python/pyspark/sql/connect/session.py", line 355, in pyspark.sql.connect.session.SparkSession.sql
> Failed example:
>     spark.sql(
>         "SELECT {col} FROM {mydf} WHERE id IN {x}",
>         col=mydf.id, mydf=mydf, x=tuple(range(4))).show()
> Exception raised:
>     Traceback (most recent call last):
>       File "/.../miniconda3/envs/python3.9/lib/python3.9/doctest.py", line 1336, in __run
>         exec(compile(example.source, filename, "single",
>       File "", line 1, in <module>
>         spark.sql(
>     TypeError: sql() got an unexpected keyword argument 'col'
> {code}

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
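[Editor's note] The failing doctests above show the API this ticket asks for: `sql()` accepting keyword arguments and substituting them into `{name}` placeholders in the query string. The sketch below is NOT Spark's implementation; it is a hypothetical, minimal illustration of that substitution using Python's standard `string.Formatter`, with `format_sql` and `to_sql_literal` being invented names:

```python
# Hypothetical sketch of keyword substitution into a SQL string, the
# behavior the failing doctests expect from SparkSession.sql. This is
# not the actual PySpark code; format_sql/to_sql_literal are made up.
from string import Formatter


def to_sql_literal(value):
    """Render a Python value as a SQL literal fragment."""
    if isinstance(value, str):
        # Escape embedded single quotes by doubling them.
        return "'" + value.replace("'", "''") + "'"
    if isinstance(value, tuple):
        # Tuples become parenthesized lists, e.g. for IN (...) clauses.
        return "(" + ", ".join(to_sql_literal(v) for v in value) + ")"
    return str(value)


def format_sql(query: str, **kwargs) -> str:
    """Replace {name} placeholders in query with SQL literals."""
    parts = []
    for literal_text, field_name, _spec, _conv in Formatter().parse(query):
        parts.append(literal_text)
        if field_name is not None:
            parts.append(to_sql_literal(kwargs[field_name]))
    return "".join(parts)


print(format_sql(
    "SELECT * FROM range(10) WHERE id > {bound1} AND id < {bound2}",
    bound1=7, bound2=9))
# SELECT * FROM range(10) WHERE id > 7 AND id < 9
```

Note that the second doctest also passes a DataFrame (`mydf=mydf`), which the real feature resolves to a temporary view name; that part is beyond this sketch.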
[jira] [Assigned] (SPARK-41811) Implement SparkSession.sql's string formatter
[ https://issues.apache.org/jira/browse/SPARK-41811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ruifeng Zheng reassigned SPARK-41811:
-------------------------------------

    Assignee: Ruifeng Zheng

> Implement SparkSession.sql's string formatter
> ---------------------------------------------
>
>                 Key: SPARK-41811
>                 URL: https://issues.apache.org/jira/browse/SPARK-41811
>             Project: Spark
>          Issue Type: Sub-task
>          Components: Connect
>    Affects Versions: 3.4.0
>            Reporter: Hyukjin Kwon
>            Assignee: Ruifeng Zheng
>            Priority: Major
>             Fix For: 3.5.0
>
> {code}
> **
> File "/.../spark/python/pyspark/sql/connect/session.py", line 345, in pyspark.sql.connect.session.SparkSession.sql
> Failed example:
>     spark.sql(
>         "SELECT * FROM range(10) WHERE id > {bound1} AND id < {bound2}",
>         bound1=7, bound2=9
>     ).show()
> Exception raised:
>     Traceback (most recent call last):
>       File "/.../miniconda3/envs/python3.9/lib/python3.9/doctest.py", line 1336, in __run
>         exec(compile(example.source, filename, "single",
>       File "", line 1, in <module>
>         spark.sql(
>     TypeError: sql() got an unexpected keyword argument 'bound1'
> **
> File "/.../spark/python/pyspark/sql/connect/session.py", line 355, in pyspark.sql.connect.session.SparkSession.sql
> Failed example:
>     spark.sql(
>         "SELECT {col} FROM {mydf} WHERE id IN {x}",
>         col=mydf.id, mydf=mydf, x=tuple(range(4))).show()
> Exception raised:
>     Traceback (most recent call last):
>       File "/.../miniconda3/envs/python3.9/lib/python3.9/doctest.py", line 1336, in __run
>         exec(compile(example.source, filename, "single",
>       File "", line 1, in <module>
>         spark.sql(
>     TypeError: sql() got an unexpected keyword argument 'col'
> {code}

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org