[ 
https://issues.apache.org/jira/browse/SPARK-43411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rudhra Raveendran resolved SPARK-43411.
---------------------------------------
    Resolution: Won't Fix

It turns out this issue isn't present in later versions of Spark (I just 
tested on the latest version available in Synapse, which is 3.3.1, and the 
union worked). Since the affected version, 3.1.2, is EOL, this will be a 
Won't Fix; closing this out.

> Can't union dataframes with # in subcolumn name
> -----------------------------------------------
>
>                 Key: SPARK-43411
>                 URL: https://issues.apache.org/jira/browse/SPARK-43411
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 3.1.2
>         Environment: * Azure Synapse Notebooks
>  * Apache Spark Pool: [Azure Synapse Runtime for Apache Spark 3.1 (EOLA) - 
> Azure Synapse Analytics | Microsoft 
> Learn|https://learn.microsoft.com/en-us/azure/synapse-analytics/spark/apache-spark-3-runtime]
>  ** Spark 3.1.2
>  ** Ubuntu 18.04
>  ** Python 3.8
>  ** Scala 2.12.10
>  ** Hadoop 3.1.1
>  ** Java 1.8.0_282
>  ** .NET Core 3.1
>  ** .NET for Apache Spark 2.0.0
>  ** Delta Lake 1.0
>            Reporter: Rudhra Raveendran
>            Priority: Major
>
> I was using Spark within an Azure Synapse notebook to load dataframes from 
> various storage accounts and union them into a single dataframe, but the 
> union fails because the SQL parsing internal to it doesn't handle special 
> characters in nested column names properly. Here is a code example of what 
> I was running:
> {code:scala}
> val data1 = spark.read.parquet("abfss://PATH1")
> val data2 = spark.read.parquet("abfss://PATH2")
> val data3 = spark.read.parquet("abfss://PATH3")
> val data4 = spark.read.parquet("abfss://PATH4")
> val data = data1
>   .unionByName(data2, allowMissingColumns = true)
>   .unionByName(data3, allowMissingColumns = true)
>   .unionByName(data4, allowMissingColumns = true)
> data.printSchema()
> {code}
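> The abfss:// paths above are redacted, so here is a hedged, self-contained 
> sketch that builds the same shape of data locally (the minimal schema, a 
> struct column ABC with a single field #XYZ#, is my assumption of the 
> trigger, based on the description below):
> {code:scala}
> import org.apache.spark.sql.SparkSession
> import org.apache.spark.sql.functions.struct
>
> // Stand-in for the parquet reads above: two dataframes sharing a
> // struct column ABC whose field name contains '#'.
> val spark = SparkSession.builder().master("local[*]").getOrCreate()
> import spark.implicits._
>
> val df1 = Seq(("a", 1)).toDF("x", "id")
>   .select($"id", struct($"x".as("#XYZ#")).as("ABC"))
> val df2 = Seq(("b", 2)).toDF("x", "id")
>   .select($"id", struct($"x".as("#XYZ#")).as("ABC"))
>
> // On Spark 3.1.2 this throws the ParseException shown below;
> // on 3.3.1 it reportedly succeeds.
> df1.unionByName(df2, allowMissingColumns = true).printSchema()
> {code}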
> The issue arose due to having a StructType column, e.g. ABC, with a 
> subcolumn whose name contains #, e.g. #XYZ#. The name itself isn't a 
> problem outright, as other Spark functions like select work fine:
> {code:scala}
> import org.apache.spark.sql.functions.col
>
> data1.select("ABC.#XYZ#").where(col("#XYZ#").isNotNull).show(5, truncate = false)
> {code}
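> (As an aside, when such a name does go through SQL parsing, e.g. via 
> selectExpr, the grammar expects backticks around it; note 
> BACKQUOTED_IDENTIFIER in the error below. A small sketch of that quoting, 
> offered as background rather than as a fix for the union path:)
> {code:scala}
> // Backticks quote identifiers with special characters wherever the
> // string is parsed as SQL (selectExpr, expr, spark.sql).
> data1.selectExpr("ABC.`#XYZ#` AS xyz").show(5, truncate = false)
> {code}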
> However, when I ran the earlier snippet with the union statements, I got 
> this error:
> {code}
> org.apache.spark.sql.catalyst.parser.ParseException:
> extraneous input '#' expecting {'ADD', 'AFTER', 'ALL', 'ALTER', 'ANALYZE', 
> 'AND', 'ANTI', 'ANY', 'ARCHIVE', 'ARRAY', 'AS', 'ASC', 'AT', 'AUTHORIZATION', 
> 'BETWEEN', 'BOTH', 'BUCKET', 'BUCKETS', 'BY', 'CACHE', 'CASCADE', 'CASE', 
> 'CAST', 'CHANGE', 'CHECK', 'CLEAR', 'CLUSTER', 'CLUSTERED', 'CODEGEN', 
> 'COLLATE', 'COLLECTION', 'COLUMN', 'COLUMNS', 'COMMENT', 'COMMIT', 'COMPACT', 
> 'COMPACTIONS', 'COMPUTE', 'CONCATENATE', 'CONSTRAINT', 'COST', 'CREATE', 
> 'CROSS', 'CUBE', 'CURRENT', 'CURRENT_DATE', 'CURRENT_TIME', 
> 'CURRENT_TIMESTAMP', 'CURRENT_USER', 'DATA', 'DATABASE', DATABASES, 
> 'DBPROPERTIES', 'DEFINED', 'DELETE', 'DELIMITED', 'DESC', 'DESCRIBE', 'DFS', 
> 'DIRECTORIES', 'DIRECTORY', 'DISTINCT', 'DISTRIBUTE', 'DIV', 'DROP', 'ELSE', 
> 'END', 'ESCAPE', 'ESCAPED', 'EXCEPT', 'EXCHANGE', 'EXISTS', 'EXPLAIN', 
> 'EXPORT', 'EXTENDED', 'EXTERNAL', 'EXTRACT', 'FALSE', 'FETCH', 'FIELDS', 
> 'FILTER', 'FILEFORMAT', 'FIRST', 'FOLLOWING', 'FOR', 'FOREIGN', 'FORMAT', 
> 'FORMATTED', 'FROM', 'FULL', 'FUNCTION', 'FUNCTIONS', 'GLOBAL', 'GRANT', 
> 'GROUP', 'GROUPING', 'HAVING', 'IF', 'IGNORE', 'IMPORT', 'IN', 'INDEX', 
> 'INDEXES', 'INNER', 'INPATH', 'INPUTFORMAT', 'INSERT', 'INTERSECT', 
> 'INTERVAL', 'INTO', 'IS', 'ITEMS', 'JOIN', 'KEYS', 'LAST', 'LATERAL', 'LAZY', 
> 'LEADING', 'LEFT', 'LIKE', 'LIMIT', 'LINES', 'LIST', 'LOAD', 'LOCAL', 
> 'LOCATION', 'LOCK', 'LOCKS', 'LOGICAL', 'MACRO', 'MAP', 'MATCHED', 'MERGE', 
> 'MSCK', 'NAMESPACE', 'NAMESPACES', 'NATURAL', 'NO', NOT, 'NULL', 'NULLS', 
> 'OF', 'ON', 'ONLY', 'OPTION', 'OPTIONS', 'OR', 'ORDER', 'OUT', 'OUTER', 
> 'OUTPUTFORMAT', 'OVER', 'OVERLAPS', 'OVERLAY', 'OVERWRITE', 'PARTITION', 
> 'PARTITIONED', 'PARTITIONS', 'PERCENT', 'PIVOT', 'PLACING', 'POSITION', 
> 'PRECEDING', 'PRIMARY', 'PRINCIPALS', 'PROPERTIES', 'PURGE', 'QUERY', 
> 'RANGE', 'RECORDREADER', 'RECORDWRITER', 'RECOVER', 'REDUCE', 'REFERENCES', 
> 'REFRESH', 'RENAME', 'REPAIR', 'REPLACE', 'RESET', 'RESTRICT', 'REVOKE', 
> 'RIGHT', RLIKE, 'ROLE', 'ROLES', 'ROLLBACK', 'ROLLUP', 'ROW', 'ROWS', 
> 'SCHEMA', 'SELECT', 'SEMI', 'SEPARATED', 'SERDE', 'SERDEPROPERTIES', 
> 'SESSION_USER', 'SET', 'MINUS', 'SETS', 'SHOW', 'SKEWED', 'SOME', 'SORT', 
> 'SORTED', 'START', 'STATISTICS', 'STORED', 'STRATIFY', 'STRUCT', 'SUBSTR', 
> 'SUBSTRING', 'TABLE', 'TABLES', 'TABLESAMPLE', 'TBLPROPERTIES', TEMPORARY, 
> 'TERMINATED', 'THEN', 'TIME', 'TO', 'TOUCH', 'TRAILING', 'TRANSACTION', 
> 'TRANSACTIONS', 'TRANSFORM', 'TRIM', 'TRUE', 'TRUNCATE', 'TYPE', 'UNARCHIVE', 
> 'UNBOUNDED', 'UNCACHE', 'UNION', 'UNIQUE', 'UNKNOWN', 'UNLOCK', 'UNSET', 
> 'UPDATE', 'USE', 'USER', 'USING', 'VALUES', 'VIEW', 'VIEWS', 'WHEN', 'WHERE', 
> 'WINDOW', 'WITH', 'ZONE', IDENTIFIER, BACKQUOTED_IDENTIFIER}(line 1, pos 0)
>  
> == SQL ==
> #XYZ#
> ^^^ {code}
> This seems to indicate that the issue is in the implementation of 
> unionByName/union; however, I'm not familiar enough with the codebase to 
> pinpoint where. (I was able to trace that unionByName calls Union, which I 
> think is defined here: 
> [spark/basicLogicalOperators.scala at master · apache/spark · 
> GitHub|https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicLogicalOperators.scala])
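> In the meantime, a possible workaround on affected versions is to rename 
> the offending nested field before the union. This is a hedged sketch: the 
> helper name sanitize is mine, and it assumes ABC contains only the #XYZ# 
> field (list every field when rebuilding the struct otherwise):
> {code:scala}
> import org.apache.spark.sql.DataFrame
> import org.apache.spark.sql.functions.{col, struct}
>
> // Rebuild ABC with a parser-safe field name (#XYZ# -> XYZ) before union.
> def sanitize(df: DataFrame): DataFrame =
>   df.withColumn("ABC", struct(col("ABC.`#XYZ#`").as("XYZ")))
>
> val data = sanitize(data1)
>   .unionByName(sanitize(data2), allowMissingColumns = true)
>   .unionByName(sanitize(data3), allowMissingColumns = true)
>   .unionByName(sanitize(data4), allowMissingColumns = true)
> {code}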



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
