[ https://issues.apache.org/jira/browse/SPARK-34751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17303001#comment-17303001 ]
Nivas Umapathy commented on SPARK-34751:
----------------------------------------

The schema is extracted from the same file, before materializing the data:

{{df = glue_context.read.schema(df.schema).parquet('invalid_columns_double.parquet')}}
{{                       ^^^^^^^^^^^^^^^^^}}

By "schema" I meant this. The file was written out from a pandas dataframe. Here is a link to my Databricks notebook:
https://databricks-prod-cloudfront.cloud.databricks.com/public/4027ec902e239c93eaaa8714f173bcfc/5940072345564347/3863439224328194/623184285031795/latest.html

> Parquet with invalid chars on column name reads double as null when a clean
> schema is applied
> ---------------------------------------------------------------------------------------------
>
>                 Key: SPARK-34751
>                 URL: https://issues.apache.org/jira/browse/SPARK-34751
>             Project: Spark
>          Issue Type: Bug
>          Components: Input/Output
>    Affects Versions: 2.4.3, 3.1.1
>         Environment: Pyspark 2.4.3
>                      AWS Glue Dev Endpoint EMR
>            Reporter: Nivas Umapathy
>            Priority: Major
>         Attachments: invalid_columns_double.parquet
>
> I have a parquet file that has data with invalid column names in it
> (see SPARK-27442: https://issues.apache.org/jira/browse/SPARK-27442). The
> file is attached to this ticket.
> I tried to load this file with
> {{df = glue_context.read.parquet('invalid_columns_double.parquet')}}
> {{df = df.withColumnRenamed('COL 1', 'COL_1')}}
> {{df = df.withColumnRenamed('COL,2', 'COL_2')}}
> {{df = df.withColumnRenamed('COL;3', 'COL_3')}}
> and so on.
> Now if I call
> {{df.show()}}
> it throws this exception, which still points to the old column name:
> {{pyspark.sql.utils.AnalysisException: 'Attribute name "COL 1" contains
> invalid character(s) among " ,;{}()\n\t=". Please use alias to rename it.;'}}
>
> When I read about this in some blogs, the suggestion was to re-read the same
> parquet with the new schema applied. So I did
> {{df = glue_context.read.schema(df.schema).parquet('invalid_columns_double.parquet')}}
> and it works, but all the data in the dataframe is null. The same approach
> works for String datatypes.
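
For anyone trying to reproduce this without the attachment, here is a minimal sketch of the flow described above. It assumes pandas and pyarrow are available on the driver, uses a plain SparkSession instead of the Glue context, and the column names and values are illustrative rather than the exact contents of invalid_columns_double.parquet.

{code:python}
# Minimal reproduction sketch. Assumptions: pandas + pyarrow are installed on
# the driver, a plain SparkSession stands in for glue_context, and the column
# names/values are illustrative, not the exact contents of the attached file.
import pandas as pd
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Write a parquet file whose column names contain characters Spark rejects.
pdf = pd.DataFrame({'COL 1': [1.5, 2.5], 'COL,2': [3.5, 4.5], 'COL;3': ['a', 'b']})
pdf.to_parquet('invalid_columns_double.parquet')

# Read it back and rename the columns, as in the ticket description.
df = spark.read.parquet('invalid_columns_double.parquet')
df = (df.withColumnRenamed('COL 1', 'COL_1')
        .withColumnRenamed('COL,2', 'COL_2')
        .withColumnRenamed('COL;3', 'COL_3'))

# Per the report, this still raises AnalysisException referring to "COL 1".
# df.show()

# Blog-suggested workaround: re-read the file with the extracted schema.
# Per the report, the double columns come back as null while strings are fine.
df2 = spark.read.schema(df.schema).parquet('invalid_columns_double.parquet')
df2.show()
{code}

If rewriting the data outside Spark is an option, renaming the columns on the pandas side before writing the parquet should avoid the invalid-name check altogether.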