Nivas Umapathy created SPARK-34751:
--------------------------------------

             Summary: Parquet with invalid chars on column name reads double as 
null when a clean schema is applied
                 Key: SPARK-34751
                 URL: https://issues.apache.org/jira/browse/SPARK-34751
             Project: Spark
          Issue Type: Bug
          Components: Input/Output
    Affects Versions: 2.4.3
         Environment: Pyspark 2.4.3

AWS Glue Dev Endpoint EMR
            Reporter: Nivas Umapathy
             Fix For: 2.4.8
         Attachments: invalid_columns_double.parquet

I have a parquet file whose column names contain invalid characters (see SPARK-27442: https://issues.apache.org/jira/browse/SPARK-27442). Here is the file: [Invalid Header Parquet|https://drive.google.com/file/d/101WNWXnPwhjocSMVjkhn5jo85Ri_NydP/view?usp=sharing].
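
For anyone who cannot reach the Drive link, a file with the same shape can be generated locally. This is only a sketch using pandas/pyarrow; the column names match the report, but the values are made up:

{code:python}
import pandas as pd

# Column names deliberately contain characters that Spark rejects in Parquet
# field names (" ,;{}()\n\t="); the Parquet format itself allows them.
pdf = pd.DataFrame({
    'COL 1': [1.0, 2.5],
    'COL,2': [3.3, 4.4],
    'COL;3': [5.5, 6.6],
})
pdf.to_parquet('invalid_columns_double.parquet', engine='pyarrow')
{code}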

I tried to load this file with

{{df = glue_context.read.parquet('invalid_columns_double.parquet')}}

{{df = df.withColumnRenamed('COL 1', 'COL_1')}}

{{df = df.withColumnRenamed('COL,2', 'COL_2')}}

{{df = df.withColumnRenamed('COL;3', 'COL_3')}}

and so on.
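
(For reference, the renames can also be done in one pass instead of one call per column. This is just a sketch over the same DataFrame; the regex mirrors the character list from the error message below:)

{code:python}
import re

# Replace every character Spark flags as invalid in Parquet column names
# (" ,;{}()\n\t=") with an underscore.
for old_name in df.columns:
    new_name = re.sub(r'[ ,;{}()\n\t=]', '_', old_name)
    if new_name != old_name:
        df = df.withColumnRenamed(old_name, new_name)
{code}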

Now if I call

{{df.show()}}

it throws this exception, still pointing to the old column name:

 {{pyspark.sql.utils.AnalysisException: 'Attribute name "COL 1" contains 
invalid character(s) among " ,;{}()\\n\\t=". Please use alias to rename it.;'}}

 

When I read about this in a few blog posts, the suggestion was to re-read the same parquet file with the new schema applied. So I did:

{{df = glue_context.read.schema(df.schema).parquet('invalid_columns_double.parquet')}}

 

and it works, but all the data in the DataFrame is read back as null. The same approach works correctly for string columns; it is the double columns that come back as null.
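
Putting the whole workaround together, this is roughly the reproduction path (a sketch, assuming the same glue_context and the attached file):

{code:python}
# 1. Read the file once, only to rename the columns and capture a clean schema.
df = glue_context.read.parquet('invalid_columns_double.parquet')
df = df.withColumnRenamed('COL 1', 'COL_1') \
       .withColumnRenamed('COL,2', 'COL_2') \
       .withColumnRenamed('COL;3', 'COL_3')

# 2. Re-read the same file with the cleaned schema applied.
clean_df = glue_context.read.schema(df.schema).parquet('invalid_columns_double.parquet')

# 3. show() no longer raises, but (per this report) the double columns
#    come back as null, while string columns survive the same round trip.
clean_df.show()
{code}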
