[jira] [Commented] (SPARK-34751) Parquet with invalid chars on column name reads double as null when a clean schema is applied
[ https://issues.apache.org/jira/browse/SPARK-34751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17332546#comment-17332546 ] Nivas Umapathy commented on SPARK-34751:

[~el-aasi] Unfortunately there is no workaround.

> Parquet with invalid chars on column name reads double as null when a clean
> schema is applied
> ------------------------------------------------------------------------------
>
>                 Key: SPARK-34751
>                 URL: https://issues.apache.org/jira/browse/SPARK-34751
>             Project: Spark
>          Issue Type: Bug
>          Components: Input/Output
>    Affects Versions: 2.4.3, 3.1.1
>         Environment: PySpark 2.4.3
>                      AWS Glue Dev Endpoint EMR
>            Reporter: Nivas Umapathy
>            Priority: Major
>         Attachments: invalid_columns_double.parquet
>
> I have a Parquet file whose data has invalid column names
> (reference: [SPARK-27442](https://issues.apache.org/jira/browse/SPARK-27442)). The file is
> attached to this ticket.
> I tried to load the file with:
> {{df = glue_context.read.parquet('invalid_columns_double.parquet')}}
> {{df = df.withColumnRenamed('COL 1', 'COL_1')}}
> {{df = df.withColumnRenamed('COL,2', 'COL_2')}}
> {{df = df.withColumnRenamed('COL;3', 'COL_3')}}
> and so on. Now if I call
> {{df.show()}}
> it throws this exception, which still points to the old column name:
> {{pyspark.sql.utils.AnalysisException: 'Attribute name "COL 1" contains invalid character(s) among " ,;{}()\n\t=". Please use alias to rename it.;'}}
>
> Some blog posts suggested re-reading the same Parquet file with the new
> schema applied, so I did:
> {{df = glue_context.read.schema(df.schema).parquet('invalid_columns_double.parquet')}}
> This works, but all the data in the DataFrame is null. The same approach
> works for string columns.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
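For reference, the character set quoted in the exception message can be used to detect and sanitize offending names before a file ever reaches Spark. A minimal sketch in plain Python; `sanitize` is an illustrative helper, not a Spark or Glue API:

```python
import re

# Character class taken from the AnalysisException text: " ,;{}()\n\t="
INVALID_CHARS = re.compile(r'[ ,;{}()\n\t=]')

def sanitize(name: str) -> str:
    """Replace every character Spark rejects in a Parquet attribute name
    with an underscore."""
    return INVALID_CHARS.sub('_', name)

print(sanitize('COL 1'))  # -> COL_1
print(sanitize('COL,2'))  # -> COL_2
print(sanitize('COL;3'))  # -> COL_3
```

This produces exactly the renames attempted with `withColumnRenamed` above; the difference is that it can be applied to the file's column names before Spark validates them.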
[jira] [Commented] (SPARK-34751) Parquet with invalid chars on column name reads double as null when a clean schema is applied
[ https://issues.apache.org/jira/browse/SPARK-34751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17331898#comment-17331898 ] Gais El-AAsi commented on SPARK-34751:

[~toocoolblue2000] Do you happen to have a temporary workaround? The error also persists on Azure Synapse, which uses Spark v. 2.4.4.2.6.99.201-34744923.
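One mitigation that is sometimes possible is to rewrite the file outside Spark: load it with pandas (which does not enforce Spark's attribute-name rules), rename the columns there, and write a clean copy for Spark to read. This is a sketch, not a confirmed fix for this ticket, and it assumes pandas with a Parquet engine such as pyarrow is available; an in-memory frame stands in for the attached file:

```python
import re
import pandas as pd

def clean(name: str) -> str:
    # Replace the characters Spark rejects in Parquet attribute names.
    return re.sub(r'[ ,;{}()\n\t=]', '_', name)

# For the attached file one would start from
#   df = pd.read_parquet('invalid_columns_double.parquet')
df = pd.DataFrame({'COL 1': [1.5], 'COL,2': [2.5], 'COL;3': [3.5]})
df = df.rename(columns={c: clean(c) for c in df.columns})
# df.to_parquet('clean_columns_double.parquet')  # then read this copy with Spark
print(list(df.columns))  # -> ['COL_1', 'COL_2', 'COL_3']
```

The cost is an extra read/write pass per file, but the rewritten copy avoids the name check entirely, so the double columns are read through the normal path.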
[jira] [Commented] (SPARK-34751) Parquet with invalid chars on column name reads double as null when a clean schema is applied
[ https://issues.apache.org/jira/browse/SPARK-34751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17303001#comment-17303001 ] Nivas Umapathy commented on SPARK-34751:

The schema is extracted from the same file, before materializing the data:

{{df = glue_context.read.schema(df.schema).parquet('invalid_columns_double.parquet')}}

By "schema" I meant this. The file was written out from a pandas DataFrame. Here is a link to my Databricks notebook: https://databricks-prod-cloudfront.cloud.databricks.com/public/4027ec902e239c93eaaa8714f173bcfc/5940072345564347/3863439224328194/623184285031795/latest.html
[jira] [Commented] (SPARK-34751) Parquet with invalid chars on column name reads double as null when a clean schema is applied
[ https://issues.apache.org/jira/browse/SPARK-34751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17302988#comment-17302988 ] Takeshi Yamamuro commented on SPARK-34751:

Could you describe the setup in more detail so we can reproduce the issue? For example: what is the schema of the Parquet file, and how was the file written?
[jira] [Commented] (SPARK-34751) Parquet with invalid chars on column name reads double as null when a clean schema is applied
[ https://issues.apache.org/jira/browse/SPARK-34751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17302672#comment-17302672 ] Nivas Umapathy commented on SPARK-34751:

I ran it on 3.1.1 and it still has the same problem: all column values are null.
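As a point of comparison only (this is a toy model in plain Python, not Spark's actual read path, and it does not explain why string columns survive while doubles do not), an all-null column is the classic symptom of a reader that resolves requested schema fields by exact name and silently fills unmatched names with nulls:

```python
def read_with_schema(file_columns, requested_names):
    # Resolve each requested field by exact name; names missing from the
    # file come back as all-null columns instead of raising an error.
    n_rows = len(next(iter(file_columns.values())))
    return {name: file_columns.get(name, [None] * n_rows)
            for name in requested_names}

file_columns = {'COL 1': [1.5, 2.5]}              # name as stored in the file
print(read_with_schema(file_columns, ['COL_1']))  # -> {'COL_1': [None, None]}
print(read_with_schema(file_columns, ['COL 1']))  # -> {'COL 1': [1.5, 2.5]}
```

If something similar is happening inside the Parquet vectorized reader, any mismatch between the applied schema's field names and the file's field names would surface exactly as the reported behavior.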
[jira] [Commented] (SPARK-34751) Parquet with invalid chars on column name reads double as null when a clean schema is applied
[ https://issues.apache.org/jira/browse/SPARK-34751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17302573#comment-17302573 ] Takeshi Yamamuro commented on SPARK-34751:

Could you try a newer Spark release, e.g., 2.4.7, 3.0.2, or 3.1.1?

> Fix For: 2.4.8