From: Mina Aslani <aslanim...@gmail.com>
Sent: Wednesday, March 28, 2018 12:22 AM
To: naresh Goud
Cc: user @spark
Subject: Re: java.lang.UnsupportedOperationException: CSV data source does not
support struct/ERROR RetryingBlockFetcher
Hi Naresh,
Thank you for the quick response, appreciate it.
Removing the option("header","true") and trying
df = spark.read.parquet("test.parquet"), I can now read the parquet file.
However, I would still like a way to get the data into a readable csv format;
saving df as csv still throws the exception.
When storing as a parquet file, I don't think the
option("header","true")
is required. Give it a try: remove the header option and then try to read the
file. I haven't tried it myself; just a thought.
Thank you,
Naresh
On Tue, Mar 27, 2018 at 9:47 PM Mina Aslani wrote:
Hi,
I am using pyspark. To transform my sample data and create a model, I use
StringIndexer and OneHotEncoder.
However, when I try to write the data as csv using the command below
df.coalesce(1).write.option("header","true").mode("overwrite").csv("output.csv")
I get an UnsupportedOperationException.