There should be another exception trace (basically, the actual cause) after
this one; could you post it?
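For context on what "the actual cause" means here: a JVM stack trace is a chain, and each "Caused by:" section prints one link of it; the innermost one is usually the real error. A minimal plain-Java sketch of walking that chain (no Spark required; the class name and messages below are made up for illustration):

```java
// Minimal sketch (plain Java, no Spark needed): chained exceptions nest
// via getCause(), which is what each "Caused by:" section in a trace prints.
public class RootCause {
    // Walk the cause chain until the innermost (root) exception.
    static Throwable rootCause(Throwable t) {
        Throwable cur = t;
        while (cur.getCause() != null && cur.getCause() != cur) {
            cur = cur.getCause();
        }
        return cur;
    }

    public static void main(String[] args) {
        // Mirrors the shape of the trace below: a "Task failed while
        // writing rows" wrapper around a NullPointerException
        // (the messages are illustrative, not from the real trace).
        Throwable wrapper = new RuntimeException(
                "Task failed while writing rows",
                new NullPointerException("row field was null"));
        System.out.println(rootCause(wrapper).getClass().getSimpleName());
        // prints: NullPointerException
    }
}
```

If the root cause in a posted trace is itself truncated, the frames under the last "Caused by:" are the ones worth posting in full.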

On Wed, Feb 28, 2018 at 1:39 PM, unk1102 <umesh.ka...@gmail.com> wrote:

> Hi, I am getting the following exception when I try to write a DataFrame
> using the code below. Please guide. I am using Spark 2.2.0.
>
> df.write.format("parquet").mode(SaveMode.Append);
>
> org.apache.spark.SparkException: Task failed while writing rows
>   at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:270)
>   at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$apply$mcV$sp$1.apply(FileFormatWriter.scala:189)
>   at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$apply$mcV$sp$1.apply(FileFormatWriter.scala:188)
>   at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
>   at org.apache.spark.scheduler.Task.run(Task.scala:108)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
>   at org.apache.spark.sql.SparkSession$$anonfun$3.apply(SparkSession.scala:468)
>   at org.apache.spark.sql.SparkSession$$anonfun$3.apply(SparkSession.scala:468)
>   at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
>   at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
>   at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:324)
>   at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:256)
>   at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:254)
>   at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1371)
>   at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:259)
