Yeah, without actually seeing what's happening on that line, it'd be
difficult to say for sure.
You can check what patches HortonWorks applied, and/or ask them.
And yeah, a seg fault is totally possible with any size of data. But you
should've seen it in the `stdout` (assuming that the regular
Hi Vadim, thanks. I use the HortonWorks package. I don't think there are any
seg faults; the dataframe I am trying to write is very small. Can it
still cause a seg fault?
--
Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
Who's your Spark provider? EMR, Azure, Databricks, etc.? Maybe contact
them, since they've probably applied some patches.
Also, have you checked `stdout` for segfaults? I vaguely remember
getting `Task failed while writing rows at` and seeing some segfaults that
caused it.
On Wed, Feb 28,
Hi, thanks Vadim, you are right. I already looked at that line: line 468 has
no code, it is just a comment. And yes, I am sure I am using spark-* jars
built for Spark 2.2.0 and Scala 2.11. Unfortunately I am still stuck with
these errors and not sure how to solve them.
I'm sorry, didn't see `Caused by:
java.lang.NullPointerException at
org.apache.spark.sql.SparkSession$$anonfun$3.apply(SparkSession.scala:468)`
Are you sure that you use 2.2.0?
I don't see any code on that line
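One quick way to settle the version question is to print, at runtime, both the Spark version and the jar that `SparkSession` was actually loaded from; a vendor patch can shift source line numbers, which would explain a stack frame pointing at a comment. A minimal sketch, assuming it runs in `spark-shell` or any JVM with the Spark jars on the classpath:

```scala
// Prints the version string compiled into the Spark jars on the classpath
// (a vendor build often carries a suffix after the upstream version).
println(org.apache.spark.SPARK_VERSION)

// Prints the location of the jar that SparkSession was loaded from,
// which makes a vendor-patched artifact visible at a glance.
println(classOf[org.apache.spark.sql.SparkSession]
  .getProtectionDomain.getCodeSource.getLocation)
```

If the version string carries a vendor suffix, the stack-trace line numbers should be checked against that vendor's source tree rather than the upstream 2.2.0 tag.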
Hi, thanks for the reply. I only see the NPE and `Task failed while writing
rows` all over the place; I don't see any other errors except
`SparkException: Job aborted`, followed by the two exceptions I pasted
earlier.
Hi, I am getting the following exception when I try to write a DataFrame
using the following code. Please guide. I am using Spark 2.2.0.

df.write.format("parquet").mode(SaveMode.Append);

org.apache.spark.SparkException: Task failed while writing rows at
org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$data
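One thing worth noting about the pasted statement: `mode(SaveMode.Append)` only configures the `DataFrameWriter` and triggers nothing by itself; the job runs only when a terminal call such as `save(path)` is made, so the failing write presumably has more to it than what was quoted. A minimal complete sketch (the output path and the example DataFrame are placeholders, not from the original post):

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession.builder()
  .appName("parquet-append-example")
  .getOrCreate()

// Placeholder DataFrame; substitute the real one.
val df = spark.range(0, 10).toDF("id")

// The write job only executes when save() (or saveAsTable()) is invoked;
// "/tmp/out/parquet" is a hypothetical output path.
df.write
  .format("parquet")
  .mode(SaveMode.Append)
  .save("/tmp/out/parquet")
```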
format("org.apache.spark.sql.hive.orc.DefaultSource").options(options).saveAsTable("personhivetable")

Getting below error:

org.apache.spark.SparkException: Task failed while writing rows.
at
org.apache.spark.sql.sources.InsertIntoHadoopFsRelation.org
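The ORC write in that last snippet follows the same `DataFrameWriter` pattern. A sketch of what a self-contained version might look like — the table name is kept from the post, but the data, the save mode, and the session setup are assumptions. On Spark 2.x the short name `orc` resolves to the built-in Hive-based ORC source (the fully qualified `org.apache.spark.sql.hive.orc.DefaultSource` above), and `saveAsTable` against a Hive metastore needs Hive support enabled:

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

// saveAsTable into a Hive-managed table requires enableHiveSupport().
val spark = SparkSession.builder()
  .appName("orc-save-as-table-example")
  .enableHiveSupport()
  .getOrCreate()
import spark.implicits._

// Placeholder data standing in for the real "person" DataFrame.
val people = Seq(("alice", 30), ("bob", 25)).toDF("name", "age")

people.write
  .format("orc")            // same source as the fully qualified name above
  .mode(SaveMode.Overwrite) // assumption: the post does not show a mode
  .saveAsTable("personhivetable")
```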