Hi all,
Please share if anyone has faced the same problem. There are many similar
issues reported on the web, but I did not find any solution or an
explanation of why this happens. Any help would be appreciated.
Regards,
Prateek
On Mon, Apr 29, 2019 at 3:18 PM Prateek Rajput
wrote:
I checked and removed the zero-sized files, but the error still occurs;
sometimes it happens even when there is no zero-size file.
I also checked whether the data is corrupted by opening the files directly
and inspecting them. I traced through the whole dataset but did not find any
issue. With Hadoop MapReduce there is no such problem.
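The checks described above (removing zero-sized files, opening files directly to inspect them) can be sketched in plain Python for a local copy of the data. This is only an illustration: `suspicious_files` is a hypothetical helper, the local-directory scan stands in for what would really be a listing through the Hadoop FileSystem API on HDFS, and the header check relies on SequenceFiles starting with the `SEQ` magic bytes.

```python
import os
import tempfile

# Hadoop SequenceFiles begin with the bytes "SEQ" followed by a version byte.
SEQUENCE_FILE_MAGIC = b"SEQ"

def suspicious_files(directory):
    """Return paths that are zero bytes or lack the SequenceFile header."""
    bad = []
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        if not os.path.isfile(path):
            continue
        if os.path.getsize(path) == 0:
            bad.append(path)          # empty part file
            continue
        with open(path, "rb") as f:
            if f.read(3) != SEQUENCE_FILE_MAGIC:
                bad.append(path)      # not a SequenceFile at all
    return bad

# Small demo: one plausible file, one empty file, one non-SequenceFile.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "part-00000"), "wb") as f:
        f.write(SEQUENCE_FILE_MAGIC + b"\x06rest-of-header")
    open(os.path.join(d, "part-00001"), "wb").close()   # zero bytes
    with open(os.path.join(d, "part-00002"), "wb") as f:
        f.write(b"not a sequence file")
    print([os.path.basename(p) for p in suspicious_files(d)])
```

Running the demo flags `part-00001` and `part-00002` while leaving the file with a valid header alone.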
This can happen if the file size is 0.
On Mon, Apr 29, 2019 at 2:28 PM Prateek Rajput
wrote:
Hi guys,
I am getting this strange error again and again while reading from a
sequence file in Spark.
User class threw exception: org.apache.spark.SparkException: Job aborted.
at org.apache.spark.internal.io.SparkHadoopWriter$.write(SparkHadoopWriter.scala:100)
at org.apache.spark.rdd.PairRDDF