Thanks for the help.
Following are the folders I was trying to write to:
saveAsTextFile("file:///home/someuser/test2/testupload/20150708/0/")
saveAsTextFile("file:///home/someuser/test2/testupload/20150708/1/")
saveAsTextFile("file:///home/someuser/test2/testupload/20150708/2/")
saveAsTe
Getting an exception when writing an RDD to the local disk using the following call:
saveAsTextFile("file:home/someuser/dir2/testupload/20150708/")
The directory (/home/someuser/dir2/testupload/) was created before running the
job. The error message is misleading.
org.apache.spark.SparkException: Job aborte
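One thing worth noting: the failing call uses `file:home/...` while the earlier working calls use `file:///home/...`. As a sketch (plain `java.net.URI`, not Spark itself, and assuming Spark/Hadoop resolve paths through standard URI semantics), the two forms parse very differently: without the `//` the URI is opaque and has no path component, which could explain a confusing failure.

```java
import java.net.URI;

public class FileUriCheck {
    public static void main(String[] args) {
        // Hierarchical file URI, as used in the calls that worked.
        URI good = URI.create("file:///home/someuser/test2/testupload/20150708/0/");
        System.out.println(good.isOpaque());  // false: a normal hierarchical URI
        System.out.println(good.getPath());   // "/home/someuser/test2/testupload/20150708/0/"

        // Scheme without "//", as in the failing saveAsTextFile call.
        URI bad = URI.create("file:home/someuser/dir2/testupload/20150708/");
        System.out.println(bad.isOpaque());   // true: opaque URI
        System.out.println(bad.getPath());    // null: no path component at all
    }
}
```

If this is the cause, switching the failing call to the `file:///...` form used in the other three calls would be the first thing to try.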
andra.
>
> Best
> Ayan
>
> On Mon, May 11, 2015 at 5:03 PM, Akhil Das
> wrote:
>
>> Did you try repartitioning? You might end up spending a lot of time on
>> GC though.
>>
>> Thanks
>> Best Regards
>>
>> On Fri, May 8, 2015 at 11:59 P
I am using the Spark Cassandra connector to work with a table with 3
million records, using the .where() API to work with only certain rows in
this table. The where clause filters the data down to 1 row.
CassandraJavaUtil.javaFunctions(sparkContext) .cassandraTable(KEY_SPACE,
MY_TABLE, CassandraJavaUtil