> something else in your job causing a problem. Have you tried other
> operations on the data, like count(), or saving synthetic datasets (e.g.
> sc.parallelize(1 to 100*1000*1000, 20).saveAsTextFile(...))?
>
> Matei
>
> On August 25, 2014 at 12:09:25 PM, amnonkhen ([hidden email]) wrote:
>
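For reference, a minimal sketch of the synthetic-write test suggested above,
assuming an existing SparkContext named sc and a writable S3 path (the bucket
and path below are placeholders):

    // Build a synthetic dataset with no S3 input involved.
    val synthetic = sc.parallelize(1 to 100 * 1000 * 1000, 20)

    // If count() also hangs, the problem is likely upstream of the S3 write.
    println(s"rows: ${synthetic.count()}")

    // If count() succeeds but this write hangs, suspect the S3 output path/driver.
    synthetic.saveAsTextFile("s3n://my-bucket/tmp/spark-write-test")
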
Hi jerryye,
Maybe if you voted up my question on Stack Overflow it would get some
traction and we would get nearer to a solution.
Thanks,
Amnon
I am loading a CSV text file from S3 into Spark, filtering and mapping the
records, and writing the result back to S3.
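
For context, a job of that shape might look roughly like the following sketch;
the paths, filter predicate, and mapping are illustrative placeholders, not the
actual code from this report:

    // Read CSV lines from S3, filter and map the records, write back to S3.
    val lines = sc.textFile("s3n://input-bucket/data.csv")

    val result = lines
      .filter(line => line.nonEmpty)      // e.g. drop blank records
      .map(line => line.split(",")(0))    // e.g. keep only the first column

    result.saveAsTextFile("s3n://output-bucket/result")
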
I have tried several input sizes: 100k rows, 1M rows, and 3.5M rows. The first
two finish successfully, while the last (3.5M rows) hangs in some weird
state in which the job stages monit