Folks,
I had some PySpark code that used to hang with no useful debug logs. The hang
went away when I changed my code to keep a single SparkContext for the
lifetime of the process instead of stopping it and then creating another one
later. Is this a bug or expected behavior?
Mohit.
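
For reference, a minimal sketch of the two patterns being compared here,
assuming a plain PySpark driver (app names and job bodies are illustrative):

from pyspark import SparkConf, SparkContext

# Pattern A (the one described as hanging): stop the context, then create
# a fresh one later in the same process.
sc = SparkContext(conf=SparkConf().setAppName("phase-1"))
sc.parallelize(range(100)).sum()
sc.stop()

sc = SparkContext(conf=SparkConf().setAppName("phase-2"))  # second context
sc.parallelize(range(100)).count()
sc.stop()

# Pattern B (the workaround described): one SparkContext kept for the
# whole process lifetime.
sc = SparkContext(conf=SparkConf().setAppName("all-phases"))
sc.parallelize(range(100)).sum()
sc.parallelize(range(100)).count()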
Hi,
We observed strange behaviour in Spark 0.9.0 when using sc.stop().
We have a bunch of applications that run some jobs and then call
sc.stop() at the end of main. Most of the time, everything works as
desired, but sometimes the applications get marked as FAILED by the
master and all
You should always call sc.stop() so that it cleans up state and does not fill
up your disk over time. The strange behavior you observe is mostly benign,
as it only occurs after you have supposedly finished all of your work with
the SparkContext. I am not aware of a bug in Spark that causes this
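
A minimal sketch of that advice, assuming a PySpark driver (the job body is
illustrative): wrap the work in try/finally so sc.stop() runs even if a job
throws.

from pyspark import SparkConf, SparkContext

sc = SparkContext(conf=SparkConf().setAppName("my-app"))
try:
    # ... run jobs ...
    total = sc.parallelize(range(1000)).sum()
    print(total)
finally:
    # Always stop the context so executors are released and temp/shuffle
    # files on disk get cleaned up.
    sc.stop()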
No exceptions in any logs. No errors in stdout or stderr.