Hi Josh
Is there documentation available for status API? I would like to use it.
Thanks,
Aniket
On Sun Dec 28 2014 at 02:37:32 Josh Rosen rosenvi...@gmail.com wrote:
The console progress bars are implemented on top of a new stable status
API that was added in Spark 1.2. It's possible to query job progress
using this interface. It's accessed through the `statusTracker` field on SparkContext.
*Scala*:
https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.SparkStatusTracker
*Java*:
https://spark.apache.org/docs/latest/api/java/org/apache/spark/api/java/JavaSparkStatusTracker.html
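As a minimal sketch of using it (assuming an already-running SparkContext `sc`; method names are from the 1.2-era SparkStatusTracker API):

```scala
// Poll the status tracker for active jobs and per-stage task progress.
val tracker = sc.statusTracker
for (jobId <- tracker.getActiveJobIds()) {
  for (jobInfo <- tracker.getJobInfo(jobId)) {   // returns Option[SparkJobInfo]
    println(s"Job $jobId: ${jobInfo.status}")
    for (stageId <- jobInfo.stageIds();
         stageInfo <- tracker.getStageInfo(stageId)) {
      println(s"  Stage $stageId (${stageInfo.name}): " +
        s"${stageInfo.numCompletedTasks}/${stageInfo.numTasks} tasks completed, " +
        s"${stageInfo.numFailedTasks} failed")
    }
  }
}
```

Note that `getJobInfo` / `getStageInfo` return `Option`s because a job or stage may have been garbage-collected by the driver by the time you ask about it.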
Thanks Josh. Looks promising. I will give it a try.
Thanks,
Aniket
On Mon, Dec 29, 2014, 9:55 PM Josh Rosen rosenvi...@gmail.com wrote:
It's accessed through the `statusTracker` field on SparkContext.
On Mon, Dec 29, 2014 at 12:00 AM, Eric Friedman eric.d.fried...@gmail.com
wrote:
I'm not seeing RDDs or SRDDs cached in the Spark UI. That page remains
empty despite my calling cache().
Just a small note on this, and perhaps you already know: calling cache() is
not enough to cache something; the RDD is only materialized (and shown in the
UI) once an action computes it for the first time.
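To illustrate (a sketch, assuming a SparkContext `sc`; the file path is just a placeholder):

```scala
// cache() only marks the RDD for storage; nothing is computed or cached yet.
val rdd = sc.textFile("hdfs:///some/input.txt").cache()
// The Storage tab of the UI is still empty at this point.

rdd.count()  // the first action materializes the partitions and caches them
// Now the RDD appears under the Storage tab.
```

Subsequent actions on `rdd` will read the cached partitions instead of re-reading the input.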
On Dec 29, 2014, at 1:52 PM, Matei Zaharia matei.zaha...@gmail.com wrote:
Hey Eric, sounds like you are running into several issues, but thanks for
reporting them. Just to comment on a few of these:
I'm not seeing RDDs or SRDDs cached in the Spark UI. That page remains empty
despite my calling cache().
This is expected until you compute the RDDs the first time.
Hey Eric,
I'm just curious - which specific features in 1.2 do you find most
help with usability? This is a theme we're focusing on for 1.3 as
well, so it's helpful to hear what makes a difference.
- Patrick
On Sun, Dec 28, 2014 at 1:36 AM, Eric Friedman
eric.d.fried...@gmail.com wrote:
Hi Patrick,
I don't mean to be glib, but the fact that it works at all on my cluster (600
nodes) and my data is a novel experience. This is the first release that I haven't
had to struggle with and then give up on entirely. I can, for example, finally
use HiveContext from PySpark on CDH, at least
The console progress bars are implemented on top of a new stable status
API that was added in Spark 1.2. It's possible to query job progress
using this interface (in older versions of Spark, you could implement a
custom SparkListener and maintain the counts of completed / running /
failed tasks yourself).
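The older listener-based approach could look roughly like this (a sketch, assuming a SparkContext `sc`; the class and counter names are my own):

```scala
import java.util.concurrent.atomic.AtomicInteger
import org.apache.spark.scheduler._

// Hand-rolled task counting: the pre-1.2 approach, before SparkStatusTracker.
class TaskCountListener extends SparkListener {
  val started   = new AtomicInteger(0)
  val succeeded = new AtomicInteger(0)
  val failed    = new AtomicInteger(0)

  override def onTaskStart(event: SparkListenerTaskStart): Unit =
    started.incrementAndGet()

  override def onTaskEnd(event: SparkListenerTaskEnd): Unit =
    if (event.taskInfo.failed) failed.incrementAndGet()
    else succeeded.incrementAndGet()
}

val listener = new TaskCountListener
sc.addSparkListener(listener)
// listener.succeeded.get / listener.started.get, etc., can then be read
// from another thread while jobs run.
```

The status tracker makes this unnecessary for the common case, since the driver already maintains these counts for you.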
Spark 1.2.0 is SO much more usable than previous releases -- many thanks to
the team for this release.
A question about the progress of actions. I can see how things are progressing
using the Spark UI, and I can also see the nice ASCII-art animation on the
Spark driver console.
Has anyone come up with a way to track this progress programmatically?
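One option (as of Spark 1.2) is to poll the status API from the driver while an action runs in another thread. A sketch, assuming a SparkContext `sc` (the job here is a dummy count, and in practice you would wait briefly for the job to register before polling):

```scala
import java.util.concurrent.Executors

// Run an action asynchronously so the main thread is free to poll progress.
val pool = Executors.newSingleThreadExecutor()
pool.submit(new Runnable {
  override def run(): Unit = sc.parallelize(1 to 1000000, 100).count()
})

Thread.sleep(500)  // crude: give the job a moment to start
var done = false
while (!done) {
  val activeStages = sc.statusTracker.getActiveStageIds()
  done = activeStages.isEmpty
  for (id <- activeStages; info <- sc.statusTracker.getStageInfo(id)) {
    println(s"stage $id: ${info.numCompletedTasks}/${info.numTasks} tasks")
  }
  Thread.sleep(1000)
}
pool.shutdown()
```

This is essentially what the built-in console progress bar does for you under the hood.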