Hi,

I'd like to see the other optional columns in the Aggregated Metrics by
Executor table (per stage) in the web UI. I can easily get the Shuffle Read
Size / Records and Shuffle Write Size / Records columns, since groupBy
triggers a shuffle:

scala> sc.parallelize(0 to 9).map((_,1)).groupBy(_._1).count

I can't figure out, however, what Spark job to execute to get the Input
Size / Records and Output Size / Records columns, plus Shuffle Spill
(Memory) and Shuffle Spill (Disk).

Any ideas? Thanks!

Best regards,
Jacek Laskowski
----
https://medium.com/@jaceklaskowski/
Mastering Apache Spark http://bit.ly/mastering-apache-spark
Follow me at https://twitter.com/jaceklaskowski
