Re: Spark and HDFS (Worker and Data Nodes Combination)

2015-06-22 Thread ayan guha
I have a basic question: how does Spark assign partitions to an executor? Does it respect data locality? Does this behaviour depend on the cluster manager, i.e. YARN vs. standalone? On 22 Jun 2015 22:45, Akhil Das ak...@sigmoidanalytics.com wrote: Option 1 should be fine, Option 2 would bound a lot on network as
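For anyone following up on the locality part of this question, a minimal sketch (the HDFS path, app name and wait value are made up for illustration) of the knobs and APIs that expose locality information: RDD.preferredLocations shows the hosts the scheduler will try first for each partition, and spark.locality.wait controls how long it waits for a local slot before falling back to a less local level.

    import org.apache.spark.{SparkConf, SparkContext}

    // Sketch only: inspect the preferred (data-local) hosts of each partition
    // and raise the scheduler's locality wait. Path and app name are hypothetical.
    object LocalityCheck {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("locality-check")
          // How long the scheduler waits for a node-local slot before
          // falling back to rack-local and then ANY (default 3s).
          .set("spark.locality.wait", "5s")
        val sc = new SparkContext(conf)

        // Hypothetical HDFS path; the HDFS block locations become the
        // preferred locations of the corresponding partitions.
        val rdd = sc.textFile("hdfs:///data/events.log")
        rdd.partitions.foreach { p =>
          println(s"partition ${p.index}: ${rdd.preferredLocations(p).mkString(", ")}")
        }
        sc.stop()
      }
    }

Task-level locality scheduling works this way under both YARN and standalone; where the data lives relative to the executors you were granted is what differs between deployments.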

Re: Registering custom metrics

2015-06-22 Thread Silvio Fiorito
Hi Gerard, Yes, you have to implement your own custom Metrics Source using the Coda Hale library. See here for some examples: https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/metrics/source/JvmSource.scala
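For reference, a rough sketch of what such a custom Source can look like, modelled on JvmSource (the class and metric names are made up). Note that in Spark 1.x the Source trait is private[spark], so this has to be compiled inside an org.apache.spark.* package as a workaround:

    package org.apache.spark.metrics.source

    import com.codahale.metrics.{Counter, MetricRegistry}

    // Hypothetical custom Source: one Coda Hale counter that application
    // code can increment. Lives in an org.apache.spark package because the
    // Source trait is private[spark] in Spark 1.x.
    class MyAppSource extends Source {
      override val sourceName: String = "myapp"
      override val metricRegistry: MetricRegistry = new MetricRegistry()

      // Exposed through whatever sinks are configured in metrics.properties.
      val recordsProcessed: Counter =
        metricRegistry.counter(MetricRegistry.name("myapp", "recordsProcessed"))
    }

From code that sits inside the same package workaround you can then register it with something like SparkEnv.get.metricsSystem.registerSource(new MyAppSource); treat that as an assumption about Spark 1.x internals rather than a stable public API.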

Re: Registering custom metrics

2015-06-22 Thread Silvio Fiorito
Sorry, I replied to Gerard's question instead of yours. See here: Yes, you have to implement your own custom Metrics Source using the Coda Hale library. See here for some examples: https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/metrics/source/JvmSource.scala
