Spark workers keep getting disconnected (keep dying) from the cluster.

2014-05-16 Thread Ravi Hemnani
Hey, I am facing a weird issue. My Spark workers keep dying every now and then, and in the master logs I keep seeing messages like the following:

    14/05/14 10:09:24 WARN Master: Removing worker-20140514080546-x.x.x.x-50737 because we got no heartbeat in 60 seconds
    14/05/14 14:18:41 WARN Master:
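The 60-second window in that log line is the standalone master's worker timeout. A minimal sketch of raising it, assuming a standalone deployment where the default spark.worker.timeout of 60 seconds applies; the value 120 is an arbitrary example:

    # conf/spark-env.sh on the master node (Spark standalone)
    # spark.worker.timeout: seconds without a heartbeat before the master
    # removes a worker (default 60). Note that raising it only masks,
    # rather than fixes, GC pauses or network drops on the workers.
    SPARK_MASTER_OPTS="-Dspark.worker.timeout=120"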

RE: JMX with Spark

2014-04-25 Thread Ravi Hemnani
Can you share your working metrics.properties? I want remote JMX to be enabled, so I need to use the JMXSink and monitor my Spark master and workers. What are the parameters that have to be defined, such as host and port? Your config would help.
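For reference, a minimal sketch of a metrics.properties that enables the JMX sink, assuming the stock JmxSink class; remote access itself is configured through standard JVM flags rather than metrics.properties, and the port below is an arbitrary example:

    # conf/metrics.properties: attach the JMX sink to every instance
    # (master, worker, driver, executor)
    *.sink.jmx.class=org.apache.spark.metrics.sink.JmxSink

    # conf/spark-env.sh: standard JVM remote-JMX flags for the daemons;
    # authentication/SSL disabled here, so suitable for a trusted network only
    SPARK_DAEMON_JAVA_OPTS="-Dcom.sun.management.jmxremote \
      -Dcom.sun.management.jmxremote.port=8090 \
      -Dcom.sun.management.jmxremote.authenticate=false \
      -Dcom.sun.management.jmxremote.ssl=false"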

Re: How to use FlumeInputDStream in spark cluster?

2014-03-21 Thread Ravi Hemnani
Hey, even I am getting the same error. I am running

    sudo ./run-example org.apache.spark.streaming.examples.FlumeEventCount spark://spark_master_hostname:7077 spark_master_hostname 7781

and getting no events in Spark Streaming. --- Time:
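The core of that example is a single FlumeUtils.createStream call. A hedged sketch of the same logic, assuming 0.9-era Spark Streaming APIs and the spark-streaming-flume module on the classpath; the master URL, host, and port are placeholders supplied as arguments:

    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.flume.FlumeUtils

    object FlumeEventCountSketch {
      def main(args: Array[String]) {
        val Array(master, host, port) = args
        val ssc = new StreamingContext(master, "FlumeEventCount", Seconds(2))
        // The receiver binds host:port on ONE worker, so host must be an
        // address that worker can actually bind, not just any cluster name.
        val stream = FlumeUtils.createStream(ssc, host, port.toInt,
          StorageLevel.MEMORY_ONLY_SER_2)
        stream.count().map(c => "Received " + c + " flume events.").print()
        ssc.start()
        ssc.awaitTermination()
      }
    }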

Re: How to use FlumeInputDStream in spark cluster?

2014-03-21 Thread Ravi Hemnani
On 03/21/2014 06:17 PM, anoldbrain [via Apache Spark User List] wrote: ...the actual address, which in turn causes the 'Fail to bind to ...' error. This comes naturally because the slave that is running the code to bind to address:port has a different IP. So if we run the code on the slave where

Re: How to use FlumeInputDStream in spark cluster?

2014-03-21 Thread Ravi Hemnani
On 03/21/2014 06:17 PM, anoldbrain [via Apache Spark User List] wrote: ...the actual address, which in turn causes the 'Fail to bind to ...' error. This comes naturally because the slave that is running the code to bind to address:port has a different IP. I ran sudo ./run-example
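Following that diagnosis, the upstream Flume agent's Avro sink has to target the address where the receiver actually binds. A minimal sketch of the relevant sink properties, with placeholder agent/channel names and worker hostname:

    # flume.conf on the sending agent
    a1.sinks.k1.type = avro
    # Must be the Spark worker hosting the Flume receiver, i.e. the same
    # host:port passed to FlumeEventCount / FlumeUtils.createStream
    a1.sinks.k1.hostname = spark_worker_hostname
    a1.sinks.k1.port = 7781
    a1.sinks.k1.channel = c1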

Using flume to create stream for spark streaming.

2014-03-10 Thread Ravi Hemnani
Hey, I am using the following Flume flow: Flume agent 1, consisting of a RabbitMQ source, file channel, and Avro sink, sending data to a slave node of the Spark cluster; Flume agent 2, on that slave node of the Spark cluster, consisting of an Avro source and file channel. Now for the sink I tried avro, hdfs,
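A hedged sketch of the two-agent layout described above; agent, channel, and host names are placeholders, and the RabbitMQ source type is left abstract because it is a third-party plugin rather than a built-in Flume source:

    # Agent 1: RabbitMQ source -> file channel -> Avro sink to the Spark slave
    a1.sources = r1
    a1.channels = c1
    a1.sinks = k1
    a1.sources.r1.type = <rabbitmq-plugin-source-class>   # plugin-specific
    a1.sources.r1.channels = c1
    a1.channels.c1.type = file
    a1.sinks.k1.type = avro
    a1.sinks.k1.hostname = spark_slave_hostname
    a1.sinks.k1.port = 4545
    a1.sinks.k1.channel = c1

    # Agent 2 (on the Spark slave): Avro source -> file channel -> Avro sink
    # pointed at the host:port where the Spark Flume receiver listens
    a2.sources = r2
    a2.channels = c2
    a2.sinks = k2
    a2.sources.r2.type = avro
    a2.sources.r2.bind = 0.0.0.0
    a2.sources.r2.port = 4545
    a2.sources.r2.channels = c2
    a2.channels.c2.type = file
    a2.sinks.k2.type = avro
    a2.sinks.k2.hostname = spark_slave_hostname
    a2.sinks.k2.port = 7781
    a2.sinks.k2.channel = c2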