Re: Using data in RDD to specify HDFS directory to write to

2014-11-17 Thread jschindler
Yes, thank you for the suggestion. The error I found, shown below, was in the worker logs. AssociationError [akka.tcp://sparkwor...@cloudera01.local.company.com:7078] - [akka.tcp://sparkexecu...@cloudera01.local.company.com:33329]: Error [Association failed with

Re: Using data in RDD to specify HDFS directory to write to

2014-11-15 Thread jschindler
UPDATE: I have systematically removed and added pieces of the job and have determined that constructing the SparkContext object is what causes it to fail. The last run contained the code below. I keep losing executors, apparently, and I'm not sure why. Some of the
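
A minimal sketch, not the poster's actual job, of the usual way to avoid a second context in a streaming app: build the StreamingContext from a single SparkConf and reuse the SparkContext it already owns; all names here are illustrative.

  import org.apache.spark.SparkConf
  import org.apache.spark.streaming.{Seconds, StreamingContext}

  object StreamingJob {
    def main(args: Array[String]): Unit = {
      // One configuration, one context: the StreamingContext owns the SparkContext.
      val conf = new SparkConf().setAppName("StreamingJob")
      val ssc = new StreamingContext(conf, Seconds(10))

      // Reuse the context the StreamingContext already holds rather than
      // constructing another SparkContext in the same JVM.
      val sc = ssc.sparkContext

      // ... DStream definitions go here ...

      ssc.start()
      ssc.awaitTermination()
    }
  }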

Re: Using data in RDD to specify HDFS directory to write to

2014-11-14 Thread jschindler
I reworked my app using your idea of throwing the data in a map. It looks like it should work, but I'm getting some strange errors and my job gets terminated. I get a WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered
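
That warning usually means the application is asking for more memory or cores than the cluster is offering. A sketch, with purely illustrative values and names, of capping the request in the SparkConf so it fits what the master UI reports:

  import org.apache.spark.SparkConf

  object ResourceCappedConf {
    // Illustrative values only: keep the request within what the standalone
    // master's UI shows as available on the workers.
    val conf = new SparkConf()
      .setAppName("StreamingJob")
      .set("spark.executor.memory", "1g") // no more than the memory offered per worker
      .set("spark.cores.max", "2")        // no more than the cores free in the cluster
  }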

Using data in RDD to specify HDFS directory to write to

2014-11-12 Thread jschindler
I am trying to figure out how to solve a problem. I would like to stream events from Kafka to my Spark Streaming app and write the contents of each RDD out to an HDFS directory. Each event that comes into the app via Kafka will be JSON and have an event field with the name of the
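
A sketch of one way to do this, assuming the receiver-based KafkaUtils.createStream API; the ZooKeeper quorum, group id, topic, HDFS paths, and the JSON field extraction are placeholders, not the poster's code.

  import org.apache.spark.SparkConf
  import org.apache.spark.streaming.{Seconds, StreamingContext}
  import org.apache.spark.streaming.kafka.KafkaUtils

  object KafkaToHdfs {
    // Placeholder extraction of the "event" field; a real JSON library
    // (json4s, Jackson, etc.) would be used in practice.
    private val eventPattern = """"event"\s*:\s*"([^"]+)"""".r
    def eventName(json: String): String =
      eventPattern.findFirstMatchIn(json).map(_.group(1)).getOrElse("unknown")

    def main(args: Array[String]): Unit = {
      val conf = new SparkConf().setAppName("KafkaToHdfs")
      val ssc = new StreamingContext(conf, Seconds(10))

      // Receiver-based Kafka stream; quorum, group id, and topic are placeholders.
      val lines = KafkaUtils
        .createStream(ssc, "zkhost:2181", "hdfs-writer", Map("events" -> 1))
        .map(_._2)

      lines.foreachRDD { rdd =>
        rdd.cache()
        // Find the event names present in this batch, then write each
        // event's records under its own HDFS directory.
        val events = rdd.map(eventName).distinct().collect()
        events.foreach { event =>
          rdd.filter(json => eventName(json) == event)
            .saveAsTextFile(s"hdfs:///events/$event/${System.currentTimeMillis}")
        }
        rdd.unpersist()
      }

      ssc.start()
      ssc.awaitTermination()
    }
  }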

Re: Writing to RabbitMQ

2014-08-19 Thread jschindler
Thanks for the quick and clear response! I now have a better understanding of what is going on regarding the driver and worker nodes, which will help me greatly.

Re: Writing to RabbitMQ

2014-08-18 Thread jschindler
Well, it looks like I can use the .repartition(1) method to stuff everything into one partition, which gets rid of the duplicate messages I send to RabbitMQ, but that seems like a bad idea. Wouldn't that hurt scalability?
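
For reference, a self-contained sketch of what .repartition(1) does here, using made-up data: every record is funneled through a single partition, so one task handles all the publishing, which is why it removes the duplicates but also the parallelism.

  import org.apache.spark.{SparkConf, SparkContext}

  object RepartitionSketch {
    def main(args: Array[String]): Unit = {
      val sc = new SparkContext(new SparkConf().setAppName("RepartitionSketch"))
      val records = sc.parallelize(1 to 100, 8) // 8 partitions of made-up data

      // After repartition(1) a single task sees every record, so a publish
      // done here happens exactly once per batch, at the cost of running serially.
      records.repartition(1).foreachPartition { it =>
        it.foreach(r => println(s"publish $r"))
      }

      sc.stop()
    }
  }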

Re: Writing to RabbitMQ

2014-08-05 Thread jschindler
You are correct in that I am trying to publish inside a foreachRDD loop. I am currently refactoring and will try publishing inside the foreachPartition loop. Below is the code showing the way it is currently written, thanks! object myData { def main(args: Array[String]) { val ssc =
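
A minimal sketch of the foreachPartition pattern being suggested, using the RabbitMQ Java client; the host, queue name, and message type are assumptions, not the poster's setup.

  import com.rabbitmq.client.ConnectionFactory
  import org.apache.spark.streaming.dstream.DStream

  object RabbitPublisher {
    // One connection and channel per partition, opened on the worker that
    // processes the partition, so the connection never has to be serialized
    // and shipped from the driver.
    def publish(stream: DStream[String]): Unit = {
      stream.foreachRDD { rdd =>
        rdd.foreachPartition { records =>
          val factory = new ConnectionFactory()
          factory.setHost("rabbitmq.local") // placeholder host
          val connection = factory.newConnection()
          val channel = connection.createChannel()
          channel.queueDeclare("events", false, false, false, null)

          records.foreach { msg =>
            channel.basicPublish("", "events", null, msg.getBytes("UTF-8"))
          }

          channel.close()
          connection.close()
        }
      }
    }
  }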

Re: Spark Streaming Error Help - ERROR actor.OneForOneStrategy: key not found:

2014-07-03 Thread jschindler
I think I have found my answers, but if anyone has thoughts, please share. After testing for a while, I think the error doesn't have any effect on the process. I think there must be elements left in the window from the last run; otherwise my system is completely whack. Please let me