Thanks for the quick and clear response! I now have a better understanding
of what is going on regarding the driver and worker nodes, which will help
me greatly.
> Looks like I can use the .repartition(1) method to stuff everything
> in one partition, so that gets rid of the duplicate messages I send to
> RabbitMQ, but that seems like a bad idea. Wouldn't that hurt
> scalability?
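For reference, the usual alternative to repartition(1) here is to publish
from each partition with foreachPartition, so each executor opens its own
RabbitMQ connection on the worker rather than shipping one from the driver.
A minimal sketch; the broker host and queue name are assumptions, not taken
from this thread:

import com.rabbitmq.client.ConnectionFactory
import org.apache.spark.rdd.RDD

// Publish each record once, one channel per partition, instead of
// funneling the whole RDD through a single partition.
def publishToRabbit(messages: RDD[String]): Unit = {
  messages.foreachPartition { partition =>
    // The connection is created on the worker, so nothing
    // non-serializable has to be shipped from the driver.
    val factory = new ConnectionFactory()
    factory.setHost("localhost")  // assumed broker host
    val connection = factory.newConnection()
    val channel = connection.createChannel()
    channel.queueDeclare("SQLCalls", false, false, false, null)  // assumed queue name
    partition.foreach { msg =>
      channel.basicPublish("", "SQLCalls", null, msg.getBytes())
    }
    channel.close()
    connection.close()
  }
}

This keeps the writes parallel across workers instead of collapsing
everything onto one node; whether it also removes the duplicate sends
depends on where those were coming from.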
t;Dev")
val SQLCollection = db("SQLCalls")
SQLCollection += MongoDBObject("Event" -> "Page Hit",
"URL" -> URL,
"Avg number of SQL Calls" -> avgNumberSQlCalls,
"Avg Page Load Time&quo
L" -> URL(0).substring(7,
URL(0).length - 1),
"Avg Page Load
Time" -> avg)
val toBuildJSON = Seq(baseMsg, avg.toString, closingBrace)
val byteArray = toBuildJSON.mkString.getBytes()
SQLChannel.basi
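(baseMsg and closingBrace never appear in this thread; presumably they are
the head and tail of a hand-built JSON string, roughly along these lines.
The values below are hypothetical:)

// Hypothetical definitions for the identifiers used above.
val baseMsg = "{\"Avg Page Load Time\": "
val closingBrace = "}"
// toBuildJSON.mkString then yields, e.g., {"Avg Page Load Time": 1.23}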
(tail of the stack trace from the earlier error report:)

> at akka.dispatch.Mailbox.run(Mailbox.scala:219)
> at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
> at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(...)