From: Akhil Das [mailto:ak...@sigmoidanalytics.com]
Sent: Wednesday, September 16, 2015 12:24 PM
To: Samya MAITI
Cc: user@spark.apache.org
Subject: Re: Getting parent RDD
How many RDDs do you have in that stream? If it's a single RDD then you
could do a .foreach and log the message, something like:
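A minimal sketch of that idea, assuming the stream is a DStream[String] named messages (the name and the println-based logging are illustrative, not from the original mail):

```scala
import org.apache.spark.streaming.dstream.DStream

// Sketch: iterate over each RDD in the stream and log every message.
// `foreach` on the RDD runs on the executors, so in a real job you would
// use a serializable logger rather than println.
def logMessages(messages: DStream[String]): Unit = {
  messages.foreachRDD { rdd =>
    rdd.foreach { msg =>
      println(s"Received: $msg")
    }
  }
}
```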
- Sam
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Friday, September 11, 2015 8:05 PM
To: Samya MAITI
Cc: user
Subject: Re: Exception Handling : Spark Streaming
Was your intention that the exception from rdd.saveToCassandra() be caught?
In that case you can place a try / catch around that call.
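A sketch of that pattern, assuming the spark-cassandra-connector is on the classpath; the keyspace and table names here are placeholders, and logging the failure via println stands in for whatever recovery the job actually needs:

```scala
import com.datastax.spark.connector._
import org.apache.spark.streaming.dstream.DStream

// Sketch: wrap the Cassandra write in try / catch so a failed batch
// is logged instead of killing the whole streaming application.
def saveWithHandling(stream: DStream[(String, Int)]): Unit = {
  stream.foreachRDD { rdd =>
    try {
      rdd.saveToCassandra("my_keyspace", "my_table")
    } catch {
      case e: Exception =>
        println(s"Cassandra write failed: ${e.getMessage}")
    }
  }
}
```

Note that the try / catch must go around the action (saveToCassandra), not around lazy transformations, since that is where the write actually executes.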
From: Cody Koeninger [mailto:c...@koeninger.org]
Sent: Thursday, September 10, 2015 1:13 AM
To: Samya MAITI
Cc: user@spark.apache.org
Subject: Re: Spark streaming -> cassandra : Fault Tolerance
It's been a while since I've looked at the cassandra connector, so I can't give
you ... user control?
Regards,
Sam
From: Jem Tucker [mailto:jem.tuc...@gmail.com]
Sent: Wednesday, August 26, 2015 2:26 PM
To: Samya MAITI ; user@spark.apache.org
Subject: Re: Relation between threads and executor core
Hi Samya,
When submitting an application with spark-submit the cores per executor can
be set with the --executor-cores option.
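For example (a sketch: the class, master, executor counts, and jar name below are placeholders):

```shell
# Each executor gets 4 cores, so up to 4 tasks (threads) can run
# concurrently per executor.
spark-submit \
  --class com.example.MyApp \
  --master yarn \
  --executor-cores 4 \
  --num-executors 10 \
  my-app.jar
```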
Thanks TD.
On Wed, Dec 31, 2014 at 7:19 AM, Tathagata Das wrote:
> 1. Of course, a single block / partition has many Kafka messages, possibly
> from different Kafka topics interleaved together. The message count is
> not related to the block count. Any message received within a
> particular block interval goes into that block.
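The block interval TD refers to is controlled by the spark.streaming.blockInterval setting. A sketch of how it relates to partitions per batch (the 200 ms value and app name are illustrative):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Messages received during each 200 ms block interval are grouped into
// one block, and each block becomes one partition of the batch's RDD.
val conf = new SparkConf()
  .setAppName("BlockIntervalExample")
  .set("spark.streaming.blockInterval", "200ms")

// With a 2 s batch interval, each receiver produces roughly
// 2000 / 200 = 10 blocks (partitions) per batch, regardless of how
// many individual Kafka messages land in each block.
val ssc = new StreamingContext(conf, Seconds(2))
```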