As it stands currently, no.
If you're already overriding the dstream, though, it would be pretty
straightforward to change the Kafka parameters used when creating the RDD
for the next batch.
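To make the suggestion concrete, here is a rough sketch of subclassing the direct stream to throttle the next batch. This is an assumption-heavy illustration, not code from the thread: `DirectKafkaInputDStream` is `private[streaming]` in Spark 1.3, so the subclass must live in the `org.apache.spark.streaming.kafka` package, and the `ThrottledDirectStream` name and `throttled` flag are hypothetical.

```scala
// Sketch only: must be compiled inside this package because
// DirectKafkaInputDStream is private[streaming] in Spark 1.3.
package org.apache.spark.streaming.kafka

import kafka.common.TopicAndPartition
import kafka.serializer.StringDecoder
import org.apache.spark.streaming.StreamingContext

// Hypothetical subclass; `ssc`, `kafkaParams`, and `fromOffsets`
// are assumed to be supplied by the caller as usual.
class ThrottledDirectStream(
    ssc: StreamingContext,
    kafkaParams: Map[String, String],
    fromOffsets: Map[TopicAndPartition, Long])
  extends DirectKafkaInputDStream[String, String, StringDecoder, StringDecoder,
    (String, String)](ssc, kafkaParams, fromOffsets,
    mr => (mr.key, mr.message)) {

  // Flip this from your failure-detection code between batches.
  @volatile var throttled = false

  // In 1.3 the per-batch cap is derived from
  // spark.streaming.kafka.maxRatePerPartition; overriding it lets you
  // clamp the next batch to 10 messages per partition when throttled.
  override protected def maxMessagesPerPartition: Option[Long] =
    if (throttled) Some(10L) else super.maxMessagesPerPartition
}
```

The key point is that the override is consulted each time a batch's RDD is created, so flipping the flag takes effect on the next batch without restarting the context.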
On Wed, Aug 26, 2015 at 11:41 PM, Shushant Arora
wrote:
> Can I change this param fetch.message.max.bytes
Can I change the param fetch.message.max.bytes or
spark.streaming.kafka.maxRatePerPartition
at run time, across batches?
Say I detect some failure condition in my system and decide to consume
only 10 messages per partition in the next batch interval; if that
succeeds, I reset the max limit back to unlimited.
See http://kafka.apache.org/documentation.html#consumerconfigs
fetch.message.max.bytes,
in the Kafka params passed to the constructor.
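For reference, a minimal sketch of passing that setting through the Kafka params map when creating the direct stream. The broker address, topic name, and fetch size are illustrative, and `ssc` is assumed to be an existing `StreamingContext`:

```scala
import kafka.serializer.StringDecoder
import org.apache.spark.streaming.kafka.KafkaUtils

// Illustrative values only.
val kafkaParams = Map[String, String](
  "metadata.broker.list"    -> "broker1:9092",            // assumed broker
  "fetch.message.max.bytes" -> (8 * 1024 * 1024).toString // 8 MB max fetch
)

val stream = KafkaUtils.createDirectStream[
  String, String, StringDecoder, StringDecoder](
  ssc, kafkaParams, Set("mytopic"))
```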
On Wed, Aug 26, 2015 at 10:39 AM, Shushant Arora
wrote:
> What's the default buffer size in Spark Streaming 1.3 for Kafka messages?
>
> Say in this run it has to fet