How to change StreamingContext batch duration after loading from checkpoint

2015-12-07 Thread yam
Is there a way to change the streaming context batch interval after reloading from a checkpoint? I would like to be able to change the batch interval after restarting the application, without losing the checkpoint of course. Thanks!
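For context, the batch duration is serialized into the checkpoint itself, so on recovery the duration stored in the checkpoint takes precedence over whatever the factory function passes to the `StreamingContext` constructor. A minimal sketch illustrating this (the app name and checkpoint path are placeholders, not from the original message):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object CheckpointRecoverySketch {
  // Hypothetical checkpoint location; substitute your own.
  val checkpointDir = "hdfs:///checkpoints/my-app"

  def createContext(): StreamingContext = {
    val conf = new SparkConf().setAppName("my-app")
    // This batch duration is only honored on a cold start, i.e. when no
    // checkpoint exists at checkpointDir. On recovery, the duration that was
    // serialized into the checkpoint is used instead.
    val ssc = new StreamingContext(conf, Seconds(10))
    ssc.checkpoint(checkpointDir)
    ssc
  }

  def main(args: Array[String]): Unit = {
    // getOrCreate either rebuilds the context from the checkpoint (keeping the
    // old batch interval) or calls the factory for a fresh start.
    val ssc = StreamingContext.getOrCreate(checkpointDir, () => createContext())
    ssc.start()
    ssc.awaitTermination()
  }
}
```

In practice, changing the interval therefore means deleting or abandoning the checkpoint directory (and, for Kafka direct streams, persisting offsets yourself so they survive the reset).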

Implementing fail-fast upon critical spark streaming tasks errors

2015-12-06 Thread yam
When a Spark Streaming task fails (after exceeding spark.task.maxFailures), the related batch job is considered failed and the driver continues to the next batch in the pipeline after advancing the checkpoint to the next positions (the new offsets when using Kafka direct streaming). I'm
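One lever mentioned above is spark.task.maxFailures, which controls how many times a task attempt is retried before the stage (and hence the batch job) is declared failed. A hedged sketch of tightening it so the first task failure surfaces immediately (the app name is a placeholder):

```scala
import org.apache.spark.SparkConf

// Lowering spark.task.maxFailures to 1 makes any task failure fail its stage
// on the first attempt, rather than after the default number of retries.
// This only makes the failure visible sooner; by itself it does not stop the
// driver from moving on to the next batch.
val conf = new SparkConf()
  .setAppName("my-app")
  .set("spark.task.maxFailures", "1")
```

Stopping the driver on a failed batch still has to be done in application code, since Spark Streaming's scheduler does not halt the pipeline when one output operation fails.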

fail-fast or retry failed spark streaming jobs

2015-12-06 Thread yam
There are cases where Spark Streaming job tasks fail (one, several, or all of them) and there is not much sense in progressing to the next job while discarding the failed one. For example, when failing to connect to a remote target DB, I would like to either fail fast and relaunch the application from
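One common fail-fast pattern for this situation is to let the driver exit with a nonzero status on any uncaught error and rely on an external supervisor (for example `spark-submit --supervise` in standalone cluster mode, or the cluster manager's restart policy) to relaunch the application, which then recovers from the checkpoint. A minimal sketch, assuming a `StreamingContext` named `ssc` already configured with checkpointing:

```scala
import org.apache.spark.streaming.StreamingContext

// Hypothetical fail-fast driver loop: any error that propagates out of
// awaitTermination (e.g. a failed output operation) stops the context
// non-gracefully and exits nonzero so a supervisor can restart the app.
def runFailFast(ssc: StreamingContext): Unit = {
  try {
    ssc.start()
    ssc.awaitTermination()
  } catch {
    case e: Throwable =>
      // Non-graceful stop: do not wait for in-flight batches to finish.
      ssc.stop(stopSparkContext = true, stopGracefully = false)
      sys.exit(1) // nonzero exit signals the supervisor to relaunch
  }
}
```

Note the caveat from the previous thread still applies: with checkpoint-based recovery the checkpoint may already have advanced past the failed batch, so exactly reprocessing it typically requires managing offsets (e.g. Kafka offsets) in the application rather than relying on the checkpoint alone.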