Executors in Spark Streaming 1.3 fetch messages from Kafka in batches. What happens when an executor takes a long time to process a fetched batch?

Say the processing happens in:


directKafkaStream.foreachRDD(new Function<JavaRDD<byte[][]>, Void>() {
    @Override
    public Void call(JavaRDD<byte[][]> v1) throws Exception {
        v1.foreachPartition(new VoidFunction<Iterator<byte[][]>>() {
            @Override
            public void call(Iterator<byte[][]> t) throws Exception {
                // long-running task
            }
        });
        return null;
    }
});

Will this long-running task drop the executor's connection to the Kafka
brokers? And how should that be handled? I am getting a connection timeout
in my code.
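If it helps, one thing I am considering is raising the Kafka consumer's socket timeout via the kafkaParams map passed to the direct stream, so a slow batch does not surface as a timeout on the next fetch. A minimal sketch of building such a map (the broker address and the 60000 ms value are placeholders, not my actual settings):

```java
import java.util.HashMap;
import java.util.Map;

public class KafkaParamsSketch {
    public static Map<String, String> build() {
        Map<String, String> kafkaParams = new HashMap<String, String>();
        // Placeholder broker list for KafkaUtils.createDirectStream.
        kafkaParams.put("metadata.broker.list", "broker1:9092");
        // Raise the consumer socket timeout (Kafka 0.8 default is 30000 ms)
        // so a long-running batch is less likely to hit a connection timeout.
        kafkaParams.put("socket.timeout.ms", "60000");
        return kafkaParams;
    }

    public static void main(String[] args) {
        System.out.println(build().get("socket.timeout.ms"));
    }
}
```

Is this the right knob to tune, or is there a better way to keep the executor's broker connection alive during long partition processing?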
