[ https://issues.apache.org/jira/browse/SPARK-19275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16015591#comment-16015591 ]
Aaquib Khwaja commented on SPARK-19275:
---------------------------------------

Hi [~dmitry_iii], I also ran into a similar issue. I've set the value of 'spark.streaming.kafka.consumer.poll.ms' to 60000, but I'm still running into the same failure. Here is the stack trace and other details:
http://stackoverflow.com/questions/44045323/sparkstreamingkafka-failed-to-get-records-after-polling-for-60000

Also, below are some relevant configs (a sketch showing where each of these is set is appended after the quoted issue below):

batch.interval = 60s
spark.streaming.kafka.consumer.poll.ms = 60000
session.timeout.ms = 60000 (default: 30000)
heartbeat.interval.ms = 6000 (default: 3000)
request.timeout.ms = 90000 (default: 40000)

Any help would be great! Thanks.

> Spark Streaming, Kafka receiver, "Failed to get records for ... after polling
> for 512"
> --------------------------------------------------------------------------------------
>
>                 Key: SPARK-19275
>                 URL: https://issues.apache.org/jira/browse/SPARK-19275
>             Project: Spark
>          Issue Type: Bug
>          Components: DStreams
>    Affects Versions: 2.0.0
>        Environment: Apache Spark 2.0.0, Kafka 0.10 for Scala 2.11
>            Reporter: Dmitry Ochnev
>
> We have a Spark Streaming application reading records from Kafka 0.10.
> Some tasks fail with the following error:
> "java.lang.AssertionError: assertion failed: Failed to get records for (...) after polling for 512"
> The first attempt fails and the second attempt (the retry) completes successfully; this is the pattern we see for many tasks in our logs. These failures and retries consume resources.
> A similar case, with a stack trace, is described here:
> https://www.mail-archive.com/user@spark.apache.org/msg56564.html
> https://gist.github.com/SrikanthTati/c2e95c4ac689cd49aab817e24ec42767
> Here is the line from the stack trace where the error is raised:
> org.apache.spark.streaming.kafka010.CachedKafkaConsumer.get(CachedKafkaConsumer.scala:74)
> We tried several values for "spark.streaming.kafka.consumer.poll.ms" (2, 5, 10, 30 and 60 seconds); the error appeared in all cases except the last one. Moreover, increasing the threshold increased the total Spark stage duration.
> In other words, increasing "spark.streaming.kafka.consumer.poll.ms" led to fewer task failures, but at the cost of longer stages. So it is bad for performance when processing data streams.
> We suspect there is a bug in CachedKafkaConsumer (and/or other related classes) which inhibits the reading process.
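For reference, a minimal sketch (not from this ticket) of where each of the settings discussed above would be applied in a Spark 2.x + Kafka 0.10 direct-stream job. The broker address, group id, topic and application name are placeholders; the timeout values are the ones from the comment above. Note the split: the poll timeout is a Spark-side setting on SparkConf, while session.timeout.ms, heartbeat.interval.ms and request.timeout.ms are Kafka consumer settings passed through kafkaParams.

import org.apache.kafka.clients.consumer.ConsumerConfig
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent

object PollTimeoutExample {
  def main(args: Array[String]): Unit = {
    // Spark-side setting: how long CachedKafkaConsumer waits in poll()
    // before failing with "Failed to get records ... after polling for ...".
    val conf = new SparkConf()
      .setAppName("poll-timeout-example") // placeholder name
      .set("spark.streaming.kafka.consumer.poll.ms", "60000")

    // batch.interval = 60s, as in the comment above.
    val ssc = new StreamingContext(conf, Seconds(60))

    // Kafka consumer settings from the comment; broker, group and topic are placeholders.
    val kafkaParams = Map[String, Object](
      ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG -> "broker1:9092",
      ConsumerConfig.GROUP_ID_CONFIG -> "example-group",
      ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG -> classOf[StringDeserializer],
      ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG -> classOf[StringDeserializer],
      ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG -> "60000",   // default: 30000
      ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG -> "6000", // default: 3000
      ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG -> "90000"    // default: 40000
    )

    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      PreferConsistent,
      Subscribe[String, String](Seq("example-topic"), kafkaParams)
    )

    stream.foreachRDD(rdd => println(s"batch size: ${rdd.count()}"))

    ssc.start()
    ssc.awaitTermination()
  }
}

One constraint worth keeping in mind when tuning these: the 0.10 consumer requires request.timeout.ms to be larger than session.timeout.ms, which the values above satisfy (90000 > 60000).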