I've got a Kafka topic on which lots of data has built up, and a streaming
app with a rate limit.
During maintenance, for example, records will build up on Kafka and we'll
burn them off on restart.  The rate limit keeps the job stable while
burning off the backlog.
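
For context, the rate limit is just spark.streaming.receiver.maxRate on a
receiver-based Kafka stream.  A stripped-down sketch of the setup (topic,
ZooKeeper quorum, batch interval, and rate are placeholders, not our real
values):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka.KafkaUtils

    object BacklogBurnoff {
      def main(args: Array[String]): Unit = {
        // Cap each receiver so the first batches after a restart don't
        // overwhelm the job while the backlog drains.
        val conf = new SparkConf()
          .setAppName("backlog-burnoff")
          .set("spark.streaming.receiver.maxRate", "10000")

        val ssc = new StreamingContext(conf, Seconds(10))

        // Receiver-based Kafka stream -- the path that goes through
        // KafkaReceiver and RateLimiter.waitToPush.
        val stream = KafkaUtils.createStream(
          ssc, "zk1:2181", "burnoff-group", Map("events" -> 1))

        stream.count().print()

        ssc.start()
        ssc.awaitTermination()
      }
    }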

Sometimes, on the first or second interval that gets data after a restart,
the receiver dies with the error below.  At the moment it's happening every
time we try to start the application.  Any ideas?

15/04/16 10:41:50 ERROR KafkaReceiver: Error handling message; exiting
java.lang.StackOverflowError
        at org.apache.spark.streaming.receiver.RateLimiter.waitToPush(RateLimiter.scala:66)
        at org.apache.spark.streaming.receiver.RateLimiter.waitToPush(RateLimiter.scala:66)
        at org.apache.spark.streaming.receiver.RateLimiter.waitToPush(RateLimiter.scala:66)
        ... thousands of lines like that


Side note: any idea why the Scala compiler isn't optimizing waitToPush into
a loop?  It looks like tail recursion, no?
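
In case it's useful as a point of comparison, a toy sketch (made-up names,
not the real RateLimiter source): scalac only turns a self-recursive tail
call into a loop when the method can't be overridden, and @tailrec will
flag the overridable case.

    import scala.annotation.tailrec

    class Limiter {
      // Hypothetical stand-in for waitToPush: the self-call is in tail
      // position, but the method is neither private nor final, so scalac
      // leaves it as a real call and the stack grows with each retry.
      def spin(n: Long): Unit =
        if (n > 0) spin(n - 1)

      // Same shape, but final: scalac compiles the self-call into a loop,
      // and @tailrec makes that guarantee explicit at compile time.
      @tailrec
      final def spinLoop(n: Long): Unit =
        if (n > 0) spinLoop(n - 1)
    }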


Thanks-

Jeff Nadler
