Yeah, this really shouldn't be recursive. It can't be tail-call optimized because it isn't a final or private method, so the compiler has to leave the self-call as a real recursive call. You're welcome to try a PR that rewrites it as a loop.
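To illustrate (this is not Spark's actual code, just a minimal sketch of the rule): Scala only eliminates self-recursive tail calls on methods that cannot be overridden, such as final or private methods, or methods on an object. Annotating with @tailrec makes the compiler enforce this, so the call below runs in constant stack space at a depth that would otherwise throw StackOverflowError:

```scala
import scala.annotation.tailrec

object TailRecDemo {
  // Methods on an object cannot be overridden, so the compiler can
  // (and, with @tailrec, must) compile this self-call into a loop.
  // A public, overridable method like RateLimiter.waitToPush does not
  // qualify, which is why the stack trace shows thousands of frames.
  @tailrec
  def countDown(n: Long): Long =
    if (n <= 0) n else countDown(n - 1)

  def main(args: Array[String]): Unit = {
    // Deep enough that an unoptimized recursive call would blow the
    // stack; the optimized version completes fine.
    println(countDown(10000000L))
  }
}
```

If @tailrec were placed on a non-final, non-private method of a class, compilation would fail with an error saying the method is neither private nor final and so can be overridden.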
On Thu, Apr 16, 2015 at 7:31 PM, Jeff Nadler <jnad...@srcginc.com> wrote:

> I've got a Kafka topic on which lots of data has built up, and a streaming
> app with a rate limit.
> During maintenance, for example, records will build up on Kafka and we'll
> burn them off on restart. The rate limit keeps the job stable while
> burning off the backlog.
>
> Sometimes on the first or second interval that gets data after a restart,
> the receiver dies with this error. At the moment, it's happening every
> time we try to start the application. Any ideas?
>
> 15/04/16 10:41:50 ERROR KafkaReceiver: Error handling message; exiting
>
> java.lang.StackOverflowError
>     at org.apache.spark.streaming.receiver.RateLimiter.waitToPush(RateLimiter.scala:66)
>     at org.apache.spark.streaming.receiver.RateLimiter.waitToPush(RateLimiter.scala:66)
>     at org.apache.spark.streaming.receiver.RateLimiter.waitToPush(RateLimiter.scala:66)
>     ...thousands of lines like that
>
> Side note, any idea why the Scala compiler isn't optimizing waitToPush
> into a loop? Looks like tail recursion, no?
>
> Thanks,
> Jeff Nadler