Hi all,
We're running Spark 1.5.0 on EMR 4.1.0 in AWS and consuming from Kinesis.
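For context, here is roughly how a stream like ours gets consumed, using the standard KinesisUtils receiver API from spark-streaming-kinesis-asl 1.5.0 (the app/stream names and region below are placeholders, not our real config):

    import org.apache.spark.SparkConf
    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kinesis.KinesisUtils
    import com.amazonaws.services.kinesis.clientlibrary.lib.worker.InitialPositionInStream

    object KinesisConsumerSketch {
      def main(args: Array[String]): Unit = {
        val ssc = new StreamingContext(
          new SparkConf().setAppName("kinesis-consumer"), Seconds(10))

        // createStream starts one KCL-based receiver; its checkpoints go to a
        // DynamoDB table named after the KCL application name.
        val stream = KinesisUtils.createStream(
          ssc,
          "my-kcl-app",                               // KCL application name (placeholder)
          "my-stream",                                // Kinesis stream name (placeholder)
          "https://kinesis.us-east-1.amazonaws.com",  // regional endpoint
          "us-east-1",                                // region
          InitialPositionInStream.LATEST,             // where to start with no prior checkpoint
          Seconds(10),                                // KCL checkpoint interval
          StorageLevel.MEMORY_AND_DISK_2)             // replicate received blocks

        stream.map(bytes => new String(bytes, "UTF-8")).print()
        ssc.start()
        ssc.awaitTermination()
      }
    }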

We saw the following exception today - it killed the Spark "step":

org.apache.spark.SparkException: Could not read until the end sequence
number of the range

We guessed it was because our Kinesis stream didn't have enough shards and
we were being throttled. We bumped the shard count and haven't seen the
problem again over the past several hours, but I am curious: does this
sound like the actual root cause? Was increasing the shard count the
appropriate fix?
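For anyone wanting to sanity-check their own shard count, here's a minimal
sketch with the AWS Java SDK (stream name is a placeholder; describeStream
paginates, so this is only suitable as-is for small streams):

    import com.amazonaws.services.kinesis.AmazonKinesisClient
    import scala.collection.JavaConverters._

    object OpenShardCount {
      def main(args: Array[String]): Unit = {
        val kinesis = new AmazonKinesisClient()  // default credential provider chain
        val shards = kinesis.describeStream("my-stream")  // placeholder stream name
          .getStreamDescription.getShards.asScala
        // A shard is still open (accepting writes) if its sequence number
        // range has no ending sequence number.
        val open = shards.count(_.getSequenceNumberRange.getEndingSequenceNumber == null)
        println(s"Open shards: $open of ${shards.size} total")
      }
    }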

I'd be curious to hear whether anyone else has run into this issue and
knows exactly what the underlying problem is.

Thanks,
Alan
