[ https://issues.apache.org/jira/browse/SPARK-19304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15848042#comment-15848042 ]
Gaurav Shah commented on SPARK-19304:
-------------------------------------

[~srowen] I tried working through the code, but I'm unsure how to keep the code clean in its current state. `KinesisBackedBlockRDD.getPartitions` returns one partition per block, and with one partition per block we cannot batch the `getRecords` calls, since each block generally maps to a single sequence number range. I tried modifying `KinesisBackedBlockRDD.getPartitions` to return one partition for the valid block IDs and one partition for the invalid block IDs. That makes `KinesisBackedBlockRDDPartition` look odd, since it now accepts an array of `blockIds` while inheriting from `BlockRDDPartition`, which assumes one block per partition. I tried this path and it works; it also brought the recovery time down to the same as the normal batch processing time. Any direction on where I can improve this piece of code? I can open a pull request to show the differences. (A sketch of the partition-grouping idea follows the quoted issue below.)

> Kinesis checkpoint recovery is 10x slow
> ---------------------------------------
>
>                 Key: SPARK-19304
>                 URL: https://issues.apache.org/jira/browse/SPARK-19304
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 2.0.0
>        Environment: using S3 for checkpoints; 1 executor with 19g memory & 3 cores
>            Reporter: Gaurav Shah
>              Labels: kinesis
>
> The application initially runs fine, running batches of 1 hour with an average processing time of less than 30 minutes. Now suppose the application crashes and we try to restart from the checkpoint. Processing then takes forever and does not move forward. We tested the same thing at a batch interval of 1 minute: processing runs fine, taking 1.2 minutes per batch, but when we recover from the checkpoint each batch takes about 15 minutes. After recovery the batches again process at normal speed.
> I suspect the `KinesisBackedBlockRDD` used for recovery is causing the slowdown.
> Stack Overflow post with more details:
> http://stackoverflow.com/questions/38390567/spark-streaming-checkpoint-recovery-is-very-very-slow
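For reference, here is a minimal sketch of the grouping idea in Scala. The real implementation lives in `KinesisBackedBlockRDD` in the spark-streaming-kinesis-asl module; the names `GroupedBlockRDDPartition` and `groupedPartitions` below are hypothetical, and the sketch only shows how the valid/invalid block IDs could be split into two partitions, not how `compute` would batch the `getRecords` calls.

```scala
import org.apache.spark.Partition
import org.apache.spark.storage.BlockId

// Hypothetical partition type: carries an array of block IDs instead of a
// single one, so a recovery task can batch many sequence number ranges into
// fewer getRecords calls. (The real KinesisBackedBlockRDDPartition extends
// BlockRDDPartition and holds exactly one blockId.)
class GroupedBlockRDDPartition(
    val index: Int,
    val blockIds: Array[BlockId],
    val isBlockIdValid: Boolean) extends Partition

object GroupedPartitionsSketch {
  // Split the blocks into those still present in the BlockManager (valid)
  // and those that must be re-read from Kinesis (invalid), and emit at most
  // one partition per group instead of one partition per block.
  def groupedPartitions(
      blockIds: Array[BlockId],
      isBlockIdValid: Array[Boolean]): Array[Partition] = {
    val (valid, invalid) =
      blockIds.zip(isBlockIdValid).partition { case (_, v) => v }
    Seq((valid.map(_._1), true), (invalid.map(_._1), false))
      .filter { case (ids, _) => ids.nonEmpty }
      .zipWithIndex
      .map { case ((ids, v), i) =>
        new GroupedBlockRDDPartition(i, ids, v): Partition
      }
      .toArray
  }
}
```

In `compute`, the partition for valid blocks could still serve each block from the BlockManager individually, while the partition for invalid blocks could collect its sequence number ranges and fetch them with far fewer `getRecords` round trips, which is where the recovery-time improvement described above would come from.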