Tobias,

Thanks for your help. In my case, the batch interval is 1 minute, but it
takes my program more than 1 minute to process each minute's worth of
data. I am not sure whether the problem is caused by unprocessed data
piling up. Do you have any suggestions on how to check for this and fix
it? Thanks!
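One way I was thinking of checking this is a StreamingListener that logs
the scheduling delay and processing time of each batch (a rough sketch,
assuming a Scala app; the class name and log format are just placeholders
of mine):

import org.apache.spark.streaming.scheduler.{StreamingListener, StreamingListenerBatchCompleted}

// If processingDelay is regularly above 60000 ms, or schedulingDelay
// keeps growing batch after batch, data is arriving faster than it is
// being processed.
class BatchTimingListener extends StreamingListener {
  override def onBatchCompleted(completed: StreamingListenerBatchCompleted) {
    val info = completed.batchInfo
    println("batch " + info.batchTime +
      ": schedulingDelay=" + info.schedulingDelay.getOrElse(-1L) + " ms" +
      ", processingTime=" + info.processingDelay.getOrElse(-1L) + " ms")
  }
}

// registered on the StreamingContext before start(), e.g.:
// ssc.addStreamingListener(new BatchTimingListener)

Does that look like a reasonable way to confirm it?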

Bill


On Sun, Jun 29, 2014 at 7:18 PM, Tobias Pfeiffer <t...@preferred.jp> wrote:

> Bill,
>
> were you able to process all the data in time, or did some unprocessed
> data pile up? When I saw this once, the reason seemed to be that more
> data had been received than would fit in memory while waiting for
> processing, so old data was deleted. By the time that data was due to
> be processed, it no longer existed. Is that a possible reason in your
> case?
>
> Tobias
>
> On Sat, Jun 28, 2014 at 5:59 AM, Bill Jay <bill.jaypeter...@gmail.com>
> wrote:
> > Hi,
> >
> > I am running a Spark Streaming job with a 1-minute batch interval. It
> > ran for around 84 minutes and was then killed with the following
> > exception:
> >
> > java.lang.Exception: Could not compute split, block input-0-1403893740400
> > not found
> >
> >
> > Before it was killed, it was able to correctly generate output for each
> > batch.
> >
> > Any help on this will be greatly appreciated.
> >
> > Bill
> >
>
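P.S. If the cause is what Tobias describes above, i.e. received data
being dropped from memory before it is processed, one change I could try
is an explicit storage level that spills received blocks to disk (a
sketch only; socketTextStream and the host/port stand in for whatever
input source the job actually uses):

import org.apache.spark.storage.StorageLevel

// If the stream currently uses a memory-only level, MEMORY_AND_DISK_SER_2
// lets received blocks spill to disk instead of being dropped when
// executor memory fills up between batches.
val lines = ssc.socketTextStream("host", 9999,
  StorageLevel.MEMORY_AND_DISK_SER_2)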
