To work around an out-of-space issue in a Direct Kafka Streaming
application we create topics with a low retention policy (retention.ms=30),
which works fine from the Kafka perspective. However, this results in an
OffsetOutOfRangeException in the Spark job. Is there any configuration to
handle this and not have my Spark job crash? I have no option of increasing
the Kafka retention period.

I tried to have the DStream returned by createDirectStream() wrapped in a
Try construct, but since the exception happens in the executor, the Try
construct didn't take effect.
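For reference, here is roughly the shape of what I tried (a sketch using the spark-streaming-kafka-0-10 API; the broker address, topic, and group id are placeholders, and ssc is an existing StreamingContext). Try only guards the driver-side stream construction, not the executor-side fetch where the exception is actually thrown:

```scala
import scala.util.Try
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "broker:9092", // placeholder address
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id" -> "my-group"              // placeholder group id
)

// Try only catches exceptions thrown here, on the driver, while the
// DStream is being set up. The OffsetOutOfRangeException is raised later,
// inside executor tasks during the fetch, so it never reaches this Try.
val maybeStream = Try {
  KafkaUtils.createDirectStream[String, String](
    ssc,
    PreferConsistent,
    Subscribe[String, String](Seq("my-topic"), kafkaParams)) // placeholder topic
}
```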
If you have a reproduction you should open a JIRA. It would be great if
there were a fix. I'm just saying that, as far as I know, a similar issue
does not exist in Structured Streaming.
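For what it's worth, the Structured Streaming Kafka source exposes a failOnDataLoss option that covers exactly this case: set to false, the query logs a warning instead of failing when requested offsets have been aged out. A minimal sketch (broker address and topic name are placeholders):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

// failOnDataLoss=false tells the Kafka source to warn and continue when
// the requested offsets are no longer retained, rather than fail the query.
val df = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker:9092") // placeholder address
  .option("subscribe", "my-topic")                  // placeholder topic
  .option("failOnDataLoss", "false")
  .load()
```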
On Fri, Mar 10, 2017 at 7:46 AM, Justin Miller <
justin.mil...@protectwise.com> wrote:
> Hi Michael,
>
> I'm experiencing a similar issue […] retention period.
>
> I tried to have the DStream returned by createDirectStream() wrapped in a
> Try construct, but since the exception happens in the executor, the Try
> construct didn't take effect. Do you have any ideas of how to handle this?
>
> --
> View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/How-to-gracefully-handle-Kafka-OffsetOutOfRangeException-tp26534.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
@n3.nabble.com> wrote:
> Did you find out how?
>
> >> > […] and not kill the job. I want to keep ignoring these exceptions, as some
> >> > other partitions are fine and I am okay with data loss.
> >> >
> >> > Is there any way to handle this and not have my spark job crash? I have
> >> > no option of increasing the kafka retention period.
> >> >
> >> > I tried to have the DStream returned by createDirectStream() wrapped in
> >> > a Try construct, but since the exception happens in the executor, the
> >> > Try construct didn't take effect. Do you have any ideas of how to handle
> >> > this?
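One workaround that may fit the "okay with data loss" case (a sketch, not something the thread confirms; broker address, topic, and group id are placeholders): before creating the stream, clamp the stored offsets to whatever Kafka still retains, so the job starts from valid positions and the aged-off data is consciously skipped.

```scala
import java.util.Properties
import scala.collection.JavaConverters._
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.TopicPartition

// Placeholder settings; adjust for your cluster.
val props = new Properties()
props.put("bootstrap.servers", "broker:9092")
props.put("group.id", "my-group")
props.put("key.deserializer",
  "org.apache.kafka.common.serialization.StringDeserializer")
props.put("value.deserializer",
  "org.apache.kafka.common.serialization.StringDeserializer")

val consumer = new KafkaConsumer[String, String](props)
val partitions = consumer.partitionsFor("my-topic").asScala
  .map(p => new TopicPartition(p.topic, p.partition))

// Earliest offsets Kafka still has after retention has kicked in.
val earliest = consumer.beginningOffsets(partitions.asJava).asScala

// Clamp each committed offset to the earliest retained one, accepting the
// data loss on partitions whose data has already been aged out.
val fromOffsets: Map[TopicPartition, Long] = partitions.map { tp =>
  val stored = Option(consumer.committed(tp)).map(_.offset).getOrElse(0L)
  tp -> math.max(stored, earliest(tp))
}.toMap
consumer.close()
```

The resulting fromOffsets map can then be passed to ConsumerStrategies.Assign (or Subscribe with explicit offsets) when building the direct stream. Note this only protects startup; data aged off mid-run would still raise the exception.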
> >
> >
> > --
> > View this message in context:
> > http://apache-spark-user-list.1001560.n3.nabble.com/How-to-gracefully-handle-Kafka-OffsetOutOfRangeException-tp26534.html
> > Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
> […]lly, when the data is aged off, I get the
> OffsetOutOfRangeException from Kafka, as we would expect. As we work
> towards more efficient processing of that topic, or get more resources, I'd
> like to be able to log the error and continue the application without
> failing. Is there a place where I can catch that error before it ge[…]
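One pattern that may help with "log and continue" (a sketch, assuming the executor-side failure surfaces on the driver as a SparkException from the batch's output action; stream, process(), and log are placeholders): catch around the action inside foreachRDD and let the next batch proceed. The failed batch's data is skipped, so this is only acceptable when data loss is tolerable.

```scala
import org.apache.spark.SparkException

stream.foreachRDD { rdd =>
  try {
    rdd.foreachPartition { records =>
      records.foreach(record => process(record)) // process() is a placeholder
    }
  } catch {
    // The executor-side OffsetOutOfRangeException reaches the driver
    // wrapped in a SparkException once the task has exhausted its retries;
    // catching it here keeps the StreamingContext alive for later batches.
    case e: SparkException =>
      log.warn(s"Skipping batch after Kafka offset error: ${e.getMessage}")
  }
}
```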