Hi,

It is challenging to make a recommendation without further details. I am
guessing you are trying to build a fault-tolerant Spark application (Spark
Structured Streaming) that consumes messages from Solace?
To address the *NullPointerException* in the context of the information
provided, review the part of the code where the exception is thrown:
identifying which object or method call is resolving to *null*, together
with checking the logs, should help the debugging process.
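As a general point on the lost-batches problem you describe: the usual rule for a replayable consumer is to persist each message durably (WAL) *before* sending the ack, so that anything in flight at restart can be rebuilt from the log. Below is a minimal, plain-Python sketch of that ordering, with no Spark or Solace dependencies; all names (WriteAheadLog, consume, etc.) are hypothetical stand-ins, not your actual receiver API. It also guards against writing a null record, the kind of condition that would surface as an NPE on the JVM side.

```python
class WriteAheadLog:
    """Minimal in-memory stand-in for a durable write-ahead log."""

    def __init__(self):
        self.entries = []

    def append(self, msg):
        # Guard against the NPE-style failure: never log a null record.
        if msg is None:
            raise ValueError("refusing to write a null message to the WAL")
        self.entries.append(msg)

    def replay(self):
        # After a restart, unprocessed batches are rebuilt from here.
        return list(self.entries)


def consume(messages, wal, ack):
    """Persist each message to the WAL first, then acknowledge it.

    If the process dies between append() and ack(), the broker will
    redeliver the message; if it dies after ack(), the WAL still has it.
    Either way nothing is lost.
    """
    for msg in messages:
        wal.append(msg)   # durable write first
        ack(msg)          # ack only after the write succeeded


wal = WriteAheadLog()
acked = []
consume(["m1", "m2", "m3"], wal, acked.append)
recovered = wal.replay()
```

In your case the key change would be moving the store()/ack pair so that the ack is only issued once the record is known to be in the WAL, not before.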

HTH

Mich Talebzadeh,
Dad | Technologist | Solutions Architect | Engineer
London
United Kingdom


   view my LinkedIn profile
<https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>


 https://en.everybodywiki.com/Mich_Talebzadeh



*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Sat, 10 Feb 2024 at 05:29, nayan sharma <nayansharm...@gmail.com> wrote:

> Hi Users,
>
> I am trying to build fault tolerant spark solace consumer.
>
> Issue :- we have to take restart of the job due to multiple issue load
> average is one of them. At that time whatever spark is processing or
> batches in the queue is lost. We can't replay it because we already had
> send ack while calling store().
>
> Solution:- I have tried implementing WAL and checkpointing in the
> solution. Job is able to identify the lost batches, records are not being
> written in the log file but throwing NPE.
>
> We are creating sparkcontext using sc.getorcreate()
>
>
> Thanks,
> Nayan
>
