Yes, I agree that the blacklisting structure can be put into the user-defined
state, but the state would still remain open for a long time, right? Am I
misunderstanding something?
I like the idea of keeping the blacklist in a "Broadcast" variable, but I
can't figure out how to use the "Broadcast" variable in the
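(Side note for readers of the archive: a broadcast variable can be read inside any transformation closure on the executors. A minimal Scala sketch, run locally; the blacklist contents and names here are illustrative, not from this thread:)

```scala
import org.apache.spark.sql.SparkSession

object BlacklistSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("blacklist-sketch")
      .master("local[*]")
      .getOrCreate()
    val sc = spark.sparkContext

    // Hypothetical blacklist; in practice this might be loaded from an
    // external store on the driver and re-broadcast periodically.
    val blacklist = sc.broadcast(Set("bad-user-1", "bad-user-2"))

    val events = sc.parallelize(Seq("bad-user-1", "good-user"))
    // The broadcast value is referenced inside the closure; Spark ships
    // it to each executor once instead of with every task.
    val kept = events.filter(id => !blacklist.value.contains(id))
    kept.collect().foreach(println)

    spark.stop()
  }
}
```

Note the blacklist is read-only on the executors; updating it requires creating and broadcasting a new value from the driver.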
The general exceptions here mean that components within the Spark cluster
can't communicate. The most common cause is a failure of the processes that
are supposed to be communicating. I generally see this when one of the
processes goes into a GC storm or is shut down because of an
Russell, I increased the RPC timeout to 240 seconds, but I am still getting
this issue once in a while. After it happens, my Spark streaming job gets
stuck and does not process any requests, so I have to restart it every time.
Any suggestions, please?
Thanks
Amit
On Wed, Nov 18, 2020 at 12:05 PM Amit
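(For readers hitting the same timeouts: these values are usually raised at submit time. A sketch with example values only; the 240s matches what was tried above, and the jar name is a placeholder:)

```shell
# Raise the generic network timeout and the RPC ask timeout.
# spark.executor.heartbeatInterval must stay well below
# spark.network.timeout.
spark-submit \
  --conf spark.network.timeout=240s \
  --conf spark.rpc.askTimeout=240s \
  --conf spark.executor.heartbeatInterval=60s \
  your-streaming-job.jar
```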
Happy that saved some time for you :)
We've invested quite a lot of effort into streaming in the latest releases
and hope there will be fewer and fewer headaches like this.
On Thu, Nov 19, 2020 at 5:55 PM Eric Beabes
wrote:
> THANK YOU SO MUCH! Will try it out & revert.
>
> On Thu, Nov 19, 2020 at 8:18
Well, if the system didn't change, then the data must be different. The
exact exception probably won't be helpful, since it only tells us the last
allocation that failed. My guess is that your ingestion changed and there is
now either slightly more data than before, or it is skewed differently.
On Fri, Nov 20, 2020, 8:25 AM Amit Sharma wrote:
> please help.
>
>
> Thanks
> Amit
>
> On Mon, Nov 9, 2020 at 4:18 PM Amit Sharma wrote:
>
>> Please find below the exact exception
>>
>> Exception in thread "streaming-job-executor-3"
>> java.lang.OutOfMemoryError: Java heap space
>> at
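(Another side note for the archive: "streaming-job-executor" threads run in the driver process, so for an OutOfMemoryError on such a thread, raising driver memory is the usual first step. Values below are examples only, and the jar name is a placeholder:)

```shell
# Give the driver (and, if needed, the executors) more heap.
# Tune the sizes to your workload; 8g is only an example.
spark-submit \
  --conf spark.driver.memory=8g \
  --conf spark.executor.memory=8g \
  your-streaming-job.jar
```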
please help.
Thanks
Amit
On Mon, Nov 9, 2020 at 4:18 PM Amit Sharma wrote:
> Please find below the exact exception
>
> Exception in thread "streaming-job-executor-3" java.lang.OutOfMemoryError:
> Java heap space
> at java.util.Arrays.copyOf(Arrays.java:3332)
> at
>
Please help.
Thanks
Amit
On Wed, Nov 18, 2020 at 12:05 PM Amit Sharma wrote:
> Hi, we are running a Spark streaming job and sometimes it throws the two
> exceptions below. I don't understand the difference between these two
> exceptions: one timeout is 120 seconds and the other is