Sorry, I realized that I left a bit out of the last email.
This is the only BLOCKED thread in the dump. The Reference Handler is blocked,
most likely because the GC was running at the moment of the dump.
"Reference Handler" daemon prio=10 tid=2 BLOCKED
at java.lang.Object.wait(Native Method)
at
It does not look like it. Here is the output of "grep -A2 -i waiting
spark_tdump.log":
"RMI TCP Connection(idle)" daemon prio=5 tid=156 TIMED_WAITING
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
--
"task-result-getter-1" daemon
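As a side note, thread names and states like the ones above can also be inspected in-process via `Thread.getAllStackTraces()`, without grepping a dump file. A minimal sketch (the class name is illustrative, not from the original thread):

```java
import java.util.Map;

public class ThreadStates {
    public static void main(String[] args) {
        // Snapshot every live thread with its current state, similar to the
        // per-thread header lines that jstack prints in a thread dump.
        for (Map.Entry<Thread, StackTraceElement[]> e
                : Thread.getAllStackTraces().entrySet()) {
            Thread t = e.getKey();
            System.out.println(t.getName() + " " + t.getState()
                    + (t.isDaemon() ? " daemon" : ""));
        }
    }
}
```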
Thank you for your response.
Unfortunately I cannot share a thread dump. What are you looking for
exactly?
Here is the list of the 50 biggest objects (ordered by retained size,
descending):
java.util.concurrent.ArrayBlockingQueue#
java.lang.Object[]#
I guess it may be a deadlock in BlockGenerator. Could you check it
yourself?
On Thu, Feb 4, 2016 at 4:14 PM, Udo Fholl wrote:
> Thank you for your response
>
> Unfortunately I cannot share a thread dump. What are you looking for
> exactly?
>
> Here is the list of
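The deadlock suspicion above can also be checked programmatically with `ThreadMXBean`, which reports threads stuck waiting on each other's locks. A minimal, Spark-free sketch (class name is illustrative):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class DeadlockCheck {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        // Returns null when no threads are deadlocked on monitors
        // or ownable synchronizers.
        long[] ids = mx.findDeadlockedThreads();
        if (ids == null) {
            System.out.println("no deadlocked threads");
        } else {
            for (ThreadInfo info : mx.getThreadInfo(ids)) {
                System.out.println(info);
            }
        }
    }
}
```

Running this inside the driver (or via JMX) avoids having to share a full thread dump.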
Hey Udo,
mapWithState usually uses much more memory than updateStateByKey since it
caches the states in memory.
However, from your description, it looks like BlockGenerator cannot push data
into BlockManager, so there may be something wrong in BlockGenerator. Could you
share the top 50 objects in the heap?
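For context on why a stuck BlockGenerator would show up as a huge ArrayBlockingQueue in the heap: the generator hands blocks to a consumer through a bounded queue, and if the consumer stalls, the queue fills up and strongly retains every queued buffer. A minimal, Spark-free sketch of that behavior (class name and sizes are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;

public class QueueBackup {
    public static void main(String[] args) throws InterruptedException {
        // Bounded queue standing in for the one a producer pushes blocks into.
        ArrayBlockingQueue<byte[]> queue = new ArrayBlockingQueue<>(10);

        // Producer keeps adding blocks while the consumer never drains.
        for (int i = 0; i < 10; i++) {
            queue.put(new byte[1024]); // each queued buffer stays reachable
        }

        // Once full, offer() fails immediately and put() would block forever;
        // meanwhile the queue's internal Object[] pins all the buffers in the heap.
        System.out.println(queue.offer(new byte[1024])); // prints false
        System.out.println(queue.size());                // prints 10
    }
}
```

That matches the dominator chain reported earlier: BlockGenerator → ArrayBlockingQueue → Object[].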
Hi all,
I recently migrated from 'updateStateByKey' to 'mapWithState' and now I see
a huge increase in memory usage. Most of it is a massive "BlockGenerator"
(which points to a massive "ArrayBlockingQueue" that in turn points to a huge
"Object[]").
I'm pretty sure it has to do with my code, but I