I believe jmap is only showing you the Java heap used, but the program is
running out of direct memory space. They are two different pools of memory.
I haven't had to diagnose a direct memory problem before, but this blog
post has some suggestions on how to do it:
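In the meantime, one quick way to confirm it really is direct memory is to
look at the JVM's buffer pool MXBeans, which track NIO direct (and mapped)
buffer usage separately from the heap. Rough, untested sketch in Scala:

    import java.lang.management.{BufferPoolMXBean, ManagementFactory}
    import scala.collection.JavaConverters._

    // The "direct" pool is the one capped by -XX:MaxDirectMemorySize; it is
    // not part of the heap that jmap reports.
    val pools = ManagementFactory.getPlatformMXBeans(classOf[BufferPoolMXBean]).asScala
    pools.foreach { p =>
      println(s"${p.getName}: used=${p.getMemoryUsed} bytes, " +
        s"capacity=${p.getTotalCapacity} bytes, buffers=${p.getCount}")
    }

Note that Netty can also allocate direct memory through its own allocator, so
the JDK counters may not tell the whole story, but they are a cheap first check.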
Hi
Anybody got any pointers on this one?
Regards
Sumit Chawla
On Tue, Mar 6, 2018 at 8:58 AM, Chawla,Sumit wrote:
No, this is the only stack trace I get. I have tried DEBUG but didn't
notice much of a change in the logs.
Yes, I have tried bumping MaxDirectMemorySize to get rid of this error.
It does work if I throw 4G+ memory at it. However, I am trying to
understand this behavior so that I can set up this
Do you have a trace? I.e., what's the source of the `io.netty.*` calls?
And have you tried bumping `-XX:MaxDirectMemorySize`?
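If you haven't, it usually gets passed through the extraJavaOptions settings;
something along these lines (the value is just an example):

    import org.apache.spark.SparkConf

    // Example only: raise the direct memory ceiling on the executors.
    // The driver-side option normally has to go into spark-defaults.conf or
    // onto the spark-submit command line, since the driver JVM is already
    // running by the time SparkConf is built.
    val conf = new SparkConf()
      .set("spark.executor.extraJavaOptions", "-XX:MaxDirectMemorySize=2g")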
On Tue, Mar 6, 2018 at 12:45 AM, Chawla,Sumit wrote:
Hi All
I have a job which processes a large dataset. All items in the dataset are
unrelated. To save on cluster resources, I process these items in
chunks. Since the chunks are independent of each other, I start and shut down
the Spark context for each chunk. This allows me to keep the DAG smaller
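The shape of it is roughly this (placeholder names, not the real code):

    import org.apache.spark.{SparkConf, SparkContext}

    // Placeholder chunk list; the real inputs and per-chunk logic differ.
    val chunks: Seq[String] = Seq("chunk-0", "chunk-1", "chunk-2")

    chunks.foreach { chunk =>
      val sc = new SparkContext(new SparkConf().setAppName(s"job-$chunk"))
      try {
        val items = sc.textFile(chunk)   // stand-in for loading one chunk
        // ... process items; chunks don't depend on each other ...
      } finally {
        sc.stop()                        // fresh context (and DAG) per chunk
      }
    }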