Hi Sebastian,

Our product runs within the JVM, inside a (Hadoop) YARN container. Similar 
to your situation, YARN will kill the container if it goes over the amount 
of memory reserved for it. Java heap sizes (-Xmx) for the apps we run 
within containers vary from about 6GB to about 31GB, so this may be 
completely inappropriate if you use much smaller heaps, but here is the 
heuristic we use on Java 8. 'jvmMemory' is the -Xmx setting given to the 
JVM, in MB, and adjustJvmMemoryForYarn() gives the size (also in MB) of 
the container we request from YARN.

private static int getReservedCodeCacheSize(int jvmMemory)
{
    // Fixed allowance (MB) for the JIT code cache, independent of heap size.
    return 100;
}

private static int getMaxMetaspaceSize(int jvmMemory)
{
    // Fixed allowance (MB) for Metaspace.
    return 256;
}

private static int getCompressedClassSpaceSize(int jvmMemory)
{
    // Fixed allowance (MB) for the compressed class space.
    return 256;
}

private static int getExtraJvmOverhead(int jvmMemory)
{
    // Additional headroom (MB) on top of the named regions, scaled with heap size.
    if (jvmMemory <= 2048)
    {
        return 1024;
    }
    else if (jvmMemory <= (1024 * 16))
    {
        return 2048;
    }
    else if (jvmMemory <= (1024 * 31))
    {
        return 5120;
    }
    else
    {
        return 8192;
    }
}

public static int adjustJvmMemoryForYarn(int jvmMemory)
{
    // jvmMemory is the -Xmx value in MB; the result is the container size
    // (MB) to request from YARN. An unset heap size (0) is passed through.
    if (jvmMemory == 0)
    {
        return 0;
    }

    return jvmMemory +
           getReservedCodeCacheSize(jvmMemory) +
           getMaxMetaspaceSize(jvmMemory) +
           getCompressedClassSpaceSize(jvmMemory) +
           getExtraJvmOverhead(jvmMemory);
}
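
To make the mapping concrete: the numbers above correspond to JVM flags 
roughly like this. This is an illustrative sketch only (the 6GB heap is just 
an example, and it assumes the helpers above are in scope), but the flag 
names match the helpers:

    int xmx = 6 * 1024;                            // -Xmx6g, expressed in MB
    int containerMb = adjustJvmMemoryForYarn(xmx); // 6144 + 100 + 256 + 256 + 2048 = 8804 MB
    String jvmArgs = "-Xmx" + xmx + "m"
        + " -XX:ReservedCodeCacheSize=" + getReservedCodeCacheSize(xmx) + "m"
        + " -XX:MaxMetaspaceSize=" + getMaxMetaspaceSize(xmx) + "m"
        + " -XX:CompressedClassSpaceSize=" + getCompressedClassSpaceSize(xmx) + "m";
    // containerMb is then what gets asked of YARN, e.g. via
    // Resource.newInstance(containerMb, vCores) when building the container request.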



If the app uses any significant off-heap memory, we just add this to the 
container size.
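
If it helps, that "just add it" step could be expressed as a trivial overload 
(hypothetical; not literally what our code looks like, and 'offHeapMemory' is 
a made-up parameter name):

    // Hypothetical overload: off-heap usage (e.g. direct buffers), in MB,
    // is simply added on top of the adjusted heap-based size.
    public static int adjustJvmMemoryForYarn(int jvmMemory, int offHeapMemory)
    {
        return adjustJvmMemoryForYarn(jvmMemory) + offHeapMemory;
    }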

Obviously, this isn't optimal, but it does prevent the "OOM killer" from 
kicking in. I'm interested to see if anyone has a better solution!

-Meg



On Thursday, August 3, 2017 at 5:17:11 AM UTC-4, Sebastian Łaskawiec wrote:
>
> Hey,
>
> Before digging into the problem, let me say that I'm very happy to meet 
> you! My name is Sebastian Łaskawiec and I've been working for Red Hat, 
> focusing mostly on in-memory store solutions. A while ago I attended a 
> JVM performance and profiling workshop led by Martin, which was an 
> incredible experience for me.
>
> Over the last couple of days I've been working on tuning and sizing our 
> app for Docker containers. I'm especially interested in running the JVM 
> without swap and with constrained memory. Once you hit the memory limit, 
> the OOM Killer kicks in and takes your application down. Rafael wrote a 
> pretty good pragmatic description here [1].
>
> I'm looking for some good practices for measuring and tuning JVM memory 
> size. At the moment I'm using:
>
>    - The JVM native memory tracker [2]
>    - pmap -x, which gives me RSS
>    - jstat -gccause, which gives me an idea of how GC is behaving
>    - dstat, which is not cgroups-aware but gives me an overall idea about 
>    paging, CPU and memory
>
> Here's an example of a log that I'm analyzing [3]. Currently I'm trying 
> to adjust Xmx and Xms correctly so that my application fills the 
> constrained container but doesn't spill out (which would result in an OOM 
> kill by the kernel). The biggest problem I have is how to measure the 
> remaining amount of memory inside the container. Also, I'm not sure why 
> the amount of committed JVM memory is different from the RSS reported by 
> pmap -x. Could you please give me a hand with this?
>
> Thanks,
> Sebastian
>
> [1] https://developers.redhat.com/blog/2017/03/14/java-inside-docker/
> [2] 
> https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/tooldescr007.html
> [3] https://gist.github.com/slaskawi/a6ddb32e1396384d805528884f25ce4b
>
