On Mon, Jan 8, 2018 at 10:16 AM, Christopher Schultz <
ch...@christopherschultz.net> wrote:

> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA256
>
> Suvendu,
>
> On 1/5/18 6:46 AM, Suvendu Sekhar Mondal wrote:
> > I really never found any explanation behind this "initial=max" heap
> > size theory until I saw your mail; although I see this type of
> > configuration in most of the places. It will be awesome if you can
> > tell more about benefits of this configuration.
>
> It's really just about saving the time it takes to resize the heap.
> Because the JVM will never shrink the heap (at least not in any JVMs
> I'm familiar with), a long-running server-side process will (likely)
> eventually use all of the heap you allow it to use. Basically, memory
> exists to be used, so why not use all of it immediately?
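In practice that just means passing the same value for -Xms and -Xmx. A
minimal sketch for a Tomcat instance (the 4g figure and the setenv.sh
path are placeholders, not recommendations; size the heap from your own
measurements):

```shell
# $CATALINA_BASE/bin/setenv.sh (assumed location; adjust for your install)
# Pin initial and max heap to the same value so the JVM never resizes it.
CATALINA_OPTS="$CATALINA_OPTS -Xms4g -Xmx4g"
```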
>
> > I usually do not set initial and max heap size to same value
> > because garbage collection is delayed until the heap is full.
>
> The heap is divided into sections. The first section to be GC'd after
> JVM launch is likely to be the eden space, which is relatively small,
> and few objects will survive the GC operation (lots of temporary
> String objects, etc. will die without being tenured). The only spaces
> that take a "long" time to clean are the tenured generation and the
> (recently replaced by Metaspace) permanent generation (which was never
> actually permanent). Cleaning those spaces takes a long time, but a GC
> that cleans them should not happen for a long time after the JVM starts.
>
> Also, most collector algorithms/strategies have two separate types of
> operation: a short/minor GC and a long/full GC. As long as short/minor
> GC operations take place regularly, you should not experience long
> application pauses while the heap is reorganized.
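You can watch the minor/full split directly with jstat, which ships with
the JDK (the PID below is a placeholder):

```shell
# Sample GC counters every 1000 ms: the YGC/YGCT columns count minor
# (young-generation) collections, FGC/FGCT count full collections.
# 12345 is a hypothetical PID; find yours with jps.
jstat -gcutil 12345 1000
```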
>
> Finally, application pauses are likely to be long if the entire heap
> must be re-sized because then *everything* must be re-located.
>
> > Therefore, the first time that the GC runs, the process can take
> > longer. Also, the heap is more likely to be fragmented and require
> > a heap compaction. To avoid that, my strategy until now has been:
> > - Start the application with the minimum heap size that the
> >   application requires
> > - When the GC starts up, it runs frequently and efficiently
> >   because the heap is small
>
> I think this is a reasonable expectation for someone who doesn't
> understand the Black Art of Garbage Collection, but I'm not sure it's
> actually true. I'm not claiming that I know any better than you do,
> but I suspect that the collector takes its parameters very seriously,
> and when you introduce artificial constraints (such as a smaller
> minimum heap size), the GC will attempt to respect those constraints.
> The reality is that those constraints are completely unnecessary; you
> have only imposed them because you think you know better than the GC
> algorithm.
>
> > - When the heap is full of live objects, the GC compacts the heap.
> > If sufficient garbage is still not recovered or any of the other
> > conditions for heap expansion are met, the GC expands the heap.
> >
> > Another thing, what if I know the server load varies a lot(from 10s
> > in night time to 10000s during day time) during different time
> > frame, does "initial=max heap" apply for that situation also?
>
> My position is that initial == max is always the right recipe for a
> server-side JVM, regardless of the load profile. Setting initial < max
> may even cause an OOM at the OS level in the future if the memory is
> over-committed (or, rather, WILL BE over-committed if/when the heap
> must expand).
>
>
>
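If you do pin the heap, it can also be worth asking the JVM to touch
every heap page at startup, so that any OS-level over-commit problem
shows up immediately rather than at peak load. A sketch assuming a
HotSpot JVM (sizes and jar name are hypothetical):

```shell
# -XX:+AlwaysPreTouch makes HotSpot touch the whole heap at startup,
# so over-committed memory fails fast instead of during a busy period.
java -Xms4g -Xmx4g -XX:+AlwaysPreTouch -jar myapp.jar
```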
To add my 2 cents to what Christopher said (which was already a very
correct explanation): the only valid exception to the initial == max
rule, in my eyes, is when you are genuinely not sure how much memory
your process will need. And if you have a bunch of microservices on one
machine, you may not want to commit all the memory without need.
So start a little bit lower, but give room for expansion in case the
process needs it.
For example, I have a VM with 13 'small' JVMs on it. The difference
between -Xms and -Xmx across them would be about 5 GB in total. In this
specific case I suppose it is OK to provide different values, at least
for some time, and adjust later.

However, reading GC logs or using tools like jClarity can help you find
the proper pool size for your collector/JVM version/application. The
exception is when you release and change your memory usage pattern every
week or so; in that case, using -Xms != -Xmx seems OK to me as a safety
net.

regards
Leon
