> Usually, fragmentation is dealt with using a mark-compact collector (or
> IBM has used a mark-sweep-compact collector).
> Copying collectors are not only super efficient at collecting young
> spaces, but they are also great for fragmentation - when you copy
> everything to the new space, you can remove any fragmentation. At the
> cost of double the space requirements though.


So if the memory size is tuned right (application-specific!), almost no object
copying will ever happen - although it is also server-load specific
(application-usage-specific; what do users do most frequently?).
It's just statistics: monitor the JVM and make the decision from the numbers.
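One simple way to collect those statistics from inside the JVM (a minimal sketch, not the only option) is to poll the standard GC MXBeans and watch how often each collector runs and how long it takes:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {
    public static void main(String[] args) {
        // One bean per collector (e.g. young-gen copying vs. old-gen mark-sweep-compact);
        // names depend on the JVM and chosen collector.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```

If young-generation collections dominate and objects keep getting promoted, that is the hint to resize the young space for this particular workload.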

A few years ago I had a hard time explaining to a client that a byte array should
be Base64 encoded instead of serialized as one <byte>123</byte> element per
byte... that fixed the problem, not GC tuning...
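For the record, with `java.util.Base64` (Java 8+) the encoding is a one-liner - a minimal sketch of the point I was making:

```java
import java.util.Arrays;
import java.util.Base64;

public class Base64Demo {
    public static void main(String[] args) {
        byte[] data = {1, 2, 3, 123};
        // One compact string instead of one <byte>...</byte> XML element per byte.
        String encoded = Base64.getEncoder().encodeToString(data);
        byte[] decoded = Base64.getDecoder().decode(encoded);
        System.out.println(encoded);                      // AQIDew==
        System.out.println(Arrays.equals(data, decoded)); // true
    }
}
```

The payload stays one string object instead of millions of parsed elements, which is exactly what keeps the heap (and the GC) quiet.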

SOLR uses XML; try to upload a big XML document - each Element instance needs at
least 100 bytes... now try to create an array of 20M Elements (the parser will
do exactly that!)... so any GC tuning is application-usage specific too... RAM
allocation and GC tuning are "usage"-specific, not SOLR-specific...
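Just as a back-of-the-envelope check (the ~100 bytes per Element is the rough figure from above, not a measured number):

```java
public class ElementMemoryEstimate {
    public static void main(String[] args) {
        long elements = 20_000_000L;    // 20M Element instances created by the parser
        long bytesPerElement = 100L;    // rough per-instance cost assumed above
        long totalBytes = elements * bytesPerElement;
        // 2_000_000_000 bytes, i.e. about 2 GB of heap just for the DOM
        System.out.printf("~%.1f GB just for the parsed elements%n", totalBytes / 1e9);
    }
}
```

That is ~2 GB of heap pressure from a single upload, before any of SOLR's own data structures - which is why no generic GC recipe can cover it.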

