Xiao-Feng,
 I will read the reference to understand the advantages of Compressor and
how the algorithm is implemented, thanks.

Even when you have 1GB of physical memory, isn't there still an overhead
from page faults?
Would it be an option to compact the heap in parts, and/or to increase the
number of passes, to reduce the space overhead? (A rough sketch of the
arithmetic follows below.)
Is this significantly better than doing semi-space copying at each GC cycle,
given that one major advantage of compaction over copying (other than
preserving allocation order) was presumably its lower space overhead?
Are we looking for a single parallel compaction algorithm for all
situations, or could we choose one at JVM startup based on user input
(client/server) or OS feedback about the execution environment?

Sorry for all these questions before reading the book :-)

Rana

On 10/27/06, Xiao-Feng Li <[EMAIL PROTECTED]> wrote:
>
> Hi, all, the plan for GCv5 parallel compaction is to apply the idea of
> Compressor [1]. But it has an issue I want to discuss with you.
> Compressor needs to reserve an unmapped virtual space for compaction.
> The size of the reserved part is the same as that of the copy reserve
> space in a semi-space collector, which means that part of the virtual
> space is unusable for the JVM. In a typical setting, the wasted part is
> half the size of the total compaction space. If we have 1GB of physical
> memory, the JVM is fine with Compressor because the virtual space is
> large enough to waste half of it; but if the physical memory is >2GB,
> Compressor may have a problem on a 32-bit machine: some of the
> physically mapped space might be wasted.
>
> Any opinion on this?
>
> Thanks,
> xiaofeng
>
> [1] http://www.cs.technion.ac.il/~erez/Papers/compressor-pldi.pdf
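For reference, a minimal sketch of the kind of reservation being described
(plain Linux C, not Harmony/GCv5 code): the extra region is reserved as
unmapped virtual addresses only, so it commits no physical pages up front,
but on a 32-bit process with roughly 2-3GB of usable user address space the
heap plus the reserve must both fit, which matches the concern above about
>2GB of physical memory. The 1GB size is an illustrative assumption.

    /* Minimal sketch: reserve unmapped virtual space with PROT_NONE;
     * no physical pages are committed until the collector maps them
     * during compaction. The 1GB size is an assumption for illustration. */
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void) {
        const size_t MB = 1024 * 1024;
        const size_t heap_size = 1024 * MB;   /* assumed 1GB compaction space */

        /* Reserve address space only; no physical memory is committed yet. */
        void *reserve = mmap(NULL, heap_size, PROT_NONE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (reserve == MAP_FAILED) {
            perror("mmap reserve");
            return 1;
        }

        printf("heap: %zuMB, reserved (unmapped) space: %zuMB\n",
               heap_size / MB, heap_size / MB);
        printf("On 32-bit, heap + reserve must fit in ~2-3GB of user space,\n"
               "so the heap is capped near half of that regardless of RAM.\n");

        munmap(reserve, heap_size);
        return 0;
    }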

