Hi Ioi,

Just a few minor comments. No need to see another webrev.
compactHashtable.cpp:

71   _num_entries ++;

Extra space before '++'; the same applies to lines 80, 112, 123, and 125.

118         Entry tent = bucket->at(i);

It would be clearer if 'tent' were just 'ent', since the code in this block is not dealing with a tiny entry.

249       while (entry <entry_max) {

Needs a space before entry_max.


SASymbolTableTest.java:

The result of p.getPid() could be saved into a local variable so that the call doesn't need to be made twice, in lines 86 and 93.

thanks,
Calvin

On 4/12/16, 12:40 PM, Jiangli Zhou wrote:
Ioi,

On Apr 11, 2016, at 7:11 PM, Ioi Lam <ioi....@oracle.com> wrote:

Hi Jiangli,


Thanks for the review:

On 4/11/16 6:44 PM, Jiangli Zhou wrote:
Hi Ioi,

I like the more structural way of reading/writing the compact table with 
SimpleCompactHashtable. It looks quite clean overall.

- How about using VALUE_ONLY_BUCKET_TYPE, which is more descriptive than 
TINY_BUCKET_TYPE?

OK, I will make the change.

- The following assert in CompactSymbolTableWriter::add() unnecessarily limits
the total shared space size to less than MAX_SHARED_DELTA. Symbols are
allocated from the RO space at dump time, so we only need to make sure the max
delta between the ‘base_address’ and the end of the RO space is less than
MAX_SHARED_DELTA. This is not a new issue introduced by your change, so you
don't have to address it as part of this change if you prefer. I'll file a
separate RFE.

I think it's better to do this in a separate RFE since MAX_SHARED_DELTA is used 
elsewhere as well.
I filed JDK-8154108: https://bugs.openjdk.java.net/browse/JDK-8154108

  171 void CompactSymbolTableWriter::add(unsigned int hash, Symbol *symbol) {
  172   address base_address = address(MetaspaceShared::shared_rs()->base());
  173   uintx max_delta = uintx(MetaspaceShared::shared_rs()->size());
  174   assert(max_delta <= MAX_SHARED_DELTA, "range check");
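
For illustration, the relaxed check could look something like the sketch below,
where ro_space_end() is a hypothetical stand-in for whatever accessor returns
the end of the RO region; this is only a sketch of the idea, not the actual fix:

    address base_address = address(MetaspaceShared::shared_rs()->base());
    // Symbols are allocated from the RO space at dump time, so only the
    // distance from the shared base to the end of the RO region has to be
    // encodable as a delta.
    uintx max_delta = uintx(ro_space_end() - base_address);  // ro_space_end() is hypothetical
    assert(max_delta <= MAX_SHARED_DELTA, "range check");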

- Why is the default RO space size increased?

That's because the symbol table is moved from RW to RO, so I had to change the 
RO minimum size, or else test/runtime/SharedArchiveFile/LimitSharedSizes.java 
would fail.
Ok.

Thanks,
Jiangli

- src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/memory/SymbolTable.java
- src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/utilities/CompactHashTable.java
   Copyright year.
Will fix.

Thanks
- Ioi
Thanks,
Jiangli

On Mar 31, 2016, at 10:43 PM, Ioi Lam <ioi....@oracle.com> wrote:

Please review

http://cr.openjdk.java.net/~iklam/jdk9/8150607_cleanup_compact_hashtable.v01/

Bug: Clean up CompactHashtable

    
    https://bugs.openjdk.java.net/browse/JDK-8150607

Summary of fix:

[1] Instead of reading/writing the table bit-by-bit, which is tedious and
    error prone, use SimpleCompactHashtable::serialize(), which is more
    structural.

[2] Split up the _buckets and _entries into two separate arrays, so the
    dumping and walking code is easier to understand

    (see comments above SimpleCompactHashtable declaration)
    These two arrays are now allocated from the RO region (they used to be in RW).

[3] Renamed a few things

    COMPACT_BUCKET_TYPE ->  TINY_BUCKET_TYPE
        (having something called "compact" in CompactHashtable is confusing)

    The old names "dump_table" (actually dumping the buckets) and
    "dump_buckets" (actually dumping the entries) were conflicting with
    terminology used elsewhere. Now the terminology is unified:
        "buckets" = the main index, "entries" = the second level.

    lookup_entry ->  decode_entry (it wasn't doing any lookup)

[4] Changed all "*p++" to "array->at_put", so out-of-bounds writes can be
    caught with an assert (see the sketch after this list).

[5] Replaced all juint with u4 (suggested by Coleen)

[6] Templatized the iterator (see CompactHashtable::symbols_do ->
    SimpleCompactHashtable::iterate)

[7] I also added a test case using the Serviceability Agent -- due to the lack
    of a regression test, the walking of the compact hashtable in SA had been
    broken for a while before it was fixed in JDK-8151368, so having a test
    case would make the code more maintainable.
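
As a rough illustration of the pattern change in [4] (a sketch only, using a
stand-in array type rather than the actual hotspot Array<T> class):

    #include <cassert>

    // Stand-in for a bounds-checked array; illustrative only, not hotspot code.
    template <typename T> struct CheckedArray {
      T*  _data;
      int _length;
      void at_put(int i, const T& v) {
        // An out-of-bounds write trips the assert at dump time instead of
        // silently scribbling past the end of the array.
        assert(i >= 0 && i < _length);
        _data[i] = v;
      }
    };

    // Old style: bump a raw pointer, overruns go unnoticed
    //     *p++ = value;
    // New style: write through an index, overruns hit the assert
    //     array->at_put(index++, value);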

Tests:

    Hotspot JTREG tests

Thanks
- Ioi
