Re: RFR: 8276927: [PPC64] Port shenandoahgc to linux on ppc64le

2021-11-11 Thread Roman Kennke
On Thu, 11 Nov 2021 14:30:05 GMT, Martin Doerr wrote:

>> src/hotspot/cpu/ppc/gc/shenandoah/c1/shenandoahBarrierSetC1_ppc.cpp line 83:
>> 
>>> 81: LIRGenerator* gen = access.gen();
>>> 82: 
>>> 83: if (ShenandoahCASBarrier) {
>> 
>> I am not sure, but I almost think we should not even end up in the method 
>> with -ShenandoahCASBarrier. If anything, -ShenandoahCASBarrier should result 
>> in only calling super to emit regular CAS without any barriers.
>
> We hit this case when running `jdk/bin/java -XX:+UseShenandoahGC 
> -XX:ShenandoahGCMode=passive -version`. x86 and aarch64 check for 
> ShenandoahCASBarrier, too. So, looks like these checks are needed and correct.

Ok then.

-

PR: https://git.openjdk.java.net/jdk/pull/6325


Re: RFR: 8276927: [PPC64] Port shenandoahgc to linux on ppc64le

2021-11-11 Thread Roman Kennke
On Wed, 10 Nov 2021 09:00:04 GMT, Niklas Radomski wrote:

> Port the Shenandoah garbage collector 
> [JDK-8241457](https://bugs.openjdk.java.net/browse/JDK-8241457) to linux on 
> ppc64le.

Hi Niklas,
thanks for this awesome work!
I can't really comment on the actual PPC code, so this needs to be reviewed by 
somebody else. Structurally the change looks correct. I have one comment about 
the C1 CAS barrier code, but it's minor.

Thanks & cheers,
Roman

src/hotspot/cpu/ppc/gc/shenandoah/c1/shenandoahBarrierSetC1_ppc.cpp line 83:

> 81: LIRGenerator* gen = access.gen();
> 82: 
> 83: if (ShenandoahCASBarrier) {

I am not sure, but I almost think we should not even end up in the method with 
-ShenandoahCASBarrier. If anything, -ShenandoahCASBarrier should result in only 
calling super to emit regular CAS without any barriers.
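The delegation pattern suggested here can be sketched in a few lines of plain Java. This is purely illustrative: the class and method names are invented stand-ins, not the actual HotSpot C1 `ShenandoahBarrierSetC1` code, and the flag field stands in for `-XX:+/-ShenandoahCASBarrier`.

```java
// Illustrative sketch only: invented names, not the actual HotSpot C1 classes.
// The idea: with the CAS barrier disabled, the Shenandoah generator would
// simply delegate to the superclass and emit a plain CAS without barriers.
class BarrierSetCasGen {
    String emitCas() {
        return "plain-cas"; // regular CAS emission, no GC barriers
    }
}

class ShenandoahCasGen extends BarrierSetCasGen {
    final boolean casBarrierEnabled; // stands in for ShenandoahCASBarrier

    ShenandoahCasGen(boolean casBarrierEnabled) {
        this.casBarrierEnabled = casBarrierEnabled;
    }

    @Override
    String emitCas() {
        if (!casBarrierEnabled) {
            return super.emitCas(); // -ShenandoahCASBarrier: plain CAS
        }
        return "cas-with-barrier";  // +ShenandoahCASBarrier: barrier-augmented CAS
    }
}
```

As Martin notes below, passive mode does reach this code with the flag off, so the in-method flag check (rather than never entering the method) matches what x86 and aarch64 do.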

-

Marked as reviewed by rkennke (Reviewer).

PR: https://git.openjdk.java.net/jdk/pull/6325


Re: RFR: 8260408: Shenandoah: adjust inline hints after JDK-8255019

2021-01-26 Thread Roman Kennke
On Tue, 26 Jan 2021 11:41:52 GMT, Aleksey Shipilev wrote:

> I was following up on performance testing results for some ongoing work, and 
> realized that there are `ShenandoahMarkContext::mark_strong` calls from 
> `mark_work_loop`. Those callees were supposed to be inlined. I believe it is 
> a simple omission after JDK-8255019 moved `mark_work_loop` code to 
> `shenandoahMark.cpp`.
> 
> On a typical workload:
> 
> # Before
> [44.500s][info][gc,stats] Concurrent Marking  =0.395 s (a =11278 us) 
>(n =35) (lvls, us = 6426, 9473,11133,12891,16476)
> 
> # After
> [44.405s][info][gc,stats] Concurrent Marking  =0.337 s (a = 9636 us) 
>(n =35) (lvls, us = 3770, 7383, 9785,11328,16776)
> 
> Additional testing:
>  - [x] Eyeballing GC hot paths
>  - [x] Ad-hoc performance tests

Looks good! Thanks!

-

Marked as reviewed by rkennke (Reviewer).

PR: https://git.openjdk.java.net/jdk/pull/2235


Re: RFR: 8256497: Zero: enable G1 and Shenandoah GCs

2020-11-18 Thread Roman Kennke
On Tue, 17 Nov 2020 19:02:03 GMT, Aleksey Shipilev wrote:

> Following the [JDK-8255796](https://bugs.openjdk.java.net/browse/JDK-8255796) 
> improvement that ditched the inline contiguous alloc use from Zero, we can 
> now rely on GC interface to hook the GCs properly. G1 and Shenandoah are a 
> bit special here, because they require special `Reference.get` handling.
> 
> Note that it does not change the default GC for Zero, because Zero is 
> implicitly `NeverActAsServerMachine`, which still selects Serial GC by 
> default. After this change, Zero users can opt-in to G1 or Shenandoah.
> 
> Additional testing:
>  - [x] Linux x86_64 Zero fastdebug `hotspot_gc_shenandoah` (some lingering 
> failures about non-enabled compressed oops)
>  - [ ] Linux x86_64 Zero fastdebug `tier1` with `-XX:+UseG1GC`

Looks good to me! Thanks!

-

Marked as reviewed by rkennke (Reviewer).

PR: https://git.openjdk.java.net/jdk/pull/1268


Re: RFR: 8256499: Zero: enable Epsilon GC

2020-11-18 Thread Roman Kennke
On Tue, 17 Nov 2020 19:11:45 GMT, Aleksey Shipilev wrote:

> Following the [JDK-8255796](https://bugs.openjdk.java.net/browse/JDK-8255796) 
> improvement that ditched the inline contiguous alloc use from Zero, we can 
> now rely on GC interface to hook the GCs properly. Epsilon GC does not 
> require anything else. Other GCs require a bit of fiddling and more testing 
> (see e.g. [JDK-8256497](https://bugs.openjdk.java.net/browse/JDK-8256497)).
> 
> Additional testing:
>  - [x] Linux x86_64 Zero {fastdebug,release}, `gc/epsilon` tests

Great! Looks good!

-

Marked as reviewed by rkennke (Reviewer).

PR: https://git.openjdk.java.net/jdk/pull/1269


Re: RFR: 8252114: Windows-AArch64: Enable and test ZGC and ShenandoahGC [v2]

2020-09-15 Thread Roman Kennke
On Thu, 10 Sep 2020 15:30:04 GMT, Monica Beckwith wrote:

>> ZGC and ShenandoahGC are two low-latency GCs in OpenJDK HotSpot. We will 
>> enable and run microbenchmarks and scaling
>> tests for both. After enabling, I ran jtreg tests, a few micros, and SPECjbb2015 
>> (heap and multi-JVM) scaling tests for both
>> the GCs.
>
> Monica Beckwith has updated the pull request incrementally with one 
> additional commit since the last revision:
> 
>   x$OPENJDK_TARGET_CPU" = "xaarch64"
>   
>   Incorporating Aleksey's comments to incorporate 'x'

Looks good to me. Thanks!

-

Marked as reviewed by rkennke (Reviewer).

PR: https://git.openjdk.java.net/jdk/pull/97


Re: RFR (16): JFR: Separate metadata per JVM-feature

2020-07-24 Thread Roman Kennke
Hi Markus,

Thank you for your insights. This is fine by me. As I already said in
reply to Erik Gahlin, I thought I'd offer this, but have no strong
opinion either way. I'll withdraw the RFR and JIRA issue then.

Cheers,
Roman


> Hi Roman,
> 
> Thanks for suggesting this, I understand what you are trying to
> prove, but I think the cost associated might not be warranted. The
> reason is that when we made JFR open source (JDK 11), we put a lot of
> engineering effort into consolidation work for the JFR metadata to
> have it be described in a single place, metadata.xml. In the old
> system(s), there used to be many .xml (and .xslt) files scattered
> around, and it was quite painful to understand, manage and process
> them uniformly.
> 
> Unfortunately, this suggestion is a reminder of the old world, which
> has unfavorable connotations, so I would prefer that we avoid
> doing this.
> 
> Thanks
> Markus
> 
> -Original Message-
> From: Roman Kennke  
> Sent: 23 July 2020 22:29
> To: Erik Gahlin 
> Cc: hotspot-jfr-...@openjdk.java.net; build-dev <
> build-dev@openjdk.java.net>
> Subject: Re: RFR (16): JFR: Separate metadata per JVM-feature
> 
> Hi Erik,
> 
> 
> > Hi Roman,
> > 
> > What is the problem you are trying to solve? 
> > 
> 
> It is mostly an exercise in cleanliness. In the effort to upstream
> Shenandoah GC to jdk11u (which is still ongoing), it has been asked
> to make sure that a build with Shenandoah excluded (--with-jvm-
> features=-
> shenandoahgc) is identical to the current build without the Shenandoah patch.
> 
> The symbols and empty method(s) compiled in by JFR have been one of a
> few places that needed some work. With this patch and a few more, I
> was able to *prove mechanically* that the objects compiled by
> unpatched JDK11u are byte-identical to patched JDK11u with Shenandoah
> disabled.
> 
> I thought I'd offer this here, maybe it's equally interesting going
> forward. If it's agreed that it is not very interesting, then so be it - 
> I don't have a strong opinion about it.
> 
> Thanks & cheers,
> Roman
> 
> > Having metadata per JVM-features adds complexity with little added 
> > benefit from what I can see.
> > Thanks
> > Erik
> > 
> > > On 23 Jul 2020, at 19:48, Roman Kennke 
> > > wrote:
> > > 
> > > This is a fall-out from my Shenandoah upstreaming work in JDK11, 
> > > where I made a similar change in order to separate out
> > > Shenandoah 
> > > parts from the JFR build when Shenandoah is disabled.
> > > 
> > > Currently, all JFR metadata is collected in a single
> > > metadata.xml 
> > > file.
> > > From those, the build machinery generates headers and some other 
> > > things from it. For JVM-feature-specific metadata, this leads to 
> > > stuff generated that doesn't exist when the feature is excluded
> > > from 
> > > the build. For example, when disabling Shenandoah, we still need
> > > to 
> > > compile empty methods in jfrPeriodic.cpp to satisfy the
> > > generated 
> > > code.
> > > 
> > > The fix is to extend the code-generator to accept multiple
> > > metadata-
> > > *.xml files and concatenate the parsing. The version in JDK16 
> > > accepts a --xml $FILENAME argument, and I'm extending this to --
> > > xml 
> > > $FILENAME1:$FILENAME2:.. etc, i.e. multiple filenames separated
> > > by 
> > > colon. Empty entries are allowed, e.g. metadata.xml::: is parsed 
> > > as the single file metadata.xml. That keeps the build 
> > > machinery much simpler.
> > > 
> > > I also did the relevant metadata-separation for Shenandoah
> > > because 
> > > that is what I care about myself. If you'd like other parts
> > > treated 
> > > the same, just let me know.
> > > 
> > > Bug:
> > > https://bugs.openjdk.java.net/browse/JDK-8250218
> > > Webrev:
> > > http://cr.openjdk.java.net/~rkennke/JDK-8250218/webrev.00/
> > > 
> > > Testing: build with and without Shenandoah, run some tests, I'd 
> > > await submit-repo results before pushing.
> > > 
> > > Can I please get a review?
> > > 
> > > Thanks!
> > > Roman
> > > 



Re: RFR (16): JFR: Separate metadata per JVM-feature

2020-07-24 Thread Roman Kennke
Thanks, Erik. I can do those changes. But can we first agree with Erik
Gahlin on whether we want this change at all?

Thanks,
Roman

On Thu, 2020-07-23 at 17:23 -0700, Erik Joelsson wrote:
> Build changes look ok to me. I would prefer if the long "COMMAND"
> line 
> was broken up though as the side by side diff is already forcing a
> side 
> scroll on my smaller monitor.
> 
> /Erik
> 
> On 2020-07-23 10:48, Roman Kennke wrote:
> > This is a fall-out from my Shenandoah upstreaming work in JDK11,
> > where
> > I made a similar change in order to separate out Shenandoah parts
> > from
> > the JFR build when Shenandoah is disabled.
> > 
> > Currently, all JFR metadata is collected in a single metadata.xml
> > file.
> >  From those, the build machinery generates headers and some other
> > things
> > from it. For JVM-feature-specific metadata, this leads to stuff
> > generated that doesn't exist when the feature is excluded from the
> > build. For example, when disabling Shenandoah, we still need to
> > compile
> > empty methods in jfrPeriodic.cpp to satisfy the generated code.
> > 
> > The fix is to extend the code-generator to accept multiple
> > metadata-
> > *.xml files and concatenate the parsing. The version in JDK16
> > accepts a
> > --xml $FILENAME argument, and I'm extending this to --xml
> > $FILENAME1:$FILENAME2:.. etc, i.e. multiple filenames separated by
> > colon. Empty entries are allowed, e.g. metadata.xml::: is
> > parsed
> > as the single file metadata.xml. That keeps the build machinery
> > much simpler.
> > 
> > I also did the relevant metadata-separation for Shenandoah because
> > that
> > is what I care about myself. If you'd like other parts treated the
> > same, just let me know.
> > 
> > Bug:
> > https://bugs.openjdk.java.net/browse/JDK-8250218
> > Webrev:
> > http://cr.openjdk.java.net/~rkennke/JDK-8250218/webrev.00/
> > 
> > Testing: build with and without Shenandoah, run some tests, I'd
> > await
> > submit-repo results before pushing.
> > 
> > Can I please get a review?
> > 
> > Thanks!
> > Roman
> > 



Re: RFR (16): JFR: Separate metadata per JVM-feature

2020-07-23 Thread Roman Kennke
Hi Erik,


> Hi Roman,
> 
> What is the problem you are trying to solve? 
> 

It is mostly an exercise in cleanliness. In the effort to upstream
Shenandoah GC to jdk11u (which is still ongoing), it has been asked to
make sure that a build with Shenandoah excluded (--with-jvm-features=-
shenandoahgc) is identical to the current build without the Shenandoah patch.

The symbols and empty method(s) compiled in by JFR have been one of a
few places that needed some work. With this patch and a few more, I was
able to *prove mechanically* that the objects compiled by unpatched
JDK11u are byte-identical to patched JDK11u with Shenandoah disabled.

I thought I'd offer this here, maybe it's equally interesting going
forward. If it's agreed that it is not very interesting, then so be it - I
don't have a strong opinion about it.

Thanks & cheers,
Roman

> Having metadata per JVM-features adds complexity with little added
> benefit from what I can see. 


> Thanks
> Erik
> 
> > On 23 Jul 2020, at 19:48, Roman Kennke wrote:
> > 
> > This is a fall-out from my Shenandoah upstreaming work in JDK11,
> > where
> > I made a similar change in order to separate out Shenandoah parts
> > from
> > the JFR build when Shenandoah is disabled.
> > 
> > Currently, all JFR metadata is collected in a single metadata.xml
> > file.
> > From those, the build machinery generates headers and some other
> > things
> > from it. For JVM-feature-specific metadata, this leads to stuff
> > generated that doesn't exist when the feature is excluded from the
> > build. For example, when disabling Shenandoah, we still need to
> > compile
> > empty methods in jfrPeriodic.cpp to satisfy the generated code. 
> > 
> > The fix is to extend the code-generator to accept multiple
> > metadata-
> > *.xml files and concatenate the parsing. The version in JDK16
> > accepts a
> > --xml $FILENAME argument, and I'm extending this to --xml
> > $FILENAME1:$FILENAME2:.. etc, i.e. multiple filenames separated by
> > colon. Empty entries are allowed, e.g. metadata.xml::: is
> > parsed
> > as the single file metadata.xml. That keeps the build machinery
> > much simpler.
> > 
> > I also did the relevant metadata-separation for Shenandoah because
> > that
> > is what I care about myself. If you'd like other parts treated the
> > same, just let me know.
> > 
> > Bug:
> > https://bugs.openjdk.java.net/browse/JDK-8250218
> > Webrev:
> > http://cr.openjdk.java.net/~rkennke/JDK-8250218/webrev.00/
> > 
> > Testing: build with and without Shenandoah, run some tests, I'd
> > await
> > submit-repo results before pushing.
> > 
> > Can I please get a review?
> > 
> > Thanks!
> > Roman
> > 



RFR (16): JFR: Separate metadata per JVM-feature

2020-07-23 Thread Roman Kennke
This is a fall-out from my Shenandoah upstreaming work in JDK11, where
I made a similar change in order to separate out Shenandoah parts from
the JFR build when Shenandoah is disabled.

Currently, all JFR metadata is collected in a single metadata.xml file.
From those, the build machinery generates headers and some other things
from it. For JVM-feature-specific metadata, this leads to stuff
generated that doesn't exist when the feature is excluded from the
build. For example, when disabling Shenandoah, we still need to compile
empty methods in jfrPeriodic.cpp to satisfy the generated code. 

The fix is to extend the code-generator to accept multiple metadata-
*.xml files and concatenate the parsing. The version in JDK16 accepts a
--xml $FILENAME argument, and I'm extending this to --xml
$FILENAME1:$FILENAME2:.. etc, i.e. multiple filenames separated by
colon. Empty entries are allowed, e.g. metadata.xml::: is parsed
as the single file metadata.xml. That keeps the build machinery
much simpler.
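The colon-splitting described above is small enough to sketch directly. The class and method names here are invented for illustration; this is not the actual JFR code-generator code, just a hedged model of the argument handling it describes.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the --xml argument handling described above.
// MetadataArgs and splitXmlArg are invented names, not the real generator.
final class MetadataArgs {
    // Split a colon-separated --xml value into its non-empty file names.
    // Empty entries are permitted and simply dropped, so "metadata.xml:::"
    // yields just ["metadata.xml"].
    static List<String> splitXmlArg(String arg) {
        List<String> files = new ArrayList<>();
        for (String name : arg.split(":")) {
            if (!name.isEmpty()) {
                files.add(name);
            }
        }
        return files;
    }
}
```

Allowing empty entries is what lets the build machinery pass a fixed-shape argument like `metadata.xml:$FEATURE_XML` even when a feature's metadata file is absent, which is why the Makefile side becomes simpler.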

I also did the relevant metadata-separation for Shenandoah because that
is what I care about myself. If you'd like other parts treated the
same, just let me know.

Bug:
https://bugs.openjdk.java.net/browse/JDK-8250218
Webrev:
http://cr.openjdk.java.net/~rkennke/JDK-8250218/webrev.00/

Testing: build with and without Shenandoah, run some tests, I'd await
submit-repo results before pushing.

Can I please get a review?

Thanks!
Roman



Re: RFR (M) 8225048: Shenandoah x86_32 support

2019-05-30 Thread Roman Kennke
The changes look good to me. Thanks!!

Roman

> http://cr.openjdk.java.net/~shade/8225048/webrev.01/
> 
> Some history: Shenandoah used to support x86_32 in "passive" mode a long time 
> ago. This mode relies
> only on stop-the-world GC to avoid implementing barriers (basically, runs 
> Degenerated GC all the
> time). It was an interesting mode to see the footprint numbers you can get 
> with uncommits and
> slimmer native pointers with really small microservice-size VMs. This mode 
> was dropped before
> integration upstream, because many Shenandoah tests expect all 
> heuristics/modes to work properly,
> and having the rudimentary x86_32 support was breaking tier1 tests. So we 
> disabled it.
> 
> Today, we have a significantly simplified runtime interface thanks to LRB and 
> elimination of separate
> forwarding pointer slot, and we can build the fully concurrent x86_32 with 
> ease. This allows us to
> maintain 32-bit cleanness in Shenandoah code, plus serves as the proof of 
> concept that Shenandoah
> can be implemented on 32-bit platform.
> 
> I am planning to backport this all the way to 8u, once other improvements are 
> backported, so keeping
> the patch simple is paramount.
> 
> A brief tour of changes:
> 
>  *) One minor change in build config to accept both x86_32 and x86_64 
> (therefore, cc'ing build-dev);
> this is the same check we had before reversal in jdk12;
> 
>  *) C1 changes need to disambiguate between single/double-cpu slots in CAS; this 
> is the same as other
> BarrierSets do;
> 
>  *) C2 changes need shenandoah_x86_32.ad to carry adapters for CAS barriers, 
> plus accept
> StoreIConditional for raw ptr stores on some paths.
> 
>  *) SBSAssembler change reshuffles _LP64 checks to enable x86_32 paths. It 
> needs to deal with
> UseCompressedOops blocks, getting threads into a separate register, etc.;
> 
>  *) Test changes enable running them on x86_32, and generally 32-bit 
> platforms -- I can split them
> out, but they make little sense on their own, without having the product code 
> that actually uses them;
> 
> Builds: {x86_32, x86_64} with/without PCH; other platform builds in CI
> Testing: hotspot_gc_shenandoah, CTW tests, jcstress, ad-hoc footprint 
> experiments
> 
> IIRC, Oracle does not build either x86_32 or Shenandoah, so I did not run it 
> through jdk-submit.
> 



Re: RFR: JDK-8221766: Load-reference barriers for Shenandoah

2019-04-08 Thread Roman Kennke
On 4/7/19 7:18 PM, Roman Kennke wrote:
> > On 4/2/19 10:12 PM, Roman Kennke wrote:
> > > > - No more need for object equals barriers.
> > > 
> > > I'm pleased about that. I really hated the AArch64 Shenandoah
> > > CAS!
> > 
> > I'm sorry to disappoint you, but the CAS barrier is still needed.
> > The
> > memory location may still legally hold a from-space reference, and
> > comparing that to a to-space reference needs some special care. And
> > yes, ZGC has a similar problem as far as I know.
> 
> That's interesting. Could we not simply promote the reference in the
> reference field we're CASing, then do a normal CAS?

Yes. But it requires two CASes in a row. Don't know if that is better
than our loop, which is rarely even entered (normally the CAS-construct 
goes via fast-path with a single CAS).
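The fast-path/retry scheme described above can be modeled in plain Java. This is a toy model with invented names: real Shenandoah operates on forwarded heap oops inside the VM, not on Java objects, so treat this only as a sketch of the idea that a slot may legally hold a stale from-space copy of the expected object.

```java
import java.util.concurrent.atomic.AtomicReference;

// Toy model: an object may have been evacuated, leaving a stale copy behind.
// FwdObj, resolve() and CasBarrier are invented names for illustration.
final class FwdObj {
    volatile FwdObj forwardee; // non-null: this copy was evacuated to to-space
    FwdObj resolve() { return forwardee == null ? this : forwardee; }
}

final class CasBarrier {
    // CAS that treats a stale from-space reference to `expected` as a match.
    static boolean cas(AtomicReference<FwdObj> slot, FwdObj expected, FwdObj newVal) {
        // Fast path: a single CAS, taken when the slot already holds a to-space ref.
        if (slot.compareAndSet(expected, newVal)) {
            return true;
        }
        // Slow path: retry while the slot holds a different copy of the same object.
        FwdObj witness = slot.get();
        while (witness != expected && witness != null
                && witness.resolve() == expected.resolve()) {
            if (slot.compareAndSet(witness, newVal)) {
                return true; // replaced the stale from-space copy
            }
            witness = slot.get();
        }
        return false; // genuinely different value: the CAS legitimately fails
    }
}
```

The loop body is only entered on the false-negative case, which matches the observation above that the fast path with a single CAS is the common case.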

Roman



Re: RFR: JDK-8221766: Load-reference barriers for Shenandoah

2019-04-07 Thread Roman Kennke
On 4/2/19 10:12 PM, Roman Kennke wrote:
> > - No more need for object equals barriers.
> 
> I'm pleased about that. I really hated the AArch64 Shenandoah CAS!

I'm sorry to disappoint you, but the CAS barrier is still needed. The
memory location may still legally hold a from-space reference, and
comparing that to a to-space reference needs some special care. And
yes, ZGC has a similar problem as far as I know.

Roman




Re: RFR: JDK-8221766: Load-reference barriers for Shenandoah

2019-04-04 Thread Roman Kennke

The main difference is that instead of ensuring correct invariant when
we store anything into the heap (e.g. read-barrier before reads,
write-barrier before writes, plus a bunch of other stuff), we ensure the
strong invariance on objects when they get loaded, by employing what is
currently our write-barrier.


OK, so how does this work? Sure, the OOP load promotes an object to
tospace, but how do you ensure that the OOP doesn't become stale when
a later phase occurs?


Whenever we start an evacuation phase, we pre-evacuate everything that's 
referenced by stack or registers, and update those stack slots and 
registers. (We did that with the old barrier-scheme too.)


Roman


Re: RFR: JDK-8221766: Load-reference barriers for Shenandoah

2019-04-03 Thread Roman Kennke

I don't think it should be part of this cleanup.


Fair enough.
I have run several tests today, and removing the is_Phi() call doesn't 
seem to negatively impact Shenandoah.


Updated webrevs:
Incremental:
http://cr.openjdk.java.net/~rkennke/JDK-8221766/webrev.01.diff/
Full:
http://cr.openjdk.java.net/~rkennke/JDK-8221766/webrev.01/

Ok now?

Thanks,
Roman


Please, file separate RFE to push this change with separate review and 
testing.


Thanks,
Vladimir

On 4/3/19 4:18 AM, Roland Westrelin wrote:


Hi Vladimir,


opto/loopnode.cpp new is_Phi check was added. Please, explain.


When we expand barriers, if we find a null check nearby we move the
barrier close to the null check so there's a better chance of converting
it to an implicit null check. That happens as part of a pass of loop
opts. I think that's where that change comes from but I don't remember
the details. In general we need the control that's assigned to a load to
not be too conservative.

Anyway, that change is not required for correctness. But it looks
reasonable to me.

Roland.



Re: RFR: JDK-8221766: Load-reference barriers for Shenandoah

2019-04-02 Thread Roman Kennke

Hi Vladimir,


This is nice cleanup :)

4294 lines changed: 977 ins; 2841 del; 476 mod


Yeah, right? :-)

First, a general question. I don't understand why you need the (diagnostic) 
ShenandoahLoadRefBarrier flag if it is new behavior and you can't use the 
old one because you removed it. I am definitely missing something here.


This is added for the same purpose that we had e.g. 
+/-ShenandoahWriteBarrier before: in order to selectively disable the 
barrier generation, for testing and diagnostics.



Thank you for thinking about Graal:

 >    ==> good for upcoming Graal (sup)port


:-)


opto/loopnode.cpp new is_Phi check was added. Please, explain.


I'm not sure. I believe Roland did this. I'll let him comment on it.


I don't see other issues in C2 code.


:-)

Thanks,
Roman



Regards,
Vladimir

On 4/2/19 2:12 PM, Roman Kennke wrote:

(I am cross-posting this to build-dev and compiler-dev because this
contains some (trivial-ish) shared build and C2 changes. The C2 changes
are almost all reversals of Shenandoah-specific paths that have been
introduced in initial Shenandoah push.)

I would like to propose that we switch to what we came to call 'load
reference barrier' as new barrier scheme for Shenandoah GC.

The main difference is that instead of ensuring correct invariant when
we store anything into the heap (e.g. read-barrier before reads,
write-barrier before writes, plus a bunch of other stuff), we ensure the
strong invariance on objects when they get loaded, by employing what is
currently our write-barrier.

The reason why I'm proposing it is:
- simpler barrier interface
- easier to get good performance out of it
   ==> good for upcoming Graal (sup)port
- reduced maintenance burden (I intend to backport it all the way)

This has a number of advantages:
- Strong invariant means it's a lot easier to reason about the state of
GC and objects
- Much simpler barrier interface. In fact, a lot of stuff that we added
to barrier interfaces after JDK11 will now become unused: no need for
barriers on primitives, no need for object equality barriers, no need
for resolve barriers, etc. Also, some C2 stuff that we added for
Shenandoah can now be removed again. (Those are what comprise most
shared C2 changes.)
- Optimization is much easier: we currently put barriers 'down low'
close to their uses (which might be inside a hot loop), and then work
hard to optimize barriers upwards, e.g. out of loops. By using
load-ref-barriers, we would place them at the outermost site already.
Look how much code is removed from shenandoahSupport.cpp!
- No more need for object equals barriers.
- No more need for 'resolve' barriers.
- All barriers are now conditional, which opens up opportunity for
further optimization later on.
- we can re-enable the fast JNI getfield stuff
- we no longer need the nmethod initializer that initializes embedded
oops to to-space
- We no longer have the problem to use two registers for 'the same'
value (pre- and post-barrier).

The 'only' optimizations that we do in C2 are:
- Look upwards and see if barrier input indicates we don't actually need
the barrier. Would be the case for: constants, nulls, method parameters,
etc (anything that is not like a load). Even though we insert barriers
after loads, you'd be surprised to see how many loads actually disappear.
- Look downwards to check uses of the barrier. If it doesn't feed into
anything that requires a barrier, we can remove it.

Performance doesn't seem to be negatively impacted at all. Some
benchmarks benefit positively from it.

Testing: hotspot_gc_shenandoah, SPECjvm2008, SPECjbb2015, all
of them many times. This patch has baked in shenandoah/jdk for 1.5
months, undergone our rigorous CI, received various bug-fixes, we have
had a close look at the generated code to verify it is sane. jdk/submit
job expected good before push.

Bug:
https://bugs.openjdk.java.net/browse/JDK-8221766
Webrev:
http://cr.openjdk.java.net/~rkennke/JDK-8221766/webrev.00/

Can I please get reviews for this change?

Roman




Re: RFR: JDK-8221766: Load-reference barriers for Shenandoah

2019-04-02 Thread Roman Kennke

Thanks, Erik!

Roman


Build change looks good.

/Erik

On 2019-04-02 14:12, Roman Kennke wrote:

(I am cross-posting this to build-dev and compiler-dev because this
contains some (trivial-ish) shared build and C2 changes. The C2 changes
are almost all reversals of Shenandoah-specific paths that have been
introduced in initial Shenandoah push.)

I would like to propose that we switch to what we came to call 'load
reference barrier' as new barrier scheme for Shenandoah GC.

The main difference is that instead of ensuring correct invariant when
we store anything into the heap (e.g. read-barrier before reads,
write-barrier before writes, plus a bunch of other stuff), we ensure the
strong invariance on objects when they get loaded, by employing what is
currently our write-barrier.

The reason why I'm proposing it is:
- simpler barrier interface
- easier to get good performance out of it
   ==> good for upcoming Graal (sup)port
- reduced maintenance burden (I intend to backport it all the way)

This has a number of advantages:
- Strong invariant means it's a lot easier to reason about the state of
GC and objects
- Much simpler barrier interface. In fact, a lot of stuff that we added
to barrier interfaces after JDK11 will now become unused: no need for
barriers on primitives, no need for object equality barriers, no need
for resolve barriers, etc. Also, some C2 stuff that we added for
Shenandoah can now be removed again. (Those are what comprise most
shared C2 changes.)
- Optimization is much easier: we currently put barriers 'down low'
close to their uses (which might be inside a hot loop), and then work
hard to optimize barriers upwards, e.g. out of loops. By using
load-ref-barriers, we would place them at the outermost site already.
Look how much code is removed from shenandoahSupport.cpp!
- No more need for object equals barriers.
- No more need for 'resolve' barriers.
- All barriers are now conditional, which opens up opportunity for
further optimization later on.
- we can re-enable the fast JNI getfield stuff
- we no longer need the nmethod initializer that initializes embedded
oops to to-space
- We no longer have the problem to use two registers for 'the same'
value (pre- and post-barrier).

The 'only' optimizations that we do in C2 are:
- Look upwards and see if barrier input indicates we don't actually need
the barrier. Would be the case for: constants, nulls, method parameters,
etc (anything that is not like a load). Even though we insert barriers
after loads, you'd be surprised to see how many loads actually disappear.
- Look downwards to check uses of the barrier. If it doesn't feed into
anything that requires a barrier, we can remove it.

Performance doesn't seem to be negatively impacted at all. Some
benchmarks benefit positively from it.

Testing: hotspot_gc_shenandoah, SPECjvm2008, SPECjbb2015, all
of them many times. This patch has baked in shenandoah/jdk for 1.5
months, undergone our rigorous CI, received various bug-fixes, we have
had a close look at the generated code to verify it is sane. jdk/submit
job expected good before push.

Bug:
https://bugs.openjdk.java.net/browse/JDK-8221766
Webrev:
http://cr.openjdk.java.net/~rkennke/JDK-8221766/webrev.00/

Can I please get reviews for this change?

Roman




RFR: JDK-8221766: Load-reference barriers for Shenandoah

2019-04-02 Thread Roman Kennke
(I am cross-posting this to build-dev and compiler-dev because this
contains some (trivial-ish) shared build and C2 changes. The C2 changes
are almost all reversals of Shenandoah-specific paths that have been
introduced in initial Shenandoah push.)

I would like to propose that we switch to what we came to call 'load
reference barrier' as new barrier scheme for Shenandoah GC.

The main difference is that instead of ensuring correct invariant when
we store anything into the heap (e.g. read-barrier before reads,
write-barrier before writes, plus a bunch of other stuff), we ensure the
strong invariance on objects when they get loaded, by employing what is
currently our write-barrier.

The reason why I'm proposing it is:
- simpler barrier interface
- easier to get good performance out of it
  ==> good for upcoming Graal (sup)port
- reduced maintenance burden (I intend to backport it all the way)

This has a number of advantages:
- Strong invariant means it's a lot easier to reason about the state of
GC and objects
- Much simpler barrier interface. In fact, a lot of stuff that we added
to barrier interfaces after JDK11 will now become unused: no need for
barriers on primitives, no need for object equality barriers, no need
for resolve barriers, etc. Also, some C2 stuff that we added for
Shenandoah can now be removed again. (Those are what comprise most
shared C2 changes.)
- Optimization is much easier: we currently put barriers 'down low'
close to their uses (which might be inside a hot loop), and then work
hard to optimize barriers upwards, e.g. out of loops. By using
load-ref-barriers, we would place them at the outermost site already.
Look how much code is removed from shenandoahSupport.cpp!
- No more need for object equals barriers.
- No more need for 'resolve' barriers.
- All barriers are now conditional, which opens up opportunity for
further optimization later on.
- we can re-enable the fast JNI getfield stuff
- we no longer need the nmethod initializer that initializes embedded
oops to to-space
- We no longer have the problem to use two registers for 'the same'
value (pre- and post-barrier).

The 'only' optimizations that we do in C2 are:
- Look upwards and see if barrier input indicates we don't actually need
the barrier. Would be the case for: constants, nulls, method parameters,
etc (anything that is not like a load). Even though we insert barriers
after loads, you'd be surprised to see how many loads actually disappear.
- Look downwards to check uses of the barrier. If it doesn't feed into
anything that requires a barrier, we can remove it.
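The two checks above (look upwards at the barrier's input, look downwards at its uses) can be modeled with a toy node abstraction. All names here are invented for illustration; the real C2 IR and Shenandoah support code are far richer than this sketch.

```java
import java.util.List;

// Toy sketch of the two barrier-elimination checks described above.
// BarrierNode and Kind are invented names, not the actual C2 IR.
final class BarrierNode {
    enum Kind { CONSTANT, NULL_CONST, PARAM, LOAD, FIELD_ACCESS, COMPARE }

    final Kind inputKind;      // what flows into the barrier
    final List<Kind> useKinds; // what consumes the barrier's output

    BarrierNode(Kind inputKind, List<Kind> useKinds) {
        this.inputKind = inputKind;
        this.useKinds = useKinds;
    }

    // Look upwards: constants, nulls and method parameters can never be
    // stale from-space references, so the barrier over them is redundant.
    boolean inputNeverStale() {
        return inputKind == Kind.CONSTANT || inputKind == Kind.NULL_CONST
                || inputKind == Kind.PARAM;
    }

    // Look downwards: if no use actually requires a to-space reference
    // (e.g. nothing dereferences the value), the barrier can be removed.
    boolean noUseNeedsBarrier() {
        return useKinds.stream().noneMatch(k -> k == Kind.FIELD_ACCESS);
    }

    boolean canEliminate() {
        return inputNeverStale() || noUseNeedsBarrier();
    }
}
```

Note how the downward check reflects the earlier point that plain comparisons no longer need barriers under the load-reference scheme, so a barrier feeding only a compare is removable.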

Performance doesn't seem to be negatively impacted at all. Some
benchmarks benefit positively from it.

Testing: hotspot_gc_shenandoah, SPECjvm2008, SPECjbb2015, all
of them many times. This patch has baked in shenandoah/jdk for 1.5
months, undergone our rigorous CI, received various bug-fixes, we have
had a close look at the generated code to verify it is sane. jdk/submit
job expected good before push.

Bug:
https://bugs.openjdk.java.net/browse/JDK-8221766
Webrev:
http://cr.openjdk.java.net/~rkennke/JDK-8221766/webrev.00/

Can I please get reviews for this change?

Roman




Re: test/hotspot/jtreg/runtime/CompressedOops/UseCompressedOops.java failing with -XX:+UseShenandoahGC on x86_64

2019-01-28 Thread Roman Kennke
Isn't anything pushed to 12 (kind of) automatically pushed to 13 too,
eventually?

Roman

> I've seen this has just been pushed to 12 but I think we'd need this
> to 13 too, but:
> 
> https://bugs.openjdk.java.net/browse/JDK-8217873
> 
> has been resolved as duplicate.
> 
> I can help reopening it and pushing the fix to 13 if necessary?
> 
> Thanks,
> Bernard
> 
> On Sat, 26 Jan 2019 at 16:47, B. Blaser  wrote:
>>
>> Hi,
>>
>> Maybe you're already aware of this failing test, but it seems that
>> -XX:+UnlockExperimentalVMOptions is necessary with
>> -XX:+UseShenandoahGC as here under.
>>
>> Regards,
>> Bernard
>>
>> diff --git a/test/hotspot/jtreg/runtime/CompressedOops/UseCompressedOops.java
>> b/test/hotspot/jtreg/runtime/CompressedOops/UseCompressedOops.java
>> --- a/test/hotspot/jtreg/runtime/CompressedOops/UseCompressedOops.java
>> +++ b/test/hotspot/jtreg/runtime/CompressedOops/UseCompressedOops.java
>> @@ -1,5 +1,5 @@
>>  /*
>> - * Copyright (c) 2014, 2018, Oracle and/or its affiliates. All rights 
>> reserved.
>> + * Copyright (c) 2014, 2019, Oracle and/or its affiliates. All rights 
>> reserved.
>>   * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
>>   *
>>   * This code is free software; you can redistribute it and/or modify it
>> @@ -63,7 +63,7 @@
>>  testCompressedOopsModes(args, "-XX:+UseParallelGC");
>>  testCompressedOopsModes(args, "-XX:+UseParallelOldGC");
>>  if (GC.Shenandoah.isSupported()) {
>> -testCompressedOopsModes(args, "-XX:+UseShenandoahGC");
>> +testCompressedOopsModes(args,
>> "-XX:+UnlockExperimentalVMOptions", "-XX:+UseShenandoahGC");
>>  }
>>  }


Re: RFR (XS) 8215356: Disable x86_32 Shenandoah build to avoid hotspot/tier1 failures

2018-12-13 Thread Roman Kennke
ok by me. Thanks!
Roman

Am 13.12.18 um 14:58 schrieb Aleksey Shipilev:
> Bug:
>   https://bugs.openjdk.java.net/browse/JDK-8215356
> 
> There are lots of hotspot/tier1 test failures on x86_32 caused by Shenandoah. This 
> mode is experimental, and
> we can disable it without prejudice to make tests clean. Once x86_32 support 
> is fixed, we can
> re-enable it back.
> 
> Fix:
> 
> diff -r 748cab25d902 -r f9d158363d15 make/autoconf/hotspot.m4
> --- a/make/autoconf/hotspot.m4  Thu Dec 13 14:39:16 2018 +0100
> +++ b/make/autoconf/hotspot.m4  Thu Dec 13 14:48:30 2018 +0100
> @@ -327,7 +327,7 @@
> 
># Only enable Shenandoah on supported arches
>AC_MSG_CHECKING([if shenandoah can be built])
> -  if test "x$OPENJDK_TARGET_CPU_ARCH" = "xx86" || test 
> "x$OPENJDK_TARGET_CPU" = "xaarch64" ; then
> +  if test "x$OPENJDK_TARGET_CPU" = "xx86_64" || test "x$OPENJDK_TARGET_CPU" 
> = "xaarch64" ; then
>  AC_MSG_RESULT([yes])
>else
>  DISABLED_JVM_FEATURES="$DISABLED_JVM_FEATURES shenandoahgc"
> 
> Testing: x86_64 tier1_gc_shenandoah, x86_32 hotspot/tier1.
> 
> Thanks,
> -Aleksey
> 



Re: RFR (round 6), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-12-08 Thread Roman Kennke
Anybody has an idea what's up with the test below? As far as I can tell,
it passes when I run locally.

Thanks,
Roman

mach5-one-rkennke-JDK-8214259-20181208-1149-13785: FAILED, Failed tests: 1

Build Details: 2018-12-08-1147316.roman.source
1 Failed Test
Test:        compiler/aot/calls/fromAot/AotInvokeDynamic2AotTest.java
Tier:        tier1
Platform:    linux-x64-debug
Keywords:    othervm driver
Description: Exception: java.lang.AssertionError: duplicate classes for name
Ljava/lang/Object;, fingerprints old: ..., new: ..., klass pointers old:
meta{HotSpotType}, new: meta{HotSpotType}
Mach5 Tasks Results Summary

FAILED: 0
EXECUTED_WITH_FAILURE: 1
NA: 0
KILLED: 0
PASSED: 75
UNABLE_TO_RUN: 0
Test

1 Executed with failure

tier1-debug-open_test_hotspot_jtreg_tier1_compiler_not_xcomp-linux-x64-debug-37
Results: total: 73, passed: 72, failed: 1, skipped: 0



Am 08.12.18 um 12:33 schrieb Roman Kennke:
> Zhengyu's and Thomas' TaskQueue stuff landed in jdk/jdk and we
> integrated it into shenandoah/jdk and in the master patch and webrevs.
> I'm updating the webrevs in-place in:
> 
> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/06/
> 
> This is going to be the final webrev that I intend to push on Monday,
> unless somebody stops me or we face serious merge conflicts by then.
> 
> I've just pushed this to jdk/submit for testing. I will also do more
> testing locally over the weekend.
> 
> Thanks everybody for reviewing and helping and being patient!
> 
> Cheers,
> Roman
> 
>> Here comes round 6, possibly/hopefully the last round of webrevs for
>> upstreaming Shenandoah:
>>
>> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/06/
>>
>> It incorporates:
>> - renames vm_operations_shenandoah* to shenandoahVMOperations*, as
>> suggested by Coleen
>> - reshuffles gcCause in shared-gc SA as suggested by Per
>> - print number of threads in SA, as suggested by Jini
>> - fixed a problem in webrev generation that did not track move of
>> CriticalNativeStress.java and CriticalNativeArgs.java
>>
>> The CSR has been approved, the JEP moved to target jdk12, and I got
>> positive reviews for all parts. I intend to push this early next week,
>> unless somebody stops me. If Zhengyu's and Thomas' TaskQueue change goes
>> in by then, I'll incorporate it.
>>
>> Thanks everybody for reviewing and reviewing and reviewing again ;-)
>>
>> Cheers,
>> Roman
>>
>>> Round 5 of Shenandoah review includes:
>>> - A fix for the @requires tag in TestFullGCCountTest.java. It should be
>>> correct now. We believe the CMS @requires was also not quite right and
>>> fixed it the same.
>>>
>>> It reads now: Don't run this test if:
>>>  - Actual GC set by harness is CMS *and* ExplicitGCInvokesConcurrent is
>>> true, as set by harness
>>>  - Actual GC set by harness is Shenandoah *and*
>>> ExplicitGCInvokesConcurrent is not set false by harness (it's true by
>>> default in Shenandoah, so this needs to be double-inverted).
>>>
>>> The @requires for CMS was wrong before (we think), because it would also
>>> filter defaultGC + ExplicitGCInvokesConcurrent.
>>>
>>> - Sorting of macros was fixed, as was pointed out by Per
>>> - Some stuff was added to SA, as suggested by Jini
>>> - Rebased on most current jdk/jdk code
>>>
>>> Webrevs:
>>> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/05/
>>>
>>> I also need reviews from GC reviewers for the CSR:
>>> https://bugs.openjdk.java.net/browse/JDK-8214349
>>>
>>> I already got reviews for:
>>> [x] shared-runtime (coleenp)
>>> [x] shared-compiler (kvn)
>>>
>>> I got reviews for shared-build, but an earlier version, so maybe makes
>>> sense to look over this again. Erik J, Magnus?
>>>
>>> I still need approvals for:
>>> [ ] shared-build  (kvn, erikj, ihse, pliden)
>>> [ ] shared-gc (pliden, kbarrett)
>>> [ ] shared-serviceability (jgeorge, pliden)
>>> [ ] shared-tests  (lmesnik, pliden)
>>> [ ] shenandoah-gc
>>> [ ] shenandoah-tests
>>>
>>>
>>> Thanks for your patience and ongoing support!
>>>
>>> Cheers,
>>> Roman
>>>
>>>
>>>> Hi all,
>>>>
>>>> here comes round 4 of Shenandoah upstreaming review:
>>>>
>>>> This includes fixes for the issues that Per brought up:
>>>> - Verify and gracefully reject dangerous flags combinatio

Re: RFR (round 6), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-12-08 Thread Roman Kennke
Zhengyu's and Thomas' TaskQueue stuff landed in jdk/jdk and we
integrated it into shenandoah/jdk and in the master patch and webrevs.
I'm updating the webrevs in-place in:

http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/06/

This is going to be the final webrev that I intend to push on Monday,
unless somebody stops me or we face serious merge conflicts by then.

I've just pushed this to jdk/submit for testing. I will also do more
testing locally over the weekend.

Thanks everybody for reviewing and helping and being patient!

Cheers,
Roman

> Here comes round 6, possibly/hopefully the last round of webrevs for
> upstreaming Shenandoah:
> 
> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/06/
> 
> It incorporates:
> - renames vm_operations_shenandoah* to shenandoahVMOperations*, as
> suggested by Coleen
> - reshuffles gcCause in shared-gc SA as suggested by Per
> - print number of threads in SA, as suggested by Jini
> - fixed a problem in webrev generation that did not track move of
> CriticalNativeStress.java and CriticalNativeArgs.java
> 
> The CSR has been approved, the JEP moved to target jdk12, and I got
> positive reviews for all parts. I intend to push this early next week,
> unless somebody stops me. If Zhengyu's and Thomas' TaskQueue change goes
> in by then, I'll incorporate it.
> 
> Thanks everybody for reviewing and reviewing and reviewing again ;-)
> 
> Cheers,
> Roman
> 
>> Round 5 of Shenandoah review includes:
>> - A fix for the @requires tag in TestFullGCCountTest.java. It should be
>> correct now. We believe the CMS @requires was also not quite right and
>> fixed it the same.
>>
>> It reads now: Don't run this test if:
>>  - Actual GC set by harness is CMS *and* ExplicitGCInvokesConcurrent is
>> true, as set by harness
>>  - Actual GC set by harness is Shenandoah *and*
>> ExplicitGCInvokesConcurrent is not set false by harness (it's true by
>> default in Shenandoah, so this needs to be double-inverted).
>>
>> The @requires for CMS was wrong before (we think), because it would also
>> filter defaultGC + ExplicitGCInvokesConcurrent.
>>
>> - Sorting of macros was fixed, as was pointed out by Per
>> - Some stuff was added to SA, as suggested by Jini
>> - Rebased on most current jdk/jdk code
>>
>> Webrevs:
>> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/05/
>>
>> I also need reviews from GC reviewers for the CSR:
>> https://bugs.openjdk.java.net/browse/JDK-8214349
>>
>> I already got reviews for:
>> [x] shared-runtime (coleenp)
>> [x] shared-compiler (kvn)
>>
>> I got reviews for shared-build, but an earlier version, so maybe makes
>> sense to look over this again. Erik J, Magnus?
>>
>> I still need approvals for:
>> [ ] shared-build  (kvn, erikj, ihse, pliden)
>> [ ] shared-gc (pliden, kbarrett)
>> [ ] shared-serviceability (jgeorge, pliden)
>> [ ] shared-tests  (lmesnik, pliden)
>> [ ] shenandoah-gc
>> [ ] shenandoah-tests
>>
>>
>> Thanks for your patience and ongoing support!
>>
>> Cheers,
>> Roman
>>
>>
>>> Hi all,
>>>
>>> here comes round 4 of Shenandoah upstreaming review:
>>>
>>> This includes fixes for the issues that Per brought up:
>>> - Verify and gracefully reject dangerous flag combinations that
>>> disable required barriers
>>> - Revisited @requires filters in tests
>>> - Trim unused code from Shenandoah's SA impl
>>> - Move ShenandoahGCTracer to gc/shenandoah
>>> - Fix ordering of GC names in various files
>>> - Rename UINT64_FORMAT_HEX_W to UINT64_FORMAT_X_W
>>>
>>> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/04/
>>>
>>> Thanks everybody for taking time to review this!
>>> Roman
>>>
 Hello all,

 Thanks so far for all the reviews and support!

 I forgot to update the 'round' yesterday. We are in round 3 now :-)
 Also, I fixed the numbering of my webrevs to match the review-round. ;-)

 Things we've changed today:
 - We moved shenandoah-specific code out of .ad files into our own .ad
 files under gc/shenandoah (in shenandoah-gc), how cool is that? This
 requires an addition in build machinery though, see
 make/hotspot/gensrc/GensrcAdlc.gmk (in shared-build).
 - Improved zero-disabling and build-code-simplification as suggested by
 Magnus and Per
 - Cleaned up some leftovers in C2
 - Improved C2 loop opts code by introducing new APIs in
 BarrierSetC2. See the new APIs in shared-gc under BarrierSetC2.hpp.
 - I don't see where it makes sense to put INCLUDE_SHENANDOAHGC guards now.
 - We would all very much prefer to keep ShenandoahXYZNode names, as
 noted earlier. This stuff is Shenandoah-specific, so let's just call it
 that.
 - Rehashed Shenandoah tests (formatting, naming, directory layout, etc)
 - Rebased on jdk-12+22

 - Question: let us know if you need separate RFE for the new BSC2 APIs?

 http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/03/

 Thanks,
 Roman

> 

RFR (round 6), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-12-07 Thread Roman Kennke
Here comes round 6, possibly/hopefully the last round of webrevs for
upstreaming Shenandoah:

http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/06/

It incorporates:
- renames vm_operations_shenandoah* to shenandoahVMOperations*, as
suggested by Coleen
- reshuffles gcCause in shared-gc SA as suggested by Per
- print number of threads in SA, as suggested by Jini
- fixed a problem in webrev generation that did not track move of
CriticalNativeStress.java and CriticalNativeArgs.java

The CSR has been approved, the JEP moved to target jdk12, and I got
positive reviews for all parts. I intend to push this early next week,
unless somebody stops me. If Zhengyu's and Thomas' TaskQueue change goes
in by then, I'll incorporate it.

Thanks everybody for reviewing and reviewing and reviewing again ;-)

Cheers,
Roman

> Round 5 of Shenandoah review includes:
> - A fix for the @requires tag in TestFullGCCountTest.java. It should be
> correct now. We believe the CMS @requires was also not quite right and
> fixed it the same.
> 
> It reads now: Don't run this test if:
>  - Actual GC set by harness is CMS *and* ExplicitGCInvokesConcurrent is
> true, as set by harness
>  - Actual GC set by harness is Shenandoah *and*
> ExplicitGCInvokesConcurrent is not set false by harness (it's true by
> default in Shenandoah, so this needs to be double-inverted).
> 
> The @requires for CMS was wrong before (we think), because it would also
> filter defaultGC + ExplicitGCInvokesConcurrent.
> 
> - Sorting of macros was fixed, as was pointed out by Per
> - Some stuff was added to SA, as suggested by Jini
> - Rebased on most current jdk/jdk code
> 
> Webrevs:
> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/05/
> 
> I also need reviews from GC reviewers for the CSR:
> https://bugs.openjdk.java.net/browse/JDK-8214349
> 
> I already got reviews for:
> [x] shared-runtime (coleenp)
> [x] shared-compiler (kvn)
> 
> I got reviews for shared-build, but an earlier version, so maybe makes
> sense to look over this again. Erik J, Magnus?
> 
> I still need approvals for:
> [ ] shared-build  (kvn, erikj, ihse, pliden)
> [ ] shared-gc (pliden, kbarrett)
> [ ] shared-serviceability (jgeorge, pliden)
> [ ] shared-tests  (lmesnik, pliden)
> [ ] shenandoah-gc
> [ ] shenandoah-tests
> 
> 
> Thanks for your patience and ongoing support!
> 
> Cheers,
> Roman
> 
> 
>> Hi all,
>>
>> here comes round 4 of Shenandoah upstreaming review:
>>
>> This includes fixes for the issues that Per brought up:
>> - Verify and gracefully reject dangerous flag combinations that
>> disable required barriers
>> - Revisited @requires filters in tests
>> - Trim unused code from Shenandoah's SA impl
>> - Move ShenandoahGCTracer to gc/shenandoah
>> - Fix ordering of GC names in various files
>> - Rename UINT64_FORMAT_HEX_W to UINT64_FORMAT_X_W
>>
>> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/04/
>>
>> Thanks everybody for taking time to review this!
>> Roman
>>
>>> Hello all,
>>>
>>> Thanks so far for all the reviews and support!
>>>
>>> I forgot to update the 'round' yesterday. We are in round 3 now :-)
>>> Also, I fixed the numbering of my webrevs to match the review-round. ;-)
>>>
>>> Things we've changed today:
>>> - We moved shenandoah-specific code out of .ad files into our own .ad
>>> files under gc/shenandoah (in shenandoah-gc), how cool is that? This
>>> requires an addition in build machinery though, see
>>> make/hotspot/gensrc/GensrcAdlc.gmk (in shared-build).
>>> - Improved zero-disabling and build-code-simplification as suggested by
>>> Magnus and Per
>>> - Cleaned up some leftovers in C2
>>> - Improved C2 loop opts code by introducing new APIs in
>>> BarrierSetC2. See the new APIs in shared-gc under BarrierSetC2.hpp.
>>> - I don't see where it makes sense to put INCLUDE_SHENANDOAHGC guards now.
>>> - We would all very much prefer to keep ShenandoahXYZNode names, as
>>> noted earlier. This stuff is Shenandoah-specific, so let's just call it
>>> that.
>>> - Rehashed Shenandoah tests (formatting, naming, directory layout, etc)
>>> - Rebased on jdk-12+22
>>>
>>> - Question: let us know if you need separate RFE for the new BSC2 APIs?
>>>
>>> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/03/
>>>
>>> Thanks,
>>> Roman
>>>
 Alright, we fixed:
 - The minor issues that Kim reported in shared-gc
 - A lot of fixes in shared-tests according to Leonid's review
 - Disabled SA heapdumping similar to ZGC as Per suggested

 Some notes:
 Leonid:  test/hotspot/jtreg/gc/TestFullGCCount.java was actually
 correct. The @requires there means to exclude runs with both CMS and
 ExplicitGCInvokesConcurrent at the same time, because that would be
 (expectedly) failing. It can run CMS, default GC and any other GC just
 fine. Adding the same clause for Shenandoah means the same, and filters
 the combination (+UseShenandoahGC)+(+ExplicitGCInvokesConcurrent). I
 made the condition 

Re: RFR (round 5), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-12-06 Thread Roman Kennke
Hi Coleen,

> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/05/shenandoah-gc/src/hotspot/share/gc/shenandoah/vm_operations_shenandoah.cpp.html
> 
> 
> Can you rename these to shenandoahVMOperations.hpp/cpp to match the
> newly agreed upon naming convention for this?
> 
> See 8214791: Consistently name gc files containing VM operations [Was:
> Re: RFR (S): 8214791: Rename vm_operations_g1* files to g1VMOperations*]

Doing so:
http://mail.openjdk.java.net/pipermail/shenandoah-dev/2018-December/008595.html

I will integrate this in next round of webrevs.

I expect the next will be the final round of webrevs. I got positive
reviews for all shared-* parts, I'm waiting for shenandoah-* reviews
from Shenandoah devs, plus Zhengyu+Thomas' TaskQueue stuff to arrive in
upstream jdk. The CSR for Shenandoah flags has been approved, and the
JEP should be moved to targeted JDK12 ~tomorrow. I intend/expect to push
Shenandoah during the next couple of days, unless somebody speaks up
until then :-)

Thanks,
Roman

> 
> thanks,
> Coleen
> 
> On 12/4/18 3:37 PM, Roman Kennke wrote:
>> Thanks, Leonid, for reviewing!
>>
>> Roman
>>
>>
>>> Hi
>>>
>>> The shared tests changes look good to me. Thank you for fixing and
>>> testing different combinations.
>>>
>>> Leonid
>>>
>>>> On Dec 3, 2018, at 11:10 PM, Roman Kennke  wrote:
>>>>
>>>> Round 5 of Shenandoah review includes:
>>>> - A fix for the @requires tag in TestFullGCCountTest.java. It should be
>>>> correct now. We believe the CMS @requires was also not quite right and
>>>> fixed it the same.
>>>>
>>>> It reads now: Don't run this test if:
>>>> - Actual GC set by harness is CMS *and* ExplicitGCInvokesConcurrent is
>>>> true, as set by harness
>>>> - Actual GC set by harness is Shenandoah *and*
>>>> ExplicitGCInvokesConcurrent is not set false by harness (it's true by
>>>> default in Shenandoah, so this needs to be double-inverted).
>>>>
>>>> The @requires for CMS was wrong before (we think), because it would
>>>> also
>>>> filter defaultGC + ExplicitGCInvokesConcurrent.
>>>>
>>>> - Sorting of macros was fixed, as was pointed out by Per
>>>> - Some stuff was added to SA, as suggested by Jini
>>>> - Rebased on most current jdk/jdk code
>>>>
>>>> Webrevs:
>>>> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/05/
>>>>
>>>> I also need reviews from GC reviewers for the CSR:
>>>> https://bugs.openjdk.java.net/browse/JDK-8214349
>>>>
>>>> I already got reviews for:
>>>> [x] shared-runtime (coleenp)
>>>> [x] shared-compiler (kvn)
>>>>
>>>> I got reviews for shared-build, but an earlier version, so maybe makes
>>>> sense to look over this again. Erik J, Magnus?
>>>>
>>>> I still need approvals for:
>>>> [ ] shared-build  (kvn, erikj, ihse, pliden)
>>>> [ ] shared-gc (pliden, kbarrett)
>>>> [ ] shared-serviceability (jgeorge, pliden)
>>>> [ ] shared-tests  (lmesnik, pliden)
>>>> [ ] shenandoah-gc
>>>> [ ] shenandoah-tests
>>>>
>>>>
>>>> Thanks for your patience and ongoing support!
>>>>
>>>> Cheers,
>>>> Roman
>>>>
>>>>
>>>>> Hi all,
>>>>>
>>>>> here comes round 4 of Shenandoah upstreaming review:
>>>>>
>>>>> This includes fixes for the issues that Per brought up:
>>>>> - Verify and gracefully reject dangerous flag combinations that
>>>>> disable required barriers
>>>>> - Revisited @requires filters in tests
>>>>> - Trim unused code from Shenandoah's SA impl
>>>>> - Move ShenandoahGCTracer to gc/shenandoah
>>>>> - Fix ordering of GC names in various files
>>>>> - Rename UINT64_FORMAT_HEX_W to UINT64_FORMAT_X_W
>>>>>
>>>>> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/04/
>>>>>
>>>>> Thanks everybody for taking time to review this!
>>>>> Roman
>>>>>
>>>>>> Hello all,
>>>>>>
>>>>>> Thanks so far for all the reviews and support!
>>>>>>
>>>>>> I forgot to update the 'round' yesterday. We are in round 3 now :-)
>>>>>> Also, I fixed the

Re: RFR (round 5), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-12-04 Thread Roman Kennke
Thanks, Leonid, for reviewing!

Roman


> Hi
> 
> The shared tests changes look good to me. Thank you for fixing and testing 
> different combinations. 
> 
> Leonid
> 
>> On Dec 3, 2018, at 11:10 PM, Roman Kennke  wrote:
>>
>> Round 5 of Shenandoah review includes:
>> - A fix for the @requires tag in TestFullGCCountTest.java. It should be
>> correct now. We believe the CMS @requires was also not quite right and
>> fixed it the same.
>>
>> It reads now: Don't run this test if:
>> - Actual GC set by harness is CMS *and* ExplicitGCInvokesConcurrent is
>> true, as set by harness
>> - Actual GC set by harness is Shenandoah *and*
>> ExplicitGCInvokesConcurrent is not set false by harness (it's true by
>> default in Shenandoah, so this needs to be double-inverted).
>>
>> The @requires for CMS was wrong before (we think), because it would also
>> filter defaultGC + ExplicitGCInvokesConcurrent.
>>
>> - Sorting of macros was fixed, as was pointed out by Per
>> - Some stuff was added to SA, as suggested by Jini
>> - Rebased on most current jdk/jdk code
>>
>> Webrevs:
>> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/05/
>>
>> I also need reviews from GC reviewers for the CSR:
>> https://bugs.openjdk.java.net/browse/JDK-8214349
>>
>> I already got reviews for:
>> [x] shared-runtime (coleenp)
>> [x] shared-compiler (kvn)
>>
>> I got reviews for shared-build, but an earlier version, so maybe makes
>> sense to look over this again. Erik J, Magnus?
>>
>> I still need approvals for:
>> [ ] shared-build  (kvn, erikj, ihse, pliden)
>> [ ] shared-gc (pliden, kbarrett)
>> [ ] shared-serviceability (jgeorge, pliden)
>> [ ] shared-tests  (lmesnik, pliden)
>> [ ] shenandoah-gc
>> [ ] shenandoah-tests
>>
>>
>> Thanks for your patience and ongoing support!
>>
>> Cheers,
>> Roman
>>
>>
>>> Hi all,
>>>
>>> here comes round 4 of Shenandoah upstreaming review:
>>>
>>> This includes fixes for the issues that Per brought up:
>>> - Verify and gracefully reject dangerous flag combinations that
>>> disable required barriers
>>> - Revisited @requires filters in tests
>>> - Trim unused code from Shenandoah's SA impl
>>> - Move ShenandoahGCTracer to gc/shenandoah
>>> - Fix ordering of GC names in various files
>>> - Rename UINT64_FORMAT_HEX_W to UINT64_FORMAT_X_W
>>>
>>> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/04/
>>>
>>> Thanks everybody for taking time to review this!
>>> Roman
>>>
>>>> Hello all,
>>>>
>>>> Thanks so far for all the reviews and support!
>>>>
>>>> I forgot to update the 'round' yesterday. We are in round 3 now :-)
>>>> Also, I fixed the numbering of my webrevs to match the review-round. ;-)
>>>>
>>>> Things we've changed today:
>>>> - We moved shenandoah-specific code out of .ad files into our own .ad
>>>> files under gc/shenandoah (in shenandoah-gc), how cool is that? This
>>>> requires an addition in build machinery though, see
>>>> make/hotspot/gensrc/GensrcAdlc.gmk (in shared-build).
>>>> - Improved zero-disabling and build-code-simplification as suggested by
>>>> Magnus and Per
>>>> - Cleaned up some leftovers in C2
>>>> - Improved C2 loop opts code by introducing new APIs in
>>>> BarrierSetC2. See the new APIs in shared-gc under BarrierSetC2.hpp.
>>>> - I don't see where it makes sense to put INCLUDE_SHENANDOAHGC guards now.
>>>> - We would all very much prefer to keep ShenandoahXYZNode names, as
>>>> noted earlier. This stuff is Shenandoah-specific, so let's just call it
>>>> that.
>>>> - Rehashed Shenandoah tests (formatting, naming, directory layout, etc)
>>>> - Rebased on jdk-12+22
>>>>
>>>> - Question: let us know if you need separate RFE for the new BSC2 APIs?
>>>>
>>>> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/03/
>>>>
>>>> Thanks,
>>>> Roman
>>>>
>>>>> Alright, we fixed:
>>>>> - The minor issues that Kim reported in shared-gc
>>>>> - A lot of fixes in shared-tests according to Leonid's review
>>>>> - Disabled SA heapdumping similar to ZGC as Per suggested
>>>>>
>>>>> Some notes:
>>>>> Leonid

Re: RFR (round 4), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-12-04 Thread Roman Kennke
Hi Per,

>> Round 5 of Shenandoah review includes:
>> - A fix for the @requires tag in TestFullGCCountTest.java. It should be
>> correct now. We believe the CMS @requires was also not quite right and
>> fixed it the same.
>>
>> It reads now: Don't run this test if:
>>   - Actual GC set by harness is CMS *and* ExplicitGCInvokesConcurrent is
>> true, as set by harness
>>   - Actual GC set by harness is Shenandoah *and*
>> ExplicitGCInvokesConcurrent is not set false by harness (it's true by
>> default in Shenandoah, so this needs to be double-inverted).
>>
>> The @requires for CMS was wrong before (we think), because it would also
>> filter defaultGC + ExplicitGCInvokesConcurrent.
>>
>> - Sorting of macros was fixed, as was pointed out by Per
>> - Some stuff was added to SA, as suggested by Jini
>> - Rebased on most current jdk/jdk code
>>
>> Webrevs:
>> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/05/
>>
>> I also need reviews from GC reviewers for the CSR:
>> https://bugs.openjdk.java.net/browse/JDK-8214349
>>
>> I already got reviews for:
>> [x] shared-runtime (coleenp)
>> [x] shared-compiler (kvn)
>>
>> I got reviews for shared-build, but an earlier version, so maybe makes
>> sense to look over this again. Erik J, Magnus?
>>
>> I still need approvals for:
>> [ ] shared-build  (kvn, erikj, ihse, pliden)
>> [ ] shared-gc (pliden, kbarrett)
>> [ ] shared-serviceability (jgeorge, pliden)
>> [ ] shared-tests  (lmesnik, pliden)
> 
> The above parts look good to me. Reviewed.

Great! Thanks!

> Just one tiny nit (and I don't need to see a new webrev for this):
> 
> In src/hotspot/share/gc/shared/gcCause.cpp you have this:
> 
> +    case _shenandoah_allocation_failure_evac:
> +  return "Allocation Failure During Evac";
> +
> +    case _shenandoah_stop_vm:
> +  return "Stopping VM";
> +
> +    case _shenandoah_concurrent_gc:
> +  return "Shenandoah Concurrent GC";
> +
> +    case _shenandoah_traversal_gc:
> +  return "Shenandoah Traversal GC";
> +
> +    case _shenandoah_upgrade_to_full_gc:
> +  return "Shenandoah Upgrade To Full GC";
> +
> 
> And in
> src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/gc/shared/GCCause.java
> you have this:
> 
> +  _shenandoah_stop_vm ("Stop VM"),
> +  _shenandoah_allocation_failure_evac ("Allocation Failure During
> Evacuation"),
> +  _shenandoah_concurrent_gc ("Concurrent GC"),
> +  _shenandoah_traversal_gc ("Traversal GC"),
> +  _shenandoah_upgrade_to_full_gc ("Upgrade to Full GC"),
> 
> It would be good to have the exact same strings in both places. There
> are currently small differences in all of them. "Evac" vs "Evacuation",
> "Stop" vs "Stopping", "Shenandoah" vs "", etc.
> 
> May I also suggest that you skip "Shenandoah" in things like "Shenandoah
> Concurrent GC" as I kind of think it's implied by the context. But I
> also know that CMS/G1 isn't consistent on this point. An alternative
> would be to add "Shenandoah" to all of the strings to keep things
> consistent, but I'm not sure I like that better. You decide.

I'm addressing it here:
http://mail.openjdk.java.net/pipermail/shenandoah-dev/2018-December/008572.html


>> [ ] shenandoah-gc
>> [ ] shenandoah-tests
> 
> I haven't looked very much on these parts, and I didn't plan to do so in
> detail right now. I think it's fine of the folks that have been working
> on the Shenandoah code reviewed this.

Yes, I think so too (except maybe stuff like shenandoah_globals.hpp, but
that was already reviewed I think).

Thanks for reviewing, Per!

Roman

> cheers,
> Per
> 
>>
>>
>> Thanks for your patience and ongoing support!
>>
>> Cheers,
>> Roman
>>
>>> Hi all,
>>>
>>> here comes round 4 of Shenandoah upstreaming review:
>>>
>>> This includes fixes for the issues that Per brought up:
>>> - Verify and gracefully reject dangerous flag combinations that
>>> disable required barriers
>>> - Revisited @requires filters in tests
>>> - Trim unused code from Shenandoah's SA impl
>>> - Move ShenandoahGCTracer to gc/shenandoah
>>> - Fix ordering of GC names in various files
>>> - Rename UINT64_FORMAT_HEX_W to UINT64_FORMAT_X_W
>>>
>>> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/04/
>>>
>>> Thanks everybody for taking time to review this!
>>> Roman
>>>
 Hello all,

 Thanks so far for all the reviews and support!

 I forgot to update the 'round' yesterday. We are in round 3 now :-)
 Also, I fixed the numbering of my webrevs to match the review-round.
 ;-)

 Things we've changed today:
 - We moved shenandoah-specific code out of .ad files into our own .ad
 files under gc/shenandoah (in shenandoah-gc), how cool is that? This
 requires an addition in build machinery though, see
 make/hotspot/gensrc/GensrcAdlc.gmk (in shared-build).
 - Improved zero-disabling and build-code-simplification as suggested by
 Magnus and Per
 - Cleaned up some leftovers in C2
 - Improved C2 loop opts code by introducing 

Re: RFR (round 4), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-12-04 Thread Roman Kennke
Hi Magnus,

>> I got reviews for shared-build, but an earlier version, so maybe makes
>> sense to look over this again. Erik J, Magnus?
> 
> Build changes look good. 

Thanks, Magnus!

Roman




Re: RFR (round 4), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-12-04 Thread Roman Kennke
Hi Jini,

> Thank you for making the changes. The SA portion looks good to me.

Thank you!

> One
> nit though:
> 
> In
> src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/tools/HeapSummary.java,
> in printGCAlgorithm(), does displaying the number of Parallel GC threads
> not make sense for Shenandoah (like it is for G1, ZGC, etc)?

I suppose it does. I will add it.

Thanks,
Roman

> Thank you,
> Jini.
> 
> 
> On 12/4/2018 12:57 AM, Roman Kennke wrote:
>> Round 5 of Shenandoah review includes:
>> - A fix for the @requires tag in TestFullGCCountTest.java. It should be
>> correct now. We believe the CMS @requires was also not quite right and
>> fixed it the same.
>>
>> It reads now: Don't run this test if:
>>   - Actual GC set by harness is CMS *and* ExplicitGCInvokesConcurrent is
>> true, as set by harness
>>   - Actual GC set by harness is Shenandoah *and*
>> ExplicitGCInvokesConcurrent is not set false by harness (it's true by
>> default in Shenandoah, so this needs to be double-inverted).
>>
>> The @requires for CMS was wrong before (we think), because it would also
>> filter defaultGC + ExplicitGCInvokesConcurrent.
>>
>> - Sorting of macros was fixed, as was pointed out by Per
>> - Some stuff was added to SA, as suggested by Jini
>> - Rebased on most current jdk/jdk code
>>
>> Webrevs:
>> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/05/
>>
>> I also need reviews from GC reviewers for the CSR:
>> https://bugs.openjdk.java.net/browse/JDK-8214349
>>
>> I already got reviews for:
>> [x] shared-runtime (coleenp)
>> [x] shared-compiler (kvn)
>>
>> I got reviews for shared-build, but an earlier version, so maybe makes
>> sense to look over this again. Erik J, Magnus?
>>
>> I still need approvals for:
>> [ ] shared-build  (kvn, erikj, ihse, pliden)
>> [ ] shared-gc (pliden, kbarrett)
>> [ ] shared-serviceability (jgeorge, pliden)
>> [ ] shared-tests  (lmesnik, pliden)
>> [ ] shenandoah-gc
>> [ ] shenandoah-tests
>>
>>
>> Thanks for your patience and ongoing support!
>>
>> Cheers,
>> Roman
>>
>>> Hi all,
>>>
>>> here comes round 4 of Shenandoah upstreaming review:
>>>
>>> This includes fixes for the issues that Per brought up:
>>> - Verify and gracefully reject dangerous flag combinations that
>>> disable required barriers
>>> - Revisited @requires filters in tests
>>> - Trim unused code from Shenandoah's SA impl
>>> - Move ShenandoahGCTracer to gc/shenandoah
>>> - Fix ordering of GC names in various files
>>> - Rename UINT64_FORMAT_HEX_W to UINT64_FORMAT_X_W
>>>
>>> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/04/
>>>
>>> Thanks everybody for taking time to review this!
>>> Roman
>>>
>>>> Hello all,
>>>>
>>>> Thanks so far for all the reviews and support!
>>>>
>>>> I forgot to update the 'round' yesterday. We are in round 3 now :-)
>>>> Also, I fixed the numbering of my webrevs to match the review-round.
>>>> ;-)
>>>>
>>>> Things we've changed today:
>>>> - We moved shenandoah-specific code out of .ad files into our own .ad
>>>> files under gc/shenandoah (in shenandoah-gc), how cool is that? This
>>>> requires an addition in build machinery though, see
>>>> make/hotspot/gensrc/GensrcAdlc.gmk (in shared-build).
>>>> - Improved zero-disabling and build-code-simplification as suggested by
>>>> Magnus and Per
>>>> - Cleaned up some leftovers in C2
>>>> - Improved C2 loop opts code by introducing new APIs in
>>>> BarrierSetC2. See the new APIs in shared-gc under BarrierSetC2.hpp.
>>>> - I don't see where it makes sense to put INCLUDE_SHENANDOAHGC
>>>> guards now.
>>>> - We would all very much prefer to keep ShenandoahXYZNode names, as
>>>> noted earlier. This stuff is Shenandoah-specific, so let's just call it
>>>> that.
>>>> - Rehashed Shenandoah tests (formatting, naming, directory layout, etc)
>>>> - Rebased on jdk-12+22
>>>>
>>>> - Question: let us know if you need separate RFE for the new BSC2 APIs?
>>>>
>>>> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/03/
>>>>
>>>> Thanks,
>>>> Roman
>>>>
>>>>> Alright, we fixed:
>>>>> - The minor issues that Kim reported 

RFR (round 5), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-12-03 Thread Roman Kennke
Round 5 of Shenandoah review includes:
- A fix for the @requires tag in TestFullGCCount.java. It should be
correct now. We believe the CMS @requires was also not quite right and
fixed it the same.

It reads now: Don't run this test if:
 - Actual GC set by harness is CMS *and* ExplicitGCInvokesConcurrent is
true, as set by harness
 - Actual GC set by harness is Shenandoah *and*
ExplicitGCInvokesConcurrent is not set false by harness (it's true by
default in Shenandoah, so this needs to be double-inverted).

The @requires for CMS was wrong before (we think), because it would also
filter defaultGC + ExplicitGCInvokesConcurrent.
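
As an illustrative sketch (not the exact tag from the webrev), the two
exclusions above could be written in jtreg @requires syntax along these
lines, using the standard vm.gc and vm.opt.<flag> properties; the precise
clause in the patch may differ:

```java
/*
 * Illustrative sketch only, not the actual tag from the patch.
 * Skip the test when System.gc() would run concurrently:
 *  - CMS together with -XX:+ExplicitGCInvokesConcurrent, or
 *  - Shenandoah unless ExplicitGCInvokesConcurrent was explicitly set
 *    to false (it defaults to true under Shenandoah, hence the
 *    "double inversion" mentioned above).
 *
 * @requires !(vm.gc == "ConcMarkSweep" & vm.opt.ExplicitGCInvokesConcurrent == true)
 * @requires !(vm.gc == "Shenandoah" & vm.opt.ExplicitGCInvokesConcurrent != false)
 */
```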

- Sorting of macros was fixed, as was pointed out by Per
- Some stuff was added to SA, as suggested by Jini
- Rebased on most current jdk/jdk code

Webrevs:
http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/05/

I also need reviews from GC reviewers for the CSR:
https://bugs.openjdk.java.net/browse/JDK-8214349

I already got reviews for:
[x] shared-runtime (coleenp)
[x] shared-compiler (kvn)

I got reviews for shared-build, but an earlier version, so maybe makes
sense to look over this again. Erik J, Magnus?

I still need approvals for:
[ ] shared-build  (kvn, erikj, ihse, pliden)
[ ] shared-gc (pliden, kbarrett)
[ ] shared-serviceability (jgeorge, pliden)
[ ] shared-tests  (lmesnik, pliden)
[ ] shenandoah-gc
[ ] shenandoah-tests


Thanks for your patience and ongoing support!

Cheers,
Roman


> Hi all,
> 
> here comes round 4 of Shenandoah upstreaming review:
> 
> This includes fixes for the issues that Per brought up:
> - Verify and gracefully reject dangerous flag combinations that
> disable required barriers
> - Revisited @requires filters in tests
> - Trim unused code from Shenandoah's SA impl
> - Move ShenandoahGCTracer to gc/shenandoah
> - Fix ordering of GC names in various files
> - Rename UINT64_FORMAT_HEX_W to UINT64_FORMAT_X_W
> 
> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/04/
> 
> Thanks everybody for taking time to review this!
> Roman
> 
>> Hello all,
>>
>> Thanks so far for all the reviews and support!
>>
>> I forgot to update the 'round' yesterday. We are in round 3 now :-)
>> Also, I fixed the numbering of my webrevs to match the review-round. ;-)
>>
>> Things we've changed today:
>> - We moved shenandoah-specific code out of .ad files into our own .ad
>> files under gc/shenandoah (in shenandoah-gc), how cool is that? This
>> requires an addition in build machinery though, see
>> make/hotspot/gensrc/GensrcAdlc.gmk (in shared-build).
>> - Improved zero-disabling and build-code-simplification as suggested by
>> Magnus and Per
>> - Cleaned up some leftovers in C2
>> - Improved C2 loop opts code by introducing new APIs in
>> BarrierSetC2. See the new APIs in shared-gc under BarrierSetC2.hpp.
>> - I don't see where it makes sense to put INCLUDE_SHENANDOAHGC guards now.
>> - We would all very much prefer to keep ShenandoahXYZNode names, as
>> noted earlier. This stuff is Shenandoah-specific, so let's just call it
>> that.
>> - Rehashed Shenandoah tests (formatting, naming, directory layout, etc)
>> - Rebased on jdk-12+22
>>
>> - Question: let us know if you need separate RFE for the new BSC2 APIs?
>>
>> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/03/
>>
>> Thanks,
>> Roman
>>
>>> Alright, we fixed:
>>> - The minor issues that Kim reported in shared-gc
>>> - A lot of fixes in shared-tests according to Leonid's review
>>> - Disabled SA heapdumping similar to ZGC as Per suggested
>>>
>>> Some notes:
>>> Leonid:  test/hotspot/jtreg/gc/TestFullGCCount.java was actually
>>> correct. The @requires there means to exclude runs with both CMS and
>>> ExplicitGCInvokesConcurrent at the same time, because that would be
>>> (expectedly) failing. It can run CMS, default GC and any other GC just
>>> fine. Adding the same clause for Shenandoah means the same, and filters
>>> the combination (+UseShenandoahGC)+(+ExplicitGCInvokesConcurrent). I
>>> made the condition a bit clearer by avoiding triple-negation.
>>>
>>> See:
>>> http://mail.openjdk.java.net/pipermail/shenandoah-dev/2018-November/008457.html
>>>
>>> Per: Disabling the SA part for heapdumping makes 2 tests fail:
>>> - test/hotspot/jtreg/serviceability/sa/ClhsdbJhisto.java
>>> - test/hotspot/jtreg/serviceability/sa/TestHeapDumpForLargeArray.java
>>>
>>> we filter them for Shenandoah now. I'm wondering: how do you get past
>>> those with ZGC?
>>>
>>> See:
>>> http://mail.openjdk.java.net/pipermail/shenandoah-dev/2018-November/008466.html
>>>
>>> (Note to Leonid and tests reviewers: I'll add those related filters in
>>> next round).
>>>
>>> Vladimir: Roland integrated a bunch of changes to make loop* code look
>>> better. I can tell that we're not done with C2 yet. Can you look over
>>> the code and see what is ok, and especially what is not ok, so that we
>>> can focus our efforts on the relevant parts?
>>>
>>> Updated set of webrevs:
>>> 

Re: RFR (round 4), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-12-03 Thread Roman Kennke
Round 5 of Shenandoah review includes:
- A fix for the @requires tag in TestFullGCCount.java. It should be
correct now. We believe the CMS @requires was also not quite right and
fixed it the same.

It reads now: Don't run this test if:
 - Actual GC set by harness is CMS *and* ExplicitGCInvokesConcurrent is
true, as set by harness
 - Actual GC set by harness is Shenandoah *and*
ExplicitGCInvokesConcurrent is not set false by harness (it's true by
default in Shenandoah, so this needs to be double-inverted).

The @requires for CMS was wrong before (we think), because it would also
filter defaultGC + ExplicitGCInvokesConcurrent.

- Sorting of macros was fixed, as was pointed out by Per
- Some stuff was added to SA, as suggested by Jini
- Rebased on most current jdk/jdk code

Webrevs:
http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/05/

I also need reviews from GC reviewers for the CSR:
https://bugs.openjdk.java.net/browse/JDK-8214349

I already got reviews for:
[x] shared-runtime (coleenp)
[x] shared-compiler (kvn)

I got reviews for shared-build, but an earlier version, so maybe makes
sense to look over this again. Erik J, Magnus?

I still need approvals for:
[ ] shared-build  (kvn, erikj, ihse, pliden)
[ ] shared-gc (pliden, kbarrett)
[ ] shared-serviceability (jgeorge, pliden)
[ ] shared-tests  (lmesnik, pliden)
[ ] shenandoah-gc
[ ] shenandoah-tests


Thanks for your patience and ongoing support!

Cheers,
Roman

> Hi all,
> 
> here comes round 4 of Shenandoah upstreaming review:
> 
> This includes fixes for the issues that Per brought up:
> - Verify and gracefully reject dangerous flag combinations that
> disable required barriers
> - Revisited @requires filters in tests
> - Trim unused code from Shenandoah's SA impl
> - Move ShenandoahGCTracer to gc/shenandoah
> - Fix ordering of GC names in various files
> - Rename UINT64_FORMAT_HEX_W to UINT64_FORMAT_X_W
> 
> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/04/
> 
> Thanks everybody for taking time to review this!
> Roman
> 
>> Hello all,
>>
>> Thanks so far for all the reviews and support!
>>
>> I forgot to update the 'round' yesterday. We are in round 3 now :-)
>> Also, I fixed the numbering of my webrevs to match the review-round. ;-)
>>
>> Things we've changed today:
>> - We moved shenandoah-specific code out of .ad files into our own .ad
>> files under gc/shenandoah (in shenandoah-gc), how cool is that? This
>> requires an addition in build machinery though, see
>> make/hotspot/gensrc/GensrcAdlc.gmk (in shared-build).
>> - Improved zero-disabling and build-code-simplification as suggested by
>> Magnus and Per
>> - Cleaned up some leftovers in C2
>> - Improved C2 loop opts code by introducing new APIs in
>> BarrierSetC2. See the new APIs in shared-gc under BarrierSetC2.hpp.
>> - I don't see where it makes sense to put INCLUDE_SHENANDOAHGC guards now.
>> - We would all very much prefer to keep ShenandoahXYZNode names, as
>> noted earlier. This stuff is Shenandoah-specific, so let's just call it
>> that.
>> - Rehashed Shenandoah tests (formatting, naming, directory layout, etc)
>> - Rebased on jdk-12+22
>>
>> - Question: let us know if you need separate RFE for the new BSC2 APIs?
>>
>> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/03/
>>
>> Thanks,
>> Roman
>>
>>> Alright, we fixed:
>>> - The minor issues that Kim reported in shared-gc
>>> - A lot of fixes in shared-tests according to Leonid's review
>>> - Disabled SA heapdumping similar to ZGC as Per suggested
>>>
>>> Some notes:
>>> Leonid:  test/hotspot/jtreg/gc/TestFullGCCount.java was actually
>>> correct. The @requires there means to exclude runs with both CMS and
>>> ExplicitGCInvokesConcurrent at the same time, because that would be
>>> (expectedly) failing. It can run CMS, default GC and any other GC just
>>> fine. Adding the same clause for Shenandoah means the same, and filters
>>> the combination (+UseShenandoahGC)+(+ExplicitGCInvokesConcurrent). I
>>> made the condition a bit clearer by avoiding triple-negation.
>>>
>>> See:
>>> http://mail.openjdk.java.net/pipermail/shenandoah-dev/2018-November/008457.html
>>>
>>> Per: Disabling the SA part for heapdumping makes 2 tests fail:
>>> - test/hotspot/jtreg/serviceability/sa/ClhsdbJhisto.java
>>> - test/hotspot/jtreg/serviceability/sa/TestHeapDumpForLargeArray.java
>>>
>>> we filter them for Shenandoah now. I'm wondering: how do you get past
>>> those with ZGC?
>>>
>>> See:
>>> http://mail.openjdk.java.net/pipermail/shenandoah-dev/2018-November/008466.html
>>>
>>> (Note to Leonid and tests reviewers: I'll add those related filters in
>>> next round).
>>>
>>> Vladimir: Roland integrated a bunch of changes to make loop* code look
>>> better. I can tell that we're not done with C2 yet. Can you look over
>>> the code and see what is ok, and especially what is not ok, so that we
>>> can focus our efforts on the relevant parts?
>>>
>>> Updated set of webrevs:
>>> 

Re: RFR (round 4), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-12-03 Thread Roman Kennke
Hi Per,

Thanks for looking again.

I've fixed the ordering in shenandoah-dev:

http://cr.openjdk.java.net/~rkennke/fix-macro-order/webrev.00/

and it will appear in the next round of webrevs.

Thanks,
Roman


> Hi Roman,
> 
> On 11/30/18 10:00 PM, Roman Kennke wrote:
>> Hi all,
>>
>> here comes round 4 of Shenandoah upstreaming review:
>>
>> This includes fixes for the issues that Per brought up:
>> - Verify and gracefully reject dangerous flag combinations that
>> disable required barriers
>> - Revisited @requires filters in tests
>> - Trim unused code from Shenandoah's SA impl
>> - Move ShenandoahGCTracer to gc/shenandoah
>> - Fix ordering of GC names in various files
>> - Rename UINT64_FORMAT_HEX_W to UINT64_FORMAT_X_W
> 
> Thanks for fixing. Looks good to me, except it looks like you missed
> adjusting the macro order in the following files:
>  src/hotspot/share/gc/shared/gc_globals.hpp
>  src/hotspot/share/gc/shared/vmStructs_gc.hpp
> 
> cheers,
> Per
> 
>>
>> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/04/
>>
>> Thanks everybody for taking time to review this!
>> Roman
>>
>>> Hello all,
>>>
>>> Thanks so far for all the reviews and support!
>>>
>>> I forgot to update the 'round' yesterday. We are in round 3 now :-)
>>> Also, I fixed the numbering of my webrevs to match the review-round. ;-)
>>>
>>> Things we've changed today:
>>> - We moved shenandoah-specific code out of .ad files into our own .ad
>>> files under gc/shenandoah (in shenandoah-gc), how cool is that? This
>>> requires an addition in build machinery though, see
>>> make/hotspot/gensrc/GensrcAdlc.gmk (in shared-build).
>>> - Improved zero-disabling and build-code-simplification as suggested by
>>> Magnus and Per
>>> - Cleaned up some leftovers in C2
>>> - Improved C2 loop opts code by introducing new APIs in
>>> BarrierSetC2. See the new APIs in shared-gc under BarrierSetC2.hpp.
>>> - I don't see where it makes sense to put INCLUDE_SHENANDOAHGC guards
>>> now.
>>> - We would all very much prefer to keep ShenandoahXYZNode names, as
>>> noted earlier. This stuff is Shenandoah-specific, so let's just call it
>>> that.
>>> - Rehashed Shenandoah tests (formatting, naming, directory layout, etc)
>>> - Rebased on jdk-12+22
>>>
>>> - Question: let us know if you need separate RFE for the new BSC2 APIs?
>>>
>>> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/03/
>>>
>>> Thanks,
>>> Roman
>>>
>>>> Alright, we fixed:
>>>> - The minor issues that Kim reported in shared-gc
>>>> - A lot of fixes in shared-tests according to Leonid's review
>>>> - Disabled SA heapdumping similar to ZGC as Per suggested
>>>>
>>>> Some notes:
>>>> Leonid:  test/hotspot/jtreg/gc/TestFullGCCount.java was actually
>>>> correct. The @requires there means to exclude runs with both CMS and
>>>> ExplicitGCInvokesConcurrent at the same time, because that would be
>>>> (expectedly) failing. It can run CMS, default GC and any other GC just
>>>> fine. Adding the same clause for Shenandoah means the same, and filters
>>>> the combination (+UseShenandoahGC)+(+ExplicitGCInvokesConcurrent). I
>>>> made the condition a bit clearer by avoiding triple-negation.
>>>>
>>>> See:
>>>> http://mail.openjdk.java.net/pipermail/shenandoah-dev/2018-November/008457.html
>>>>
>>>>
>>>> Per: Disabling the SA part for heapdumping makes 2 tests fail:
>>>> - test/hotspot/jtreg/serviceability/sa/ClhsdbJhisto.java
>>>> - test/hotspot/jtreg/serviceability/sa/TestHeapDumpForLargeArray.java
>>>>
>>>> we filter them for Shenandoah now. I'm wondering: how do you get past
>>>> those with ZGC?
>>>>
>>>> See:
>>>> http://mail.openjdk.java.net/pipermail/shenandoah-dev/2018-November/008466.html
>>>>
>>>>
>>>> (Note to Leonid and tests reviewers: I'll add those related filters in
>>>> next round).
>>>>
>>>> Vladimir: Roland integrated a bunch of changes to make loop* code look
>>>> better. I can tell that we're not done with C2 yet. Can you look over
>>>> the code and see what is ok, and especially what is not ok, so that we
>>>> can focus our efforts on the relevant parts?
>>>>
>>>> Updated set of webrevs:
>>

Re: RFR (round 4), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-12-03 Thread Roman Kennke
Hi Jini,

Thanks for your suggestions. I've added this to Shenandoah's dev:

http://cr.openjdk.java.net/~rkennke/shenandoah-sa/webrev.01/

and it will show up in next round of webrevs.

Thanks,
Roman
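
For what it's worth, the GCName part of the change below can be sketched in
isolation. This is a trimmed, hypothetical stand-in for the real enum in
src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/gc/shared/GCName.java;
the class name and the shortened constant list here are illustrative, not
the actual file contents:

```java
// Simplified, illustrative sketch of the SA's GCName enum with Shenandoah
// added. The real enum must stay ordinal-compatible with the VM-side
// GCName enum, since the SA maps an int read out of the target VM to a
// constant by ordinal.
public class GCNameDemo {
    enum GCName {
        // trimmed: only a few constants are shown here
        ConcurrentMarkSweep,
        G1Full,
        Z,
        Shenandoah,   // the requested addition
        NA
    }

    public static void main(String[] args) {
        // Look up a constant the same way the SA would, by ordinal.
        int idFromVm = GCName.Shenandoah.ordinal();
        System.out.println(GCName.values()[idFromVm]);  // prints "Shenandoah"
    }
}
```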


> A few comments on the SA changes:
> 
> ==> Could you please add the following lines in
> src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/HSDB.java from line
> 1120 onwards to avoid the "[Unknown generation]" message with hsdb while
> displaying the Stack Memory for a mutator thread ?
> 
> else if (collHeap instanceof ShenandoahHeap) {
>    ShenandoahHeap heap = (ShenandoahHeap) collHeap;
>    anno = "ShenandoahHeap ";
>    bad = false;
> }
> 
> ==>
> src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/tools/HeapSummary.java
> 
> The printGCAlgorithm() method would need to be updated to read in the
> UseShenandoahGC flag to avoid the default "Mark Sweep Compact GC" being
> displayed with jhsdb jmap -heap.
> 
> ==>
> src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/gc/shared/GCName.java
> 
> Could you please add "Shenandoah" to the GCName enum list ?
> 
> ==>
> src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/gc/shared/GCCause.java
> 
> Could you please update the GCCause enum values to include these:
> 
>     _shenandoah_stop_vm,
>     _shenandoah_allocation_failure_evac,
>     _shenandoah_concurrent_gc,
>     _shenandoah_traversal_gc,
>     _shenandoah_upgrade_to_full_gc,
> 
> ==> share/classes/sun/jvm/hotspot/runtime/VMOps.java
> 
> It would be good to add 'ShenandoahOperation' to the VMOps enum (though
> it is probably not in sync now).
> 
> Thank you,
> Jini.
> 
> On 12/1/2018 2:30 AM, Roman Kennke wrote:
>> Hi all,
>>
>> here comes round 4 of Shenandoah upstreaming review:
>>
>> This includes fixes for the issues that Per brought up:
>> - Verify and gracefully reject dangerous flag combinations that
>> disable required barriers
>> - Revisited @requires filters in tests
>> - Trim unused code from Shenandoah's SA impl
>> - Move ShenandoahGCTracer to gc/shenandoah
>> - Fix ordering of GC names in various files
>> - Rename UINT64_FORMAT_HEX_W to UINT64_FORMAT_X_W
>>
>> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/04/
>>
>> Thanks everybody for taking time to review this!
>> Roman
>>
>>> Hello all,
>>>
>>> Thanks so far for all the reviews and support!
>>>
>>> I forgot to update the 'round' yesterday. We are in round 3 now :-)
>>> Also, I fixed the numbering of my webrevs to match the review-round. ;-)
>>>
>>> Things we've changed today:
>>> - We moved shenandoah-specific code out of .ad files into our own .ad
>>> files under gc/shenandoah (in shenandoah-gc), how cool is that? This
>>> requires an addition in build machinery though, see
>>> make/hotspot/gensrc/GensrcAdlc.gmk (in shared-build).
>>> - Improved zero-disabling and build-code-simplification as suggested by
>>> Magnus and Per
>>> - Cleaned up some leftovers in C2
>>> - Improved C2 loop opts code by introducing new APIs in
>>> BarrierSetC2. See the new APIs in shared-gc under BarrierSetC2.hpp.
>>> - I don't see where it makes sense to put INCLUDE_SHENANDOAHGC guards
>>> now.
>>> - We would all very much prefer to keep ShenandoahXYZNode names, as
>>> noted earlier. This stuff is Shenandoah-specific, so let's just call it
>>> that.
>>> - Rehashed Shenandoah tests (formatting, naming, directory layout, etc)
>>> - Rebased on jdk-12+22
>>>
>>> - Question: let us know if you need separate RFE for the new BSC2 APIs?
>>>
>>> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/03/
>>>
>>> Thanks,
>>> Roman
>>>
>>>> Alright, we fixed:
>>>> - The minor issues that Kim reported in shared-gc
>>>> - A lot of fixes in shared-tests according to Leonid's review
>>>> - Disabled SA heapdumping similar to ZGC as Per suggested
>>>>
>>>> Some notes:
>>>> Leonid:  test/hotspot/jtreg/gc/TestFullGCCount.java was actually
>>>> correct. The @requires there means to exclude runs with both CMS and
>>>> ExplicitGCInvokesConcurrent at the same time, because that would be
>>>> (expectedly) failing. It can run CMS, default GC and any other GC just
>>>> fine. Adding the same clause for Shenandoah means the same, and filters
>>>> the combination (+UseShenandoahGC)+(+ExplicitGCInvokesConcurrent). I
>>>> made the condition a

Re: RFR (round 4), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-12-02 Thread Roman Kennke
>> Hello Vladimir,
>>
>>> Why you need to include shenandoahBarrierSetC2.hpp in /opto/classes.cpp
>>> ? Why not include only shenandoahSupport.hpp when new nodes are defined?
>>
>> I discussed this with my team. shenandoahBarrierSetC2.hpp is supposed to
>> be the entry point for external C2 code. No C2 code is supposed to
>> include shenandoahSupport.hpp, this is just an implementation detail of
>> shenandoahBarrierSetC2.hpp. This seems symmetrical to how ZGC includes
>> zBarrierSetC2.hpp from external C2 code. Is that ok to leave as it is?
> 
> Yes, it is fine.

Ok great! Thanks!

Can I consider the shared-compiler part reviewed by you then?

Roman


> 
> Vladimir
> 
>>
>> Cheers,
>> Roman
>>
>>
>>>
>>> I think it is fine to not use #ifdef in loopopts.cpp when you check
>>> is_ShenandoahBarrier(). And you don't do that in other files.
>>>
>>> Code in opto/macro.cpp is ugly but it is the only place so we can live
>>> with it I think.
>>>
>>> Thanks,
>>> Vladimir
>>>
>>> On 11/30/18 1:00 PM, Roman Kennke wrote:
>>>> Hi all,
>>>>
>>>> here comes round 4 of Shenandoah upstreaming review:
>>>>
>>>> This includes fixes for the issues that Per brought up:
>>>> - Verify and gracefully reject dangerous flag combinations that
>>>> disable required barriers
>>>> - Revisited @requires filters in tests
>>>> - Trim unused code from Shenandoah's SA impl
>>>> - Move ShenandoahGCTracer to gc/shenandoah
>>>> - Fix ordering of GC names in various files
>>>> - Rename UINT64_FORMAT_HEX_W to UINT64_FORMAT_X_W
>>>>
>>>> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/04/
>>>>
>>>> Thanks everybody for taking time to review this!
>>>> Roman
>>>>
>>>>> Hello all,
>>>>>
>>>>> Thanks so far for all the reviews and support!
>>>>>
>>>>> I forgot to update the 'round' yesterday. We are in round 3 now :-)
>>>>> Also, I fixed the numbering of my webrevs to match the
>>>>> review-round. ;-)
>>>>>
>>>>> Things we've changed today:
>>>>> - We moved shenandoah-specific code out of .ad files into our own .ad
>>>>> files under gc/shenandoah (in shenandoah-gc), how cool is that? This
>>>>> requires an addition in build machinery though, see
>>>>> make/hotspot/gensrc/GensrcAdlc.gmk (in shared-build).
>>>>> - Improved zero-disabling and build-code-simplification as
>>>>> suggested by
>>>>> Magnus and Per
>>>>> - Cleaned up some leftovers in C2
>>>>> - Improved C2 loop opts code by introducing new APIs in
>>>>> BarrierSetC2. See the new APIs in shared-gc under BarrierSetC2.hpp.
>>>>> - I don't see where it makes sense to put INCLUDE_SHENANDOAHGC guards
>>>>> now.
>>>>> - We would all very much prefer to keep ShenandoahXYZNode names, as
>>>>> noted earlier. This stuff is Shenandoah-specific, so let's just
>>>>> call it
>>>>> that.
>>>>> - Rehashed Shenandoah tests (formatting, naming, directory layout,
>>>>> etc)
>>>>> - Rebased on jdk-12+22
>>>>>
>>>>> - Question: let us know if you need separate RFE for the new BSC2
>>>>> APIs?
>>>>>
>>>>> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/03/
>>>>>
>>>>> Thanks,
>>>>> Roman
>>>>>
>>>>>> Alright, we fixed:
>>>>>> - The minor issues that Kim reported in shared-gc
>>>>>> - A lot of fixes in shared-tests according to Leonid's review
>>>>>> - Disabled SA heapdumping similar to ZGC as Per suggested
>>>>>>
>>>>>> Some notes:
>>>>>> Leonid:  test/hotspot/jtreg/gc/TestFullGCCount.java was actually
>>>>>> correct. The @requires there means to exclude runs with both CMS and
>>>>>> ExplicitGCInvokesConcurrent at the same time, because that would be
>>>>>> (expectedly) failing. It can run CMS, default GC and any other GC
>>>>>> just
>>>>>> fine. Adding the same clause for Shenandoah means the same, and
>>>>>> filters
>>>>>> the combination (+UseShenandoahGC)+(+ExplicitGCInvok

Re: RFR (round 4), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-12-02 Thread Roman Kennke
Hello Vladimir,

> Why you need to include shenandoahBarrierSetC2.hpp in /opto/classes.cpp
> ? Why not include only shenandoahSupport.hpp when new nodes are defined?

I discussed this with my team. shenandoahBarrierSetC2.hpp is supposed to
be the entry point for external C2 code. No C2 code is supposed to
include shenandoahSupport.hpp, this is just an implementation detail of
shenandoahBarrierSetC2.hpp. This seems symmetrical to how ZGC includes
zBarrierSetC2.hpp from external C2 code. Is that ok to leave as it is?

Cheers,
Roman


> 
> I think it is fine to not use #ifdef in loopopts.cpp when you check
> is_ShenandoahBarrier(). And you don't do that in other files.
> 
> Code in opto/macro.cpp is ugly but it is the only place so we can live
> with it I think.
> 
> Thanks,
> Vladimir
> 
> On 11/30/18 1:00 PM, Roman Kennke wrote:
>> Hi all,
>>
>> here comes round 4 of Shenandoah upstreaming review:
>>
>> This includes fixes for the issues that Per brought up:
> - Verify and gracefully reject dangerous flag combinations that
> disable required barriers
>> - Revisited @requires filters in tests
>> - Trim unused code from Shenandoah's SA impl
>> - Move ShenandoahGCTracer to gc/shenandoah
>> - Fix ordering of GC names in various files
>> - Rename UINT64_FORMAT_HEX_W to UINT64_FORMAT_X_W
>>
>> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/04/
>>
>> Thanks everybody for taking time to review this!
>> Roman
>>
>>> Hello all,
>>>
>>> Thanks so far for all the reviews and support!
>>>
>>> I forgot to update the 'round' yesterday. We are in round 3 now :-)
>>> Also, I fixed the numbering of my webrevs to match the review-round. ;-)
>>>
>>> Things we've changed today:
>>> - We moved shenandoah-specific code out of .ad files into our own .ad
>>> files under gc/shenandoah (in shenandoah-gc), how cool is that? This
>>> requires an addition in build machinery though, see
>>> make/hotspot/gensrc/GensrcAdlc.gmk (in shared-build).
>>> - Improved zero-disabling and build-code-simplification as suggested by
>>> Magnus and Per
>>> - Cleaned up some leftovers in C2
>>> - Improved C2 loop opts code by introducing new APIs in
>>> BarrierSetC2. See the new APIs in shared-gc under BarrierSetC2.hpp.
>>> - I don't see where it makes sense to put INCLUDE_SHENANDOAHGC guards
>>> now.
>>> - We would all very much prefer to keep ShenandoahXYZNode names, as
>>> noted earlier. This stuff is Shenandoah-specific, so let's just call it
>>> that.
>>> - Rehashed Shenandoah tests (formatting, naming, directory layout, etc)
>>> - Rebased on jdk-12+22
>>>
>>> - Question: let us know if you need separate RFE for the new BSC2 APIs?
>>>
>>> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/03/
>>>
>>> Thanks,
>>> Roman
>>>
>>>> Alright, we fixed:
>>>> - The minor issues that Kim reported in shared-gc
>>>> - A lot of fixes in shared-tests according to Leonid's review
>>>> - Disabled SA heapdumping similar to ZGC as Per suggested
>>>>
>>>> Some notes:
>>>> Leonid:  test/hotspot/jtreg/gc/TestFullGCCount.java was actually
>>>> correct. The @requires there means to exclude runs with both CMS and
>>>> ExplicitGCInvokesConcurrent at the same time, because that would be
>>>> (expectedly) failing. It can run CMS, default GC and any other GC just
>>>> fine. Adding the same clause for Shenandoah means the same, and filters
>>>> the combination (+UseShenandoahGC)+(+ExplicitGCInvokesConcurrent). I
>>>> made the condition a bit clearer by avoiding triple-negation.
>>>>
>>>> See:
>>>> http://mail.openjdk.java.net/pipermail/shenandoah-dev/2018-November/008457.html
>>>>
>>>>
>>>> Per: Disabling the SA part for heapdumping makes 2 tests fail:
>>>> - test/hotspot/jtreg/serviceability/sa/ClhsdbJhisto.java
>>>> - test/hotspot/jtreg/serviceability/sa/TestHeapDumpForLargeArray.java
>>>>
>>>> we filter them for Shenandoah now. I'm wondering: how do you get past
>>>> those with ZGC?
>>>>
>>>> See:
>>>> http://mail.openjdk.java.net/pipermail/shenandoah-dev/2018-November/008466.html
>>>>
>>>>
>>>> (Note to Leonid and tests reviewers: I'll add those related filters in
>>>> next round).
>>>>
>>>> Vladimir: Roland integrated a bunch of change

Re: RFR (round 4), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-12-02 Thread Roman Kennke
Hi Vladimir,

> Much better.

Thanks!

> Why you need to include shenandoahBarrierSetC2.hpp in /opto/classes.cpp
> ? Why not include only shenandoahSupport.hpp when new nodes are defined?

For no particular reason, I guess. I'll fix it in next round.

> I think it is fine to not use #ifdef in loopopts.cpp when you check
> is_ShenandoahBarrier(). And you don't do that in other files.
> 
> Code in opto/macro.cpp is ugly but it is the only place so we can live
> with it I think.

Ok, thanks! We'll see if we can improve macro.cpp.

Thanks,
Roman



> Thanks,
> Vladimir
> 
> On 11/30/18 1:00 PM, Roman Kennke wrote:
>> Hi all,
>>
>> here comes round 4 of Shenandoah upstreaming review:
>>
>> This includes fixes for the issues that Per brought up:
>> - Verify and gracefully reject dangerous flag combinations that
>> disable required barriers
>> - Revisited @requires filters in tests
>> - Trim unused code from Shenandoah's SA impl
>> - Move ShenandoahGCTracer to gc/shenandoah
>> - Fix ordering of GC names in various files
>> - Rename UINT64_FORMAT_HEX_W to UINT64_FORMAT_X_W
>>
>> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/04/
>>
>> Thanks everybody for taking time to review this!
>> Roman
>>
>>> Hello all,
>>>
>>> Thanks so far for all the reviews and support!
>>>
>>> I forgot to update the 'round' yesterday. We are in round 3 now :-)
>>> Also, I fixed the numbering of my webrevs to match the review-round. ;-)
>>>
>>> Things we've changed today:
>>> - We moved shenandoah-specific code out of .ad files into our own .ad
>>> files under gc/shenandoah (in shenandoah-gc), how cool is that? This
>>> requires an addition in build machinery though, see
>>> make/hotspot/gensrc/GensrcAdlc.gmk (in shared-build).
>>> - Improved zero-disabling and build-code-simplification as suggested by
>>> Magnus and Per
>>> - Cleaned up some leftovers in C2
>>> - Improved C2 loop opts code by introducing additional APIs in
>>> BarrierSetC2. See the new APIs in shared-gc under BarrierSetC2.hpp.
>>> - I don't see where it makes sense to put INCLUDE_SHENANDOAHGC guards
>>> now.
>>> - We would all very much prefer to keep ShenandoahXYZNode names, as
>>> noted earlier. This stuff is Shenandoah-specific, so let's just call it
>>> that.
>>> - Rehashed Shenandoah tests (formatting, naming, directory layout, etc)
>>> - Rebased on jdk-12+22
>>>
>>> - Question: let us know if you need a separate RFE for the new BSC2 APIs?
>>>
>>> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/03/
>>>
>>> Thanks,
>>> Roman
>>>
>>>> Alright, we fixed:
>>>> - The minor issues that Kim reported in shared-gc
>>>> - A lot of fixes in shared-tests according to Leonid's review
>>>> - Disabled SA heapdumping similar to ZGC as Per suggested
>>>>
>>>> Some notes:
>>>> Leonid:  test/hotspot/jtreg/gc/TestFullGCCount.java was actually
>>>> correct. The @requires there means to exclude runs with both CMS and
>>>> ExplicitGCInvokesConcurrent at the same time, because that would be
>>>> (expectedly) failing. It can run CMS, default GC and any other GC just
>>>> fine. Adding the same clause for Shenandoah means the same, and filters
>>>> the combination (+UseShenandoahGC)+(+ExplicitGCInvokesConcurrent). I
>>>> made the condition a bit clearer by avoiding triple-negation.
>>>>
>>>> See:
>>>> http://mail.openjdk.java.net/pipermail/shenandoah-dev/2018-November/008457.html
>>>>
>>>>
>>>> Per: Disabling the SA part for heapdumping makes 2 tests fail:
>>>> - test/hotspot/jtreg/serviceability/sa/ClhsdbJhisto.java
>>>> - test/hotspot/jtreg/serviceability/sa/TestHeapDumpForLargeArray.java
>>>>
>>>> we filter them for Shenandoah now. I'm wondering: how do you get past
>>>> those with ZGC?
>>>>
>>>> See:
>>>> http://mail.openjdk.java.net/pipermail/shenandoah-dev/2018-November/008466.html
>>>>
>>>>
>>>> (Note to Leonid and tests reviewers: I'll add those related filters in
>>>> next round).
>>>>
>>>> Vladimir: Roland integrated a bunch of changes to make loop* code look
>>>> better. I can tell that we're not done with C2 yet. Can you look over
>>>> the code and see what is ok, and especially what is not ok, so that we
>>>>

RFR (round 4), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-11-30 Thread Roman Kennke
Hi all,

here comes round 4 of Shenandoah upstreaming review:

This includes fixes for the issues that Per brought up:
- Verify and gracefully reject dangerous flags combinations that
disables required barriers
- Revisited @requires filters in tests
- Trim unused code from Shenandoah's SA impl
- Move ShenandoahGCTracer to gc/shenandoah
- Fix ordering of GC names in various files
- Rename UINT64_FORMAT_HEX_W to UINT64_FORMAT_X_W

http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/04/

Thanks everybody for taking time to review this!
Roman

> Hello all,
> 
> Thanks so far for all the reviews and support!
> 
> I forgot to update the 'round' yesterday. We are in round 3 now :-)
> Also, I fixed the numbering of my webrevs to match the review-round. ;-)
> 
> Things we've changed today:
> - We moved shenandoah-specific code out of .ad files into our own .ad
> files under gc/shenandoah (in shenandoah-gc), how cool is that? This
> requires an addition in build machinery though, see
> make/hotspot/gensrc/GensrcAdlc.gmk (in shared-build).
> - Improved zero-disabling and build-code-simplification as suggested by
> Magnus and Per
> - Cleaned up some leftovers in C2
> - Improved C2 loop opts code by introducing additional APIs in
> BarrierSetC2. See the new APIs in shared-gc under BarrierSetC2.hpp.
> - I don't see where it makes sense to put INCLUDE_SHENANDOAHGC guards now.
> - We would all very much prefer to keep ShenandoahXYZNode names, as
> noted earlier. This stuff is Shenandoah-specific, so let's just call it
> that.
> - Rehashed Shenandoah tests (formatting, naming, directory layout, etc)
> - Rebased on jdk-12+22
> 
> - Question: let us know if you need a separate RFE for the new BSC2 APIs?
> 
> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/03/
> 
> Thanks,
> Roman
> 
>> Alright, we fixed:
>> - The minor issues that Kim reported in shared-gc
>> - A lot of fixes in shared-tests according to Leonid's review
>> - Disabled SA heapdumping similar to ZGC as Per suggested
>>
>> Some notes:
>> Leonid:  test/hotspot/jtreg/gc/TestFullGCCount.java was actually
>> correct. The @requires there means to exclude runs with both CMS and
>> ExplicitGCInvokesConcurrent at the same time, because that would be
>> (expectedly) failing. It can run CMS, default GC and any other GC just
>> fine. Adding the same clause for Shenandoah means the same, and filters
>> the combination (+UseShenandoahGC)+(+ExplicitGCInvokesConcurrent). I
>> made the condition a bit clearer by avoiding triple-negation.
>>
>> See:
>> http://mail.openjdk.java.net/pipermail/shenandoah-dev/2018-November/008457.html
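
In jtreg terms, the filter described above is a single conjunction-negating clause per collector; a hedged sketch (the actual test's wording may differ):

```java
/*
 * @test
 * Exclude only the unsupported combinations; any single GC alone is fine.
 * @requires !(vm.gc == "ConcMarkSweep" & vm.opt.ExplicitGCInvokesConcurrent == true)
 * @requires !(vm.gc == "Shenandoah"    & vm.opt.ExplicitGCInvokesConcurrent == true)
 */
```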
>>
>> Per: Disabling the SA part for heapdumping makes 2 tests fail:
>> - test/hotspot/jtreg/serviceability/sa/ClhsdbJhisto.java
>> - test/hotspot/jtreg/serviceability/sa/TestHeapDumpForLargeArray.java
>>
>> we filter them for Shenandoah now. I'm wondering: how do you get past
>> those with ZGC?
>>
>> See:
>> http://mail.openjdk.java.net/pipermail/shenandoah-dev/2018-November/008466.html
>>
>> (Note to Leonid and tests reviewers: I'll add those related filters in
>> next round).
>>
>> Vladimir: Roland integrated a bunch of changes to make loop* code look
>> better. I can tell that we're not done with C2 yet. Can you look over
>> the code and see what is ok, and especially what is not ok, so that we
>> can focus our efforts on the relevant parts?
>>
>> Updated set of webrevs:
>> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/01/
>>
>> Thanks,
>> Roman
>>
>>
>>> Hi,
>>>
>>> This is the first round of changes for including Shenandoah GC into
>>> mainline.
>>> I divided the review into parts that roughly correspond to the mailing lists
>>> that would normally review it, and I divided it into 'shared' code
>>> changes and
>>> 'shenandoah' code changes (actually, mostly additions). The intent is to
>>> eventually
>>> push them as a single 'combined' changeset, once reviewed.
>>>
>>> JEP:
>>>   https://openjdk.java.net/jeps/189
>>> Bug entry:
>>>
>>>  https://bugs.openjdk.java.net/browse/JDK-8214259
>>>
>>> Webrevs:
>>>   http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/
>>>
>>> For those who want to see the full change, have a look at the
>>> shenandoah-complete directory,
>>> it contains the full combined webrev. Alternatively, there is the file
>>> shenandoah-master.patch,
>>> which is what I intend to commit (and which should be equivalent to the
>>> 'shenandoah-complete' webrev).
>>>
>>> Sections to review (at this point) are the following:
>>>  *) shenandoah-gc
>>> 
>>>     - Actual Shenandoah implementation, almost completely residing in
>>> gc/shenandoah
>>>
>>>  *) shared-gc
>>> 
>>>     - This is mostly 

Re: RFR (round 3), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-11-30 Thread Roman Kennke
Yes, Aleksey just implemented it. It will appear in round 4 in a bit.

Thanks,
Roman


On 30 November 2018 at 20:25:35 CET, Vladimir Kozlov wrote:
>Can you do simple verification of flags and exit VM gracefully with
>error message?
>
>I think it should be fine. Crashing VM (unintentionally) is definitely
>not acceptable.
>
>Thanks,
>Vladimir
>
>On 11/30/18 1:55 AM, Roman Kennke wrote:
>>>> Hi Per,
>>>>
>>>> Thanks for taking time to look at this!
>>>>
>>>> We will take care of all the issues you mentioned.
>>>>
>>>> Regarding the flags, I think they are useful as they are now.
>Shenandoah
>>>> can be run in 'passive' mode, which doesn't involve any concurrent
>GC
>>>> (iow, it runs kind of like Parallel GC). In this mode we can
>selectively
>>>> turn on/off different kinds of barriers. This is useful:
>>>> - for debugging. if we see a crash and we suspect it's caused by a
>>>> certain type of barrier, we can turn on/off those barriers to
>narrow down
>>>> - for performance experiments: passive w/o any barriers give a good
>>>> baseline against which we can measure impact of types of barriers.
>>>>
>>>> Debugging may or may not be useful in develop mode (some bugs only
>show
>>>> up in product build), performance work quite definitely is not
>useful in
>>>> develop builds, and therefore I think it makes sense to keep them
>as
>>>> diagnostic, because that is what they are: diagnostic flags.
>>>>
>>>> Makes sense?
>>>
>>> I can see how these flags can be useful. But I think you might be in
>>> violating-spec-territory here, if you're allowing users to turn on
>flags
>>> that crash the VM. If they are safe to use in 'passive' mode, then I
>>> think there should be code in shenandoahArguments.cpp that
>>> rejects/corrects flag combinations that are unstable.
>> 
>> Let us see if we can do that.
>> 
>> If you think this violates the spec, then please point to the part
>that
>> says diagnostic (!!) options must pass the TCK and/or not crash the
>JVM?
>> I mean, it's called diagnostic for a reason. It seems counter-useful
>to
>> bury diagnostic flags that we would be using for diagnostics.
>> 
>>> I think the correct way to think about this is: We should pass the
>TCK,
>>> regardless of which features are enabled/disabled (unless the VM
>barfs
>>> at start-up because the chosen combination of flags are incompatible
>or
>>> isn't supported in the current environment, etc).
>>>
>>> CC:ing Mikael here, who might be able to shed some light on what's
>ok
>>> and not ok here.
>> 
>> Yeah, again, where is it said that diagnostic flags must even pass
>the
>> TCK? That is totally new to me.
>> 
>> Thanks,
>> Roman
>> 


Re: RFR (round 3), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-11-30 Thread Roman Kennke
>> Hi Per,
>>
>> Thanks for taking time to look at this!
>>
>> We will take care of all the issues you mentioned.
>>
>> Regarding the flags, I think they are useful as they are now. Shenandoah
>> can be run in 'passive' mode, which doesn't involve any concurrent GC
>> (iow, it runs kind of like Parallel GC). In this mode we can selectively
>> turn on/off different kinds of barriers. This is useful:
>> - for debugging. if we see a crash and we suspect it's caused by a
>> certain type of barrier, we can turn on/off those barriers to narrow down
>> - for performance experiments: passive w/o any barriers give a good
>> baseline against which we can measure impact of types of barriers.
>>
>> Debugging may or may not be useful in develop mode (some bugs only show
>> up in product build), performance work quite definitely is not useful in
>> develop builds, and therefore I think it makes sense to keep them as
>> diagnostic, because that is what they are: diagnostic flags.
>>
>> Makes sense?
> 
> I can see how these flags can be useful. But I think you might be in
> violating-spec-territory here, if you're allowing users to turn on flags
> that crash the VM. If they are safe to use in 'passive' mode, then I
> think there should be code in shenandoahArguments.cpp that
> rejects/corrects flag combinations that are unstable.

Let us see if we can do that.

If you think this violates the spec, then please point to the part that
says diagnostic (!!) options must pass the TCK and/or not crash the JVM?
I mean, it's called diagnostic for a reason. It seems counter-useful to
bury diagnostic flags that we would be using for diagnostics.

> I think the correct way to think about this is: We should pass the TCK,
> regardless of which features are enabled/disabled (unless the VM barfs
> at start-up because the chosen combination of flags are incompatible or
> isn't supported in the current environment, etc).
> 
> CC:ing Mikael here, who might be able to shed some light on what's ok
> and not ok here.

Yeah, again, where is it said that diagnostic flags must even pass the
TCK? That is totally new to me.

Thanks,
Roman



Re: RFR (round 3), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-11-30 Thread Roman Kennke
Hi Per,

Thanks for taking time to look at this!

We will take care of all the issues you mentioned.

Regarding the flags, I think they are useful as they are now. Shenandoah
can be run in 'passive' mode, which doesn't involve any concurrent GC
(iow, it runs kindof like Parallel GC). In this mode we can selectively
turn on/off different kinds of barriers. This is useful:
- for debugging. if we see a crash and we suspect it's caused by a
certain type of barrier, we can turn on/off those barriers to narrow down
- for performance experiments: passive w/o any barriers give a good
baseline against which we can measure impact of types of barriers.
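
As a rough illustration, 'passive' mode with selectively disabled barriers corresponds to command lines like the following (a hedged sketch: flag spellings vary across JDK versions, and these are diagnostic flags, so they require unlocking):

```
java -XX:+UnlockDiagnosticVMOptions -XX:+UseShenandoahGC \
     -XX:ShenandoahGCMode=passive \
     -XX:-ShenandoahSATBBarrier -XX:-ShenandoahCASBarrier -version
```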

Debugging may or may not be useful in develop mode (some bugs only show
up in product build), performance work quite definitely is not useful in
develop builds, and therefore I think it makes sense to keep them as
diagnostic, because that is what they are: diagnostic flags.

Makes sense?

Thanks,
Roman


> On 11/29/18 9:41 PM, Roman Kennke wrote:
> [...]
>> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/03/
> 
> Some comments (I didn't look at the compiler part):
> 
> 
> src/hotspot/share/gc/shared/barrierSetConfig.hpp
> src/hotspot/share/gc/shared/barrierSetConfig.inline.hpp
> ---
> Please insert the Shenandoah lines so that the lists remain
> alphabetically sorted on GC names.
> 
> 
> src/hotspot/share/gc/shared/gcCause.cpp
> ---
> Add the Shenandoah stuff after _dcmd_gc_run to match the order in the
> header file.
> 
> 
> src/hotspot/share/gc/shared/gcConfig.cpp
> 
> Please insert the Shenandoah lines so that the lists remain
> alphabetically sorted on GC names. Only the last change is in the right
> place now.
> 
> 
> src/hotspot/share/gc/shared/gcTrace.hpp
> ---
> Please move this new class to a gc/shenandoah/shenandoahTrace.hpp file
> (or something similar).
> 
> 
> src/hotspot/share/gc/shared/gc_globals.hpp
> src/hotspot/share/gc/shared/vmStructs_gc.hpp
> --
> Please insert the INCLUDE_SHENANDOAHGC/SHENANDOAHGC_ONLY stuff so that
> the lists remain alphabetically sorted on GC names.
> 
> 
> src/hotspot/share/utilities/globalDefinitions.hpp
> -
> I think you want to call the new macro UINT64_FORMAT_X_W, to maintain
> similarity to its sibling UINT64_FORMAT_X. Also please adjust the
> indentation/alignment so the list of macros remains nicely structured.
> 
> 
> src/hotspot/share/utilities/macros.hpp
> --
> Please insert the Shenandoah lines so that the lists remain
> alphabetically sorted on GC names.
> 
> 
> src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/gc/shenandoah/ShenandoahHeap.java
> 
> src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/gc/shenandoah/ShenandoahHeapRegion.java
> 
> ---
> 
> Since the heap walking is now disabled, it looks like there's quite a
> bit of code here that can be removed.
> 
> 
> test/hotspot/jtreg/*
> 
> As Aleksey and I discussed in a separate thread, using
> 'vm.gc.Shenandoah' does not mean the same thing as 'vm.gc ==
> "Shenandoah"', and you typically want to use 'vm.gc == "Shenandoah"' in
> most cases (I didn't check all), like in tests that aren't Shenandoah
> specific.
> 
> 
> src/hotspot/share/gc/shenandoah/shenandoah_globals.hpp
> --
> I don't think you want to expose the following flags. Setting any of
> them to false will crash the VM, right? If you really want to keep them
> I'd suggest you make them develop-flags.
> 
>  346   diagnostic(bool, ShenandoahSATBBarrier, true,  \
>  347   "Turn on/off SATB barriers in Shenandoah")  \
>  348  \
>  349   diagnostic(bool, ShenandoahKeepAliveBarrier, true,  \
>  350   "Turn on/off keep alive barriers in Shenandoah")  \
>  351  \
>  352   diagnostic(bool, ShenandoahWriteBarrier, true,  \
>  353   "Turn on/off write barriers in Shenandoah")  \
>  354  \
>  355   diagnostic(bool, ShenandoahReadBarrier, true,  \
>  356   "Turn on/off read barriers in Shenandoah")  \
>  357  \
>  358   diagnostic(bool, ShenandoahStoreValEnqueueBarrier, false,  \
>  359   "Turn on/off enqueuing of oops for storeval barriers")
>  \
>  360 

RFR (round 3), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-11-29 Thread Roman Kennke
Hello all,

Thanks so far for all the reviews and support!

I forgot to update the 'round' yesterday. We are in round 3 now :-)
Also, I fixed the numbering of my webrevs to match the review-round. ;-)

Things we've changed today:
- We moved shenandoah-specific code out of .ad files into our own .ad
files under gc/shenandoah (in shenandoah-gc), how cool is that? This
requires an addition in build machinery though, see
make/hotspot/gensrc/GensrcAdlc.gmk (in shared-build).
- Improved zero-disabling and build-code-simplification as suggested by
Magnus and Per
- Cleaned up some leftovers in C2
- Improved C2 loop opts code by introducing additional APIs in
BarrierSetC2. See the new APIs in shared-gc under BarrierSetC2.hpp.
- I don't see where it makes sense to put INCLUDE_SHENANDOAHGC guards now.
- We would all very much prefer to keep ShenandoahXYZNode names, as
noted earlier. This stuff is Shenandoah-specific, so let's just call it
that.
- Rehashed Shenandoah tests (formatting, naming, directory layout, etc)
- Rebased on jdk-12+22

- Question: let us know if you need a separate RFE for the new BSC2 APIs?

http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/03/

Thanks,
Roman

> Alright, we fixed:
> - The minor issues that Kim reported in shared-gc
> - A lot of fixes in shared-tests according to Leonid's review
> - Disabled SA heapdumping similar to ZGC as Per suggested
> 
> Some notes:
> Leonid:  test/hotspot/jtreg/gc/TestFullGCCount.java was actually
> correct. The @requires there means to exclude runs with both CMS and
> ExplicitGCInvokesConcurrent at the same time, because that would be
> (expectedly) failing. It can run CMS, default GC and any other GC just
> fine. Adding the same clause for Shenandoah means the same, and filters
> the combination (+UseShenandoahGC)+(+ExplicitGCInvokesConcurrent). I
> made the condition a bit clearer by avoiding triple-negation.
> 
> See:
> http://mail.openjdk.java.net/pipermail/shenandoah-dev/2018-November/008457.html
> 
> Per: Disabling the SA part for heapdumping makes 2 tests fail:
> - test/hotspot/jtreg/serviceability/sa/ClhsdbJhisto.java
> - test/hotspot/jtreg/serviceability/sa/TestHeapDumpForLargeArray.java
> 
> we filter them for Shenandoah now. I'm wondering: how do you get past
> those with ZGC?
> 
> See:
> http://mail.openjdk.java.net/pipermail/shenandoah-dev/2018-November/008466.html
> 
> (Note to Leonid and tests reviewers: I'll add those related filters in
> next round).
> 
> Vladimir: Roland integrated a bunch of changes to make loop* code look
> better. I can tell that we're not done with C2 yet. Can you look over
> the code and see what is ok, and especially what is not ok, so that we
> can focus our efforts on the relevant parts?
> 
> Updated set of webrevs:
> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/01/
> 
> Thanks,
> Roman
> 
> 
>> Hi,
>>
>> This is the first round of changes for including Shenandoah GC into
>> mainline.
>> I divided the review into parts that roughly correspond to the mailing lists
>> that would normally review it, and I divided it into 'shared' code
>> changes and
>> 'shenandoah' code changes (actually, mostly additions). The intent is to
>> eventually
>> push them as a single 'combined' changeset, once reviewed.
>>
>> JEP:
>>   https://openjdk.java.net/jeps/189
>> Bug entry:
>>
>>  https://bugs.openjdk.java.net/browse/JDK-8214259
>>
>> Webrevs:
>>   http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/
>>
>> For those who want to see the full change, have a look at the
>> shenandoah-complete directory,
>> it contains the full combined webrev. Alternatively, there is the file
>> shenandoah-master.patch,
>> which is what I intend to commit (and which should be equivalent to the
>> 'shenandoah-complete' webrev).
>>
>> Sections to review (at this point) are the following:
>>  *) shenandoah-gc
>> 
>>     - Actual Shenandoah implementation, almost completely residing in
>> gc/shenandoah
>>
>>  *) shared-gc
>> 
>>     - This is mostly boilerplate that is common to any GC
>>     - referenceProcessor.cpp has a little change to make one assert not
>> fail (next to CMS and G1)
>>     - taskqueue.hpp has some small adjustments to enable subclassing
>>
>>  *) shared-serviceability
>> 
>>     - The usual code to support another GC
>>
>>  *) shared-runtime
>> 
>>     - A number of friends declarations to allow Shenandoah iterators to
>> hook up with,
>>   e.g. ClassLoaderData, CodeCache, etc
>>     - Warning and disabling JFR LeakProfiler
>>     - 

Re: RFR: 8214476: ZGC: Build ZGC by default

2018-11-29 Thread Roman Kennke
Yes! Great! Patch looks good!

Roman


> At one point, it was decided that experimental VM features (such as ZGC)
> should be not be built by default. This policy was later
> changed/abandoned, so we should enable building of ZGC by default.
> 
> Bug: https://bugs.openjdk.java.net/browse/JDK-8214476
> Webrev: http://cr.openjdk.java.net/~pliden/8214476/webrev.0
> 
> /Per



Re: RFR (round 1), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-11-29 Thread Roman Kennke
Thanks, Per!

I will do this. Check out round 3 when it arrives. ;-)

Roman

On 2018-11-29 12:07, Per Liden wrote:
> Small comment on the build changes. Instead of checking for zero variant
> here:
> 
> +  # Only enable Shenandoah on supported arches
> +  AC_MSG_CHECKING([if shenandoah can be built])
> +  if HOTSPOT_CHECK_JVM_VARIANT(zero); then
> +    DISABLED_JVM_FEATURES="$DISABLED_JVM_FEATURES shenandoahgc"
> +    AC_MSG_RESULT([no, this JVM variant not supported])
> +  else
> 
> I think you should just add shenandoahgc to the existing check, here:
> 
>   # Disable unsupported GCs for Zero
>   if HOTSPOT_CHECK_JVM_VARIANT(zero); then
>     DISABLED_JVM_FEATURES="$DISABLED_JVM_FEATURES epsilongc g1gc zgc"
>   fi
> 
> cheers,
> Per
> 
> On 11/26/18 11:47 PM, Erik Joelsson wrote:
>> Build changes look ok to me.
>>
>> /Erik
>>
>> On 2018-11-26 13:39, Roman Kennke wrote:
>>> Hi,
>>>
>>> This is the first round of changes for including Shenandoah GC into
>>> mainline.
>>> I divided the review into parts that roughly correspond to the
>>> mailing lists
>>> that would normally review it, and I divided it into 'shared' code
>>> changes and
>>> 'shenandoah' code changes (actually, mostly additions). The intent is to
>>> eventually
>>> push them as a single 'combined' changeset, once reviewed.
>>>
>>> JEP:
>>>    https://openjdk.java.net/jeps/189
>>> Bug entry:
>>>
>>>   https://bugs.openjdk.java.net/browse/JDK-8214259
>>>
>>> Webrevs:
>>>    http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/
>>>
>>> For those who want to see the full change, have a look at the
>>> shenandoah-complete
>>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shenandoah-complete/>
>>>
>>> directory,
>>> it contains the full combined webrev. Alternatively, there is the file
>>> shenandoah-master.patch
>>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shenandoah-master.patch>,
>>>
>>> which is what I intend to commit (and which should be equivalent to the
>>> 'shenandoah-complete' webrev).
>>>
>>> Sections to review (at this point) are the following:
>>>   *) shenandoah-gc
>>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shenandoah-gc/>
>>>
>>>  - Actual Shenandoah implementation, almost completely residing in
>>> gc/shenandoah
>>>
>>>   *) shared-gc
>>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shared-gc/>
>>>  - This is mostly boilerplate that is common to any GC
>>>  - referenceProcessor.cpp has a little change to make one assert not
>>> fail (next to CMS and G1)
>>>  - taskqueue.hpp has some small adjustments to enable subclassing
>>>
>>>   *) shared-serviceability
>>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shared-serviceability/>
>>>
>>>  - The usual code to support another GC
>>>
>>>   *) shared-runtime
>>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shared-runtime/>
>>>
>>>  - A number of friends declarations to allow Shenandoah iterators to
>>> hook up with,
>>>    e.g. ClassLoaderData, CodeCache, etc
>>>  - Warning and disabling JFR LeakProfiler
>>>  - fieldDescriptor.hpp added is_stable() accessor, for use in
>>> Shenandoah C2 optimizations
>>>  - Locks initialization in mutexLocker.cpp as usual
>>>  - VM operations defines for Shenandoah's VM ops
>>>  - globalDefinitions.hpp added UINT64_FORMAT_HEX_W for use in
>>> Shenandoah's logging
>>>  - The usual macros in macro.hpp
>>>
>>>   *) shared-build
>>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shared-build/>
>>>
>>>  - Add shenandoah feature, enabled by default, as agreed with
>>> Vladimir K. beforehand
>>>  - Some flags for shenandoah-enabled compilation to get
>>> SUPPORT_BARRIER_ON_PRIMITIVES
>>>    and SUPPORT_NOT_TO_SPACE_INVARIANT which is required for
>>> Shenandoah's barriers
>>>  - --param inline-unit-growth=1000 settings for 2 shenandoah source
>>> files, which is
>>>    useful to get the whole marking loop inlined (observed
>>> significant
>>> regression if we
>>>    don't)
>>>
>

Re: RFR (round 1), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-11-29 Thread Roman Kennke
Thanks, Magnus,

this makes sense. I'll incorporate that simplification.

Thanks,
Roman

Am 29.11.18 um 11:53 schrieb Magnus Ihse Bursie:
> On 2018-11-26 23:47, Erik Joelsson wrote:
> 
>> Build changes look ok to me.
> I agree, but I'd like to point out a (possible) style improvement. In
> hotspot.m4,
> 
> if test "x$OPENJDK_TARGET_CPU" = "xx86_64" || test
> "x$OPENJDK_TARGET_CPU" = "xaarch64" || test "x$OPENJDK_TARGET_CPU" ==
> "xx86"; then
> 
> can be more succinctly expressed as
> 
> if test "x$OPENJDK_TARGET_CPU_ARCH" = "xx86" || test
> "x$OPENJDK_TARGET_CPU" = "xaarch64" ; then
> 
> 
> But it's a matter of preference, if you do not want to change. Otherwise
> it looks good!
> 
> /Magnus
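
The two spellings are equivalent because OPENJDK_TARGET_CPU_ARCH folds x86_64 and x86 into one arch value, so a single arch test replaces two cpu tests. A standalone sketch of the condensed condition (values hard-wired here purely for illustration):

```shell
# Hedged sketch: mimic the hotspot.m4 condition outside configure.
# On a 64-bit x86 build, configure would derive CPU_ARCH=x86 from
# CPU=x86_64 (assumed values, not computed here).
OPENJDK_TARGET_CPU=x86_64
OPENJDK_TARGET_CPU_ARCH=x86

if test "x$OPENJDK_TARGET_CPU_ARCH" = "xx86" || \
   test "x$OPENJDK_TARGET_CPU" = "xaarch64"; then
  echo "shenandoah can be built"
fi
```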
>>
>> /Erik
>>
>> On 2018-11-26 13:39, Roman Kennke wrote:
>>> Hi,
>>>
>>> This is the first round of changes for including Shenandoah GC into
>>> mainline.
>>> I divided the review into parts that roughly correspond to the
>>> mailing lists
>>> that would normally review it, and I divided it into 'shared' code
>>> changes and
>>> 'shenandoah' code changes (actually, mostly additions). The intend is to
>>> eventually
>>> push them as single 'combined' changeset, once reviewed.
>>>
>>> JEP:
>>>    https://openjdk.java.net/jeps/189
>>> Bug entry:
>>>
>>>   https://bugs.openjdk.java.net/browse/JDK-8214259
>>>
>>> Webrevs:
>>>    http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/
>>>
>>> For those who want to see the full change, have a look at the
>>> shenandoah-complete
>>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shenandoah-complete/>
>>>
>>> directory,
>>> it contains the full combined webrev. Alternatively, there is the file
>>> shenandoah-master.patch
>>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shenandoah-master.patch>,
>>>
>>> which is what I intend to commit (and which should be equivalent to the
>>> 'shenandoah-complete' webrev).
>>>
>>> Sections to review (at this point) are the following:
>>>   *) shenandoah-gc
>>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shenandoah-gc/>
>>>
>>>  - Actual Shenandoah implementation, almost completely residing in
>>> gc/shenandoah
>>>
>>>   *) shared-gc
>>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shared-gc/>
>>>  - This is mostly boilerplate that is common to any GC
>>>  - referenceProcessor.cpp has a little change to make one assert not
>>> fail (next to CMS and G1)
>>>  - taskqueue.hpp has some small adjustments to enable subclassing
>>>
>>>   *) shared-serviceability
>>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shared-serviceability/>
>>>
>>>  - The usual code to support another GC
>>>
>>>   *) shared-runtime
>>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shared-runtime/>
>>>
>>>  - A number of friends declarations to allow Shenandoah iterators to
>>> hook up with,
>>>    e.g. ClassLoaderData, CodeCache, etc
>>>  - Warning and disabling JFR LeakProfiler
>>>  - fieldDescriptor.hpp added is_stable() accessor, for use in
>>> Shenandoah C2 optimizations
>>>  - Locks initialization in mutexLocker.cpp as usual
>>>  - VM operations defines for Shenandoah's VM ops
>>>  - globalDefinitions.hpp added UINT64_FORMAT_HEX_W for use in
>>> Shenandoah's logging
>>>  - The usual macros in macro.hpp
>>>
>>>   *) shared-build
>>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shared-build/>
>>>
>>>  - Add shenandoah feature, enabled by default, as agreed with
>>> Vladimir K. beforehand
>>>  - Some flags for shenandoah-enabled compilation to get
>>> SUPPORT_BARRIER_ON_PRIMITIVES
>>>    and SUPPORT_NOT_TO_SPACE_INVARIANT which is required for
>>> Shenandoah's barriers
>>>  - --param inline-unit-growth=1000 settings for 2 shenandoah source
>>> files, which is
>>>    useful to get the whole marking loop inlined (observed
>>> significant
>>> regression if we
>>>    don't)
>>>
>>>   *) shared-tests
>>> <http://cr.op

Re: RFR (round 1), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-11-29 Thread Roman Kennke
Hi Jini,

> The SA tests which iterate over the heap (ClhsdbJhisto.java,
> TestHeapDumpFor*.java) fail if ZGC is passed as an option to these also
> (Am planning on fixing these). It might be better to avoid invoking
> those tests for Shenandoah now with an @requires tag.

Thanks for clarifying. Yeah, we already have a fix in shenandoah/jdk and
it will appear in next round of reviews.

Thanks,
Roman

> Thank you,
> Jini
> 
> On 11/29/2018 3:59 AM, Roman Kennke wrote:
>> Alright, we fixed:
>> - The minor issues that Kim reported in shared-gc
>> - A lot of fixes in shared-tests according to Leonid's review
>> - Disabled SA heapdumping similar to ZGC as Per suggested
>>
>> Some notes:
>> Leonid:  test/hotspot/jtreg/gc/TestFullGCCount.java was actually
>> correct. The @requires there means to exclude runs with both CMS and
>> ExplicitGCInvokesConcurrent at the same time, because that would be
>> (expectedly) failing. It can run CMS, default GC and any other GC just
>> fine. Adding the same clause for Shenandoah means the same, and filters
>> the combination (+UseShenandoahGC)+(+ExplicitGCInvokesConcurrent). I
>> made the condition a bit clearer by avoiding triple-negation.
>>
>> See:
>> http://mail.openjdk.java.net/pipermail/shenandoah-dev/2018-November/008457.html
>>
>>
>> Per: Disabling the SA part for heapdumping makes 2 tests fail:
>> - test/hotspot/jtreg/serviceability/sa/ClhsdbJhisto.java
>> - test/hotspot/jtreg/serviceability/sa/TestHeapDumpForLargeArray.java
>>
>> we filter them for Shenandoah now. I'm wondering: how do you get past
>> those with ZGC?
>>
>> See:
>> http://mail.openjdk.java.net/pipermail/shenandoah-dev/2018-November/008466.html
>>
>>
>> (Note to Leonid and tests reviewers: I'll add those related filters in
>> next round).
>>
>> Vladimir: Roland integrated a bunch of changes to make loop* code look
>> better. I can tell that we're not done with C2 yet. Can you look over
>> the code and see what is ok, and especially what is not ok, so that we
>> can focus our efforts on the relevant parts?
>>
>> Updated set of webrevs:
>> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/01/
>>
>> Thanks,
>> Roman
>>
>>
>>> Hi,
>>>
>>> This is the first round of changes for including Shenandoah GC into
>>> mainline.
>>> I divided the review into parts that roughly correspond to the
>>> mailing lists
>>> that would normally review it, and I divided it into 'shared' code
>>> changes and
>>> 'shenandoah' code changes (actually, mostly additions). The intent is to
>>> eventually
>>> push them as single 'combined' changeset, once reviewed.
>>>
>>> JEP:
>>>    https://openjdk.java.net/jeps/189
>>> Bug entry:
>>>
>>>   https://bugs.openjdk.java.net/browse/JDK-8214259
>>>
>>> Webrevs:
>>>    http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/
>>>
>>> For those who want to see the full change, have a look at the
>>> shenandoah-complete
>>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shenandoah-complete/>
>>>
>>> directory,
>>> it contains the full combined webrev. Alternatively, there is the file
>>> shenandoah-master.patch
>>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shenandoah-master.patch>,
>>>
>>> which is what I intend to commit (and which should be equivalent to the
>>> 'shenandoah-complete' webrev).
>>>
>>> Sections to review (at this point) are the following:
>>>   *) shenandoah-gc
>>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shenandoah-gc/>
>>>
>>>  - Actual Shenandoah implementation, almost completely residing in
>>> gc/shenandoah
>>>
>>>   *) shared-gc
>>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shared-gc/>
>>>  - This is mostly boilerplate that is common to any GC
>>>  - referenceProcessor.cpp has a little change to make one assert not
>>> fail (next to CMS and G1)
>>>  - taskqueue.hpp has some small adjustments to enable subclassing
>>>
>>>   *) shared-serviceability
>>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shared-serviceability/>
>>>
>>>  - The usual code to support another GC
>>>
>>>   *) shared-runtime
>>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shar

Re: RFR (round 1), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-11-29 Thread Roman Kennke
Hi Vladimir,

thanks for reviewing!

> .ad files. I am thinking may be we should put new code in separate
> _shenandoah.ad files and merge them only if Shenandoah is included
> in built [1]. I don't like #ifdefs - you still generate this mach nodes
> even without Shenandoah.

We're looking into improving the situation.

> debug_final_field_at() and debug_stable_field_at() are not used - could
> be excluded from changes.

Those are used from shenandoahBarrierSetC2.cpp and related code in in
the shenandoah-gc or shenandoah-complete webrevs.

> loopnode.cpp Sometimes you add #if INCLUDE_SHENANDOAHGC and sometimes
> don't. Be consistent. I don't know why you need #ifdef if such nodes
> should be generated without Shenandoah

We'll look into improving that.

> compile.cpp - no need to include shenandoahBarrierSetC2.hpp

Right. Will fix it.

> type.hpp - cast_to_nonconst() are not used.

As above, this is used from shenandoahBarrierSetC2.cpp

> phasetype.hpp, subnode.cpp - don't add files with only cleanup changes.

right. Those are leftovers and I'll remove them.

> I wish loopnode.cpp and loopopts.cpp changes were more clean.

We'll see what we can do. Some of those could possibly be turned into
some sort of GC interface, but 1. it would be very Shenandoah-specific
and 2. require significant duplication of code.

> I like the idea where you expose only one ShenandoahBarrier node to C2
> and hide all subnodes behind it. I really don't want to see a lot of
> Shenandoah*Node new classes in C2 shared code.

I understand that. We have ShenandoahBarrierNode superclass and a bunch
of ShenandoahReadBarrier, WriteBarrier and some other nodes 'behind'
this. The crux is probably all the various ShenandoahCompareAndSwap etc
variants, which need to be subclasses of their
non-Shenandoah-counterparts, and I don't see how to change that.

Thanks for reviewing and helping!
Roman

> 
> [1]
> http://hg.openjdk.java.net/jdk/jdk/file/157c1130b46e/make/hotspot/gensrc/GensrcAdlc.gmk#l122
> 
> 
> On 11/28/18 2:29 PM, Roman Kennke wrote:
>> Alright, we fixed:
>> - The minor issues that Kim reported in shared-gc
>> - A lot of fixes in shared-tests according to Leonid's review
>> - Disabled SA heapdumping similar to ZGC as Per suggested
>>
>> Some notes:
>> Leonid:  test/hotspot/jtreg/gc/TestFullGCCount.java was actually
>> correct. The @requires there means to exclude runs with both CMS and
>> ExplicitGCInvokesConcurrent at the same time, because that would be
>> (expectedly) failing. It can run CMS, default GC and any other GC just
>> fine. Adding the same clause for Shenandoah means the same, and filters
>> the combination (+UseShenandoahGC)+(+ExplicitGCInvokesConcurrent). I
>> made the condition a bit clearer by avoiding triple-negation.
>>
>> See:
>> http://mail.openjdk.java.net/pipermail/shenandoah-dev/2018-November/008457.html
>>
>>
>> Per: Disabling the SA part for heapdumping makes 2 tests fail:
>> - test/hotspot/jtreg/serviceability/sa/ClhsdbJhisto.java
>> - test/hotspot/jtreg/serviceability/sa/TestHeapDumpForLargeArray.java
>>
>> we filter them for Shenandoah now. I'm wondering: how do you get past
>> those with ZGC?
>>
>> See:
>> http://mail.openjdk.java.net/pipermail/shenandoah-dev/2018-November/008466.html
>>
>>
>> (Note to Leonid and tests reviewers: I'll add those related filters in
>> next round).
>>
>> Vladimir: Roland integrated a bunch of changes to make loop* code look
>> better. I can tell that we're not done with C2 yet. Can you look over
>> the code and see what is ok, and especially what is not ok, so that we
>> can focus our efforts on the relevant parts?
>>
>> Updated set of webrevs:
>> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/01/
>>
>> Thanks,
>> Roman
>>
>>
>>> Hi,
>>>
>>> This is the first round of changes for including Shenandoah GC into
>>> mainline.
>>> I divided the review into parts that roughly correspond to the
>>> mailing lists
>>> that would normally review it, and I divided it into 'shared' code
>>> changes and
>>> 'shenandoah' code changes (actually, mostly additions). The intent is to
>>> eventually
>>> push them as single 'combined' changeset, once reviewed.
>>>
>>> JEP:
>>>    https://openjdk.java.net/jeps/189
>>> Bug entry:
>>>
>>>   https://bugs.openjdk.java.net/browse/JDK-8214259
>>>
>>> Webrevs:
>>>    http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/
>>>
>>> For those who want to see the full change, have a look at the
>>> shenandoah-c

Re: RFR (round 1), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-11-28 Thread Roman Kennke
Alright, we fixed:
- The minor issues that Kim reported in shared-gc
- A lot of fixes in shared-tests according to Leonid's review
- Disabled SA heapdumping similar to ZGC as Per suggested

Some notes:
Leonid:  test/hotspot/jtreg/gc/TestFullGCCount.java was actually
correct. The @requires there means to exclude runs with both CMS and
ExplicitGCInvokesConcurrent at the same time, because that would be
(expectedly) failing. It can run CMS, default GC and any other GC just
fine. Adding the same clause for Shenandoah means the same, and filters
the combination (+UseShenandoahGC)+(+ExplicitGCInvokesConcurrent). I
made the condition a bit clearer by avoiding triple-negation.
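The rewrite above is just De Morgan's law; a quick check confirms the flattened guard is the same boolean function as the triple-negated one. The names below are illustrative stand-ins, not the actual jtreg property names:

```java
// Verifies that the two @requires guard styles discussed above are equivalent.
public class RequiresEquivalence {
    static boolean tripleNegated(boolean cms, boolean explicitConcurrent) {
        // style being replaced: "not (CMS and ExplicitGCInvokesConcurrent)"
        return !(cms && explicitConcurrent);
    }

    static boolean flattened(boolean cms, boolean explicitConcurrent) {
        // clearer equivalent: "not CMS, or not ExplicitGCInvokesConcurrent"
        return !cms || !explicitConcurrent;
    }

    public static void main(String[] args) {
        boolean[] values = {false, true};
        for (boolean cms : values) {
            for (boolean egic : values) {
                if (tripleNegated(cms, egic) != flattened(cms, egic)) {
                    throw new AssertionError("guards differ");
                }
            }
        }
        System.out.println("guards are equivalent");
    }
}
```

Only the one combination (CMS together with ExplicitGCInvokesConcurrent) is filtered out; every other configuration still runs.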

See:
http://mail.openjdk.java.net/pipermail/shenandoah-dev/2018-November/008457.html

Per: Disabling the SA part for heapdumping makes 2 tests fail:
- test/hotspot/jtreg/serviceability/sa/ClhsdbJhisto.java
- test/hotspot/jtreg/serviceability/sa/TestHeapDumpForLargeArray.java

we filter them for Shenandoah now. I'm wondering: how do you get past
those with ZGC?

See:
http://mail.openjdk.java.net/pipermail/shenandoah-dev/2018-November/008466.html

(Note to Leonid and tests reviewers: I'll add those related filters in
next round).

Vladimir: Roland integrated a bunch of changes to make loop* code look
better. I can tell that we're not done with C2 yet. Can you look over
the code and see what is ok, and especially what is not ok, so that we
can focus our efforts on the relevant parts?

Updated set of webrevs:
http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/01/

Thanks,
Roman


> Hi,
> 
> This is the first round of changes for including Shenandoah GC into
> mainline.
> I divided the review into parts that roughly correspond to the mailing lists
> that would normally review it, and I divided it into 'shared' code
> changes and
> 'shenandoah' code changes (actually, mostly additions). The intent is to
> eventually
> push them as single 'combined' changeset, once reviewed.
> 
> JEP:
>   https://openjdk.java.net/jeps/189
> Bug entry:
> 
>  https://bugs.openjdk.java.net/browse/JDK-8214259
> 
> Webrevs:
>   http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/
> 
> For those who want to see the full change, have a look at the
> shenandoah-complete
> 
> directory,
> it contains the full combined webrev. Alternatively, there is the file
> shenandoah-master.patch
> ,
> which is what I intend to commit (and which should be equivalent to the
> 'shenandoah-complete' webrev).
> 
> Sections to review (at this point) are the following:
>  *) shenandoah-gc
> 
>     - Actual Shenandoah implementation, almost completely residing in
> gc/shenandoah
> 
>  *) shared-gc
> 
>     - This is mostly boilerplate that is common to any GC
>     - referenceProcessor.cpp has a little change to make one assert not
> fail (next to CMS and G1)
>     - taskqueue.hpp has some small adjustments to enable subclassing
> 
>  *) shared-serviceability
> 
>     - The usual code to support another GC
> 
>  *) shared-runtime
> 
>     - A number of friends declarations to allow Shenandoah iterators to
> hook up with,
>   e.g. ClassLoaderData, CodeCache, etc
>     - Warning and disabling JFR LeakProfiler
>     - fieldDescriptor.hpp added is_stable() accessor, for use in
> Shenandoah C2 optimizations
>     - Locks initialization in mutexLocker.cpp as usual
>     - VM operations defines for Shenandoah's VM ops
>     - globalDefinitions.hpp added UINT64_FORMAT_HEX_W for use in
> Shenandoah's logging
>     - The usual macros in macro.hpp
> 
>  *) shared-build
> 
>     - Add shenandoah feature, enabled by default, as agreed with
> Vladimir K. beforehand
>     - Some flags for shenandoah-enabled compilation to get
> SUPPORT_BARRIER_ON_PRIMITIVES
>   and SUPPORT_NOT_TO_SPACE_INVARIANT which is required for
> Shenandoah's barriers
>     - --param inline-unit-growth=1000 settings for 2 shenandoah source
> files, which is
>   useful to get the whole marking loop inlined (observed significant
> regression if we
>   don't)
> 
>  *) shared-tests
> 
>     - Test infrastructure to support Shenandoah
>     - Shenandoah test groups
>     - Exclude Shenandoah in various tests that can be run with selected GC
>     - Enable/add configure for Shenandoah for tests that make sense to
> run with it
> 
>  *) shenandoah-tests
> 

Re: RFR (round 1), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-11-28 Thread Roman Kennke
s/TestAlignmentToUseLargePages.java.sdiff.html
> 
> I think 
> 56  * @requires vm.gc=="null" & !vm.graal.enabled
> should be something like @requires vm.gc.Shenandoah & !vm.graal.enabled

Yes. Fixed them.

> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shared-tests/test/hotspot/jtreg/gc/arguments/TestUseCompressedOopsErgo.java.sdiff.html
> 
> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shared-tests/test/hotspot/jtreg/gc/class_unloading/TestClassUnloadingDisabled.java.sdiff.html
> 
> The same for  62  * @requires vm.gc=="null" & !vm.graal.enabled
> and
> 72  * @requires vm.gc=="null" & !vm.graal.enabled
> 

Fixed them too.

> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shared-tests/test/hotspot/jtreg/gc/ergonomics/TestDynamicNumberOfGCThreads.java.sdiff.html
> 
> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shared-tests/test/hotspot/jtreg/gc/logging/TestGCId.java.sdiff.html
> 
> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shared-tests/test/hotspot/jtreg/gc/metaspace/TestMetaspacePerfCounters.java.sdiff.html
> 
> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shared-tests/test/hotspot/jtreg/runtime/CompressedOops/UseCompressedOops.java.sdiff.html
> 
> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shared-tests/test/hotspot/jtreg/runtime/MemberName/MemberNameLeak.java.sdiff.html
> 
> http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shared-tests/test/hotspot/jtreg/gc/metaspace/TestMetaspacePerfCounters.java.sdiff.html
> Also these tests are going to be run with all GC and fails if Shenandoah
> is not supported.

Right. I fixed those and a few others I have found. Those which drive
Shenandoah at runtime now have a runtime check
(GC.Shenandoah.isSupported() which uses WB). Others have split test
sections.
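The runtime-check pattern described above can be sketched as follows. This is a minimal illustration, not the real test-library code: shenandoahSupported() is a hypothetical stand-in for the WhiteBox-backed GC.Shenandoah.isSupported() query, hard-wired here to assume a build without the shenandoahgc feature:

```java
// Sketch: probe at runtime whether the GC is built into the VM and skip
// the Shenandoah-specific part of the test otherwise.
public class RuntimeGcCheck {
    // Hypothetical stand-in for the WhiteBox-backed query; a real test
    // would ask the running VM instead of returning a constant.
    static boolean shenandoahSupported() {
        return false; // assume a build without the shenandoahgc feature
    }

    public static void main(String[] args) {
        if (!shenandoahSupported()) {
            System.out.println("Shenandoah not supported; skipping GC-specific part");
            return;
        }
        System.out.println("running Shenandoah-specific part");
    }
}
```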

I'll upload round 2 of the review changesets, containing the fixes, in a bit.

Thanks a *LOT* for detailed review.

Roman


> Leonid
> 
> [1] http://openjdk.java.net/jtreg/tag-spec.html
> 
> On 11/26/18 1:39 PM, Roman Kennke wrote:
>>
>> Hi,
>>
>> This is the first round of changes for including Shenandoah GC into
>> mainline.
>> I divided the review into parts that roughly correspond to the mailing
>> lists
>> that would normally review it, and I divided it into 'shared' code
>> changes and
>> 'shenandoah' code changes (actually, mostly additions). The intent is
>> to eventually
>> push them as single 'combined' changeset, once reviewed.
>>
>> JEP:
>>   https://openjdk.java.net/jeps/189
>> Bug entry:
>>
>>  https://bugs.openjdk.java.net/browse/JDK-8214259
>>
>> Webrevs:
>>   http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/
>>
>> For those who want to see the full change, have a look at the
>> shenandoah-complete
>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shenandoah-complete/>
>> directory,
>> it contains the full combined webrev. Alternatively, there is the file
>> shenandoah-master.patch
>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shenandoah-master.patch>,
>> which is what I intend to commit (and which should be equivalent to
>> the 'shenandoah-complete' webrev).
>>
>> Sections to review (at this point) are the following:
>>  *) shenandoah-gc
>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shenandoah-gc/>
>>     - Actual Shenandoah implementation, almost completely residing in
>> gc/shenandoah
>>
>>  *) shared-gc
>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shared-gc/>
>>     - This is mostly boilerplate that is common to any GC
>>     - referenceProcessor.cpp has a little change to make one assert
>> not fail (next to CMS and G1)
>>     - taskqueue.hpp has some small adjustments to enable subclassing
>>
>>  *) shared-serviceability
>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shared-serviceability/>
>>     - The usual code to support another GC
>>
>>  *) shared-runtime
>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shared-runtime/>
>>     - A number of friends declarations to allow Shenandoah iterators
>> to hook up with,
>>   e.g. ClassLoaderData, CodeCache, etc
>>     - Warning and disabling JFR LeakProfiler
>>     - fieldDescriptor.hpp added is_stable() accessor, for use in
>> Shenandoah C2 optimizations
>>     - Locks initialization in mutexLocker.cpp as usual
>>     - VM operations defines for Shenandoah's VM ops
>>     - globalDefinitions.hpp added UINT64_

Re: RFR (round 1), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-11-28 Thread Roman Kennke
>> Hi Per,
>>
>>> Hi Roman,
>>>
>>> On 11/26/18 10:39 PM, Roman Kennke wrote:
>>> [...]
>>>>    *) shared-serviceability
>>>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shared-serviceability/>
>>>>
>>>>
>>>>   - The usual code to support another GC
>>>
>>> Just had a quick look at the SA part. I was thinking you'd have the same
>>> problem as ZGC here, with regards to parsing the heap and potentially
>>> reading garbage when you step on a Klass* which had been unloaded?
>>
>> Possible. I am myself not very familiar with SA. I guess it depends on
>> how SA does it: if it iterates objects via CH::object_iterate() (e.g.
>> same entry point as, e.g., heap-dumping code), then we should be fine.
>> We're kicking off a traversal rather than straight scan there. If
>> however SA somehow makes a raw scan itself, then we'd have the problem
>> you describe.
> 
> The SA does a raw scan itself, which is the root of the problem.
> ObejctHeap.iterateLiveRegions() will locate the first object in a region
> by doing
> 
>   OopHandle handle = bottom.addOffsetToAsOopHandle(0);
> 
> and to get the next object it does
> 
>   handle.addOffsetToAsOopHandle(obj.getObjectSize());
> 
> and you'll crash. So I'm afraid this will not work for Shenandoah either.

Alright. I'll 'disable' it like you did with ZGC then. Thanks for
pointing it out.

I'm wondering: this would crash with G1 and
+ClassUnloadingWithConcurrentMark too, then?

Roman




Re: RFR (round 1), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-11-28 Thread Roman Kennke
Hi Per,

> Hi Roman,
> 
> On 11/26/18 10:39 PM, Roman Kennke wrote:
> [...]
>>   *) shared-serviceability
>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shared-serviceability/>
>>
>>  - The usual code to support another GC
> 
> Just had a quick look at the SA part. I was thinking you'd have the same
> problem as ZGC here, with regards to parsing the heap and potentially
> reading garbage when you step on a Klass* which had been unloaded?

Possible. I am myself not very familiar with SA. I guess it depends on
how SA does it: if it iterates objects via CH::object_iterate() (e.g.
same entry point as, e.g., heap-dumping code), then we should be fine.
We're kicking off a traversal rather than straight scan there. If
however SA somehow makes a raw scan itself, then we'd have the problem
you describe.

Roman




Re: RFR (round 1), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-11-28 Thread Roman Kennke
Hi Kim,

>> On Nov 26, 2018, at 4:39 PM, Roman Kennke  wrote:
>>  *) shared-gc
>> - This is mostly boilerplate that is common to any GC
>> - referenceProcessor.cpp has a little change to make one assert not fail 
>> (next to CMS and G1)
>> - taskqueue.hpp has some small adjustments to enable subclassing
> 
> I've reviewed the shared-gc webrev.  I only found a few trivial nits.
> 
> --
>  
> src/hotspot/share/gc/shared/gcName.hpp
>   42   NA,
>   43   Shenandoah,
> 
> Putting Shenandoah after NA seems odd.
> 
> --
> src/hotspot/share/gc/shared/gcConfig.cpp
>   63  CMSGC_ONLY(static CMSArguments  cmsArguments;)
> ...
>   69 SHENANDOAHGC_ONLY(static ShenandoahArguments shenandoahArguments;)
> 
> Code alignment should probably be updated.
> 
> Similarly here:
>   73 static const SupportedGC SupportedGCs[] = {
> ...
>   79 SHENANDOAHGC_ONLY_ARG(SupportedGC(UseShenandoahGC,
> CollectedHeap::Shenandoah, shenandoahArguments, "shenandoah gc"))
> 
> and here:
>   97 void GCConfig::fail_if_unsupported_gc_is_selected() {
> ...
>  105   NOT_SHENANDOAHGC(FAIL_IF_SELECTED(UseShenandoahGC,  true));
> 
> --
> src/hotspot/share/gc/shared/collectedHeap.hpp
>   92 //   ShenandoahHeap
> 
> Moving it after ParallelScavengeHeap would give a better order.
> 
> --
> src/hotspot/share/gc/shared/barrierSetConfig.hpp
>   36   SHENANDOAHGC_ONLY(f(Shenandoah))
> 
> Why is this "Shenandoah" while all the others are "BarrierSet"?
> 

Don't know. I'll change it to be consistent. I have proposed a patch
with all changes to shenandoah-dev:

http://mail.openjdk.java.net/pipermail/shenandoah-dev/2018-November/008420.html

Will post an updated upstreaming webrev soon.

Thanks for reviewing,
 Roman



Re: RFR (round 1), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-11-28 Thread Roman Kennke
Hi Kim,

>> On Nov 27, 2018, at 4:46 AM, Roman Kennke  wrote:
>>
>> Hi Kim,
>>
>>>> Sections to review (at this point) are the following:
>>>>
>>>> *) shared-gc
>>>>- This is mostly boilerplate that is common to any GC
>>>>- referenceProcessor.cpp has a little change to make one assert not 
>>>> fail (next to CMS and G1)
>>>
>>> Ick.  But I don’t have a better suggestion that doesn’t involve adding a 
>>> new API
>>> to CollectedHeap for use by this assertion, which seems a bit excessive if 
>>> there
>>> are no other uses.
>>
>> Yeah.
>> I guess we could add a config _discovery_is_concurrent or similar in RP,
>> and check that. Or maybe one of _discovery_is_mt or _discovery_is_atomic
>> already covers that? I couldn't immediately tell/100% understand their
>> semantics. Seems worthy to look at after this?
> 
> It might be equivalent to _discovery_is_atomic; I don’t remember the exact
> semantics right now.  I think it’s unrelated to _discovery_is_mt.
> 
> Yes, looking at this later is fine.  Please file an RFE.

Ok, done:
https://bugs.openjdk.java.net/browse/JDK-8214441

>>>>- taskqueue.hpp has some small adjustments to enable subclassing
>>>
>>> Why this change instead of JDK-8204947?  As the description from that RFE 
>>> says:
>>> "The ShenandoahTaskTerminator from the Shenandoah project is a much better 
>>> implementation of a task terminator.”
>>
>> Yeah, see Zhengyu's comment. Let's ignore those changes for this review
>> (for now); expect our impl to be ported to taskqueue.hpp/cpp soon.
> 
> I’m okay with that plan.  Maybe add a comment in JDK-8204947 about this?

https://bugs.openjdk.java.net/browse/JDK-8204947?focusedCommentId=14226307&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14226307

Thanks for reviewing,
 Roman



Re: RFR (round 1), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-11-27 Thread Roman Kennke
>>> You need to check if shenandoahgc is disabled first - check
>>> DISABLED_JVM_FEATURES list (see code for jvmci).
>>
>> Why? If Shenandoah is disabled, it will be on this list, and therefore
>> not be built (see JvmFeatures.gmk).
> 
> Did you test with --with-jvm-variants=-shenandoahgc to make sure it is
> disabled?

Test: Built the Shenandoah-patched tree with
--with-jvm-features=-shenandoahgc. Then:

java -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC --version
Error occurred during initialization of VM
Option -XX:+UseShenandoahGC not supported

Also, we regularly build minimal and zero, and those disable Shenandoah
too, that's how we know this works reliably.

Thanks,
Roman



Re: RFR (round 1), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-11-27 Thread Roman Kennke
Hi Vladimir,

> On 11/27/18 1:37 AM, Roman Kennke wrote:
>> Hi Vladimir,
>>
>>> You need to check if shenandoahgc is disabled first - check
>>> DISABLED_JVM_FEATURES list (see code for jvmci).
>>
>> Why? If Shenandoah is disabled, it will be on this list, and therefore
>> not be built (see JvmFeatures.gmk).
> 
> Did you test with --with-jvm-variants=-shenandoahgc to make sure it is
> disabled?

I tested with --with-jvm-features=-shenandoahgc (and will do so again
right after finishing this email, just to be sure). (Note: shenandoahgc
is a feature, not a variant).

> Maybe I needed to check jvmci explicitly because I need it early to check
> whether to enable the Graal build. So it is different from your case.

Probably.

>>> Do you support 32-bit x86? You have OPENJDK_TARGET_CPU == xx86 check.
>>> Do you support all x86 OSs?
>>
>> We can build with 32bit, but it will not run. It seems indeed curious to
>> enable it. It's probably only there to allow 32bit builds with
>> Shenandoah enabled, just to verify that we have all the relevant LP64
>> checks in code in place. I will check it with my colleagues.
> 
> What about OS? Do you support Windows, MacOS? I did not see any OS
> specific changes. So may be it is alright.

Shenandoah is OS agnostic and we compile + run it on Windows
successfully. Adopters tell us it's fine on MacOS too.

>>
>>> Why you set VM CFLAGS when shenandoahgc is not enabled? It is in
>>> JvmFeatures.gmk.
>>
>> This *disables* and excludes Shenandoah if not enabled.
>>
>> +ifneq ($(call check-jvm-feature, shenandoahgc), true)
>> +  JVM_CFLAGS_FEATURES += -DINCLUDE_SHENANDOAHGC=0
>> +  JVM_EXCLUDE_PATTERNS += gc/shenandoah
>>
>> pretty much the same pattern as zgc and other features.
>>
>> and then we add some flags when Shenandoah is enabled:
>>
>> +else
>> +  JVM_CFLAGS_FEATURES += -DSUPPORT_BARRIER_ON_PRIMITIVES
>> -DSUPPORT_NOT_TO_SPACE_INVARIANT
>> +endif
>>
>> ... which are required for building with Shenandoah enabled.
> 
> My bad. Somehow I thought it was reverse. Too much coffee at the
> morning. :(

Yeah, maybe it's written the wrong way around (double-negation), but
that's how it is for other similar blocks too.

> Looks good.

Thanks!

>>> I looked on C2 changes. It has INCLUDE_SHENANDOAHGC, checks and new gc
>>> specific nodes. This looks intrusive. I hope you clean it up.
>>
>> There are a few places with INCLUDE_SHENANDOAHGC checks where it seemed
>> excessive to add a whole API just for a tiny, very GC specific check. We
>> still do have intrusive changes in loop*, which we are working on to
>> resolve. We declare+define all our GC specific nodes in
>> shenandoahBarrierSetC2.hpp and related Shenandoah-specific files. The
>> only things in shared code is the usual declarations in classes.hpp/cpp
>> and node.hpp to get is_ShenandoahBarrier() an similar queries. This
>> seems in-line with what other GCs do (look for e.g. LoadBarrier). Please
>> be specific which changes in opto you'd like to see resolved (and don't
>> look at loop* source files at this point ;-) ).
> 
> Is it possible to reuse LoadBarrier by adding a GC-specific flag to it?
> I am not against adding new nodes if really needed. My concern is about
> using GC names in node names.

That would be very weird. Shenandoah's barrier is *not* a load barrier.
GC names in node names make sense (to me) because the generated code
is GC specific, and the names are used in .ad files to match. I guess we
could give them more generic names (ReadBarrier, WriteBarrier,
GCCompareAndSwap, ...) and then match them in .ad and call via
BarrierSetAssembler to generate GC specific code. But it seems odd and
hard to read+understand to me.

> Yes, I am fine with very few INCLUDE_SHENANDOAHGC.

Ok. Let's see how it looks after Roland's latest changes are in.

> Thanks,
> Vladimir

Thanks for reviewing!

Roman

>> Thanks for reviewing!
>> Roman
>>
>>> Thanks,
>>> Vladimir
>>>
>>> On 11/26/18 2:47 PM, Erik Joelsson wrote:
>>>> Build changes look ok to me.
>>>>
>>>> /Erik
>>>>
>>>> On 2018-11-26 13:39, Roman Kennke wrote:
>>>>> Hi,
>>>>>
>>>>> This is the first round of changes for including Shenandoah GC into
>>>>> mainline.
>>>>> I divided the review into parts that roughly correspond to the
>>>>> mailing lists
>>>>> that would normally review it, and I divided it into 'shared' code
>>>>> changes and
>>>>> 'shenandoah' code changes (actually,

Re: RFR (round 1), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-11-27 Thread Roman Kennke
> Do you support 32-bit x86? You have OPENJDK_TARGET_CPU == xx86 check.
> 
> Do you support all x86 OSs?

We enable this because we can actually run on such hardware. We fall
back to 'passive' mode though, which means Shenandoah acts more or less
like Parallel GC, and doesn't involve any barriers at all, and doesn't
do any concurrent processing. This has been useful for footprint
experiments.

Roman



> Why you set VM CFLAGS when shenandoahgc is not enabled? It is in
> JvmFeatures.gmk.
> 
> I looked on C2 changes. It has INCLUDE_SHENANDOAHGC, checks and new gc
> specific nodes. This looks intrusive. I hope you clean it up.
> 
> Thanks,
> Vladimir
> 
> On 11/26/18 2:47 PM, Erik Joelsson wrote:
>> Build changes look ok to me.
>>
>> /Erik
>>
>> On 2018-11-26 13:39, Roman Kennke wrote:
>>> Hi,
>>>
>>> This is the first round of changes for including Shenandoah GC into
>>> mainline.
>>> I divided the review into parts that roughly correspond to the
>>> mailing lists
>>> that would normally review it, and I divided it into 'shared' code
>>> changes and
>>> 'shenandoah' code changes (actually, mostly additions). The intent is to
>>> eventually
>>> push them as single 'combined' changeset, once reviewed.
>>>
>>> JEP:
>>>    https://openjdk.java.net/jeps/189
>>> Bug entry:
>>>
>>>   https://bugs.openjdk.java.net/browse/JDK-8214259
>>>
>>> Webrevs:
>>>    http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/
>>>
>>> For those who want to see the full change, have a look at the
>>> shenandoah-complete
>>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shenandoah-complete/>
>>>
>>> directory,
>>> it contains the full combined webrev. Alternatively, there is the file
>>> shenandoah-master.patch
>>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shenandoah-master.patch>,
>>>
>>> which is what I intend to commit (and which should be equivalent to the
>>> 'shenandoah-complete' webrev).
>>>
>>> Sections to review (at this point) are the following:
>>>   *) shenandoah-gc
>>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shenandoah-gc/>
>>>
>>>  - Actual Shenandoah implementation, almost completely residing in
>>> gc/shenandoah
>>>
>>>   *) shared-gc
>>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shared-gc/>
>>>  - This is mostly boilerplate that is common to any GC
>>>  - referenceProcessor.cpp has a little change to make one assert not
>>> fail (next to CMS and G1)
>>>  - taskqueue.hpp has some small adjustments to enable subclassing
>>>
>>>   *) shared-serviceability
>>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shared-serviceability/>
>>>
>>>  - The usual code to support another GC
>>>
>>>   *) shared-runtime
>>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shared-runtime/>
>>>
>>>  - A number of friends declarations to allow Shenandoah iterators to
>>> hook up with,
>>>    e.g. ClassLoaderData, CodeCache, etc
>>>  - Warning and disabling JFR LeakProfiler
>>>  - fieldDescriptor.hpp added is_stable() accessor, for use in
>>> Shenandoah C2 optimizations
>>>  - Locks initialization in mutexLocker.cpp as usual
>>>  - VM operations defines for Shenandoah's VM ops
>>>  - globalDefinitions.hpp added UINT64_FORMAT_HEX_W for use in
>>> Shenandoah's logging
>>>  - The usual macros in macro.hpp
>>>
>>>   *) shared-build
>>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shared-build/>
>>>
>>>  - Add shenandoah feature, enabled by default, as agreed with
>>> Vladimir K. beforehand
>>>  - Some flags for shenandoah-enabled compilation to get
>>> SUPPORT_BARRIER_ON_PRIMITIVES
>>>    and SUPPORT_NOT_TO_SPACE_INVARIANT which is required for
>>> Shenandoah's barriers
>>>  - --param inline-unit-growth=1000 settings for 2 shenandoah source
>>> files, which is
>>>    useful to get the whole marking loop inlined (observed
>>> significant
>>> regression if we
>>>    don't)
>>>
>>>   *) shared-tests
>>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shared-tests/>
>>>
>&g

Re: RFR (round 1), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-11-27 Thread Roman Kennke
Thanks for reviewing, Erik!

Roman


> Build changes look ok to me.
> 
> /Erik
> 
> On 2018-11-26 13:39, Roman Kennke wrote:
>> Hi,
>>
>> This is the first round of changes for including Shenandoah GC into
>> mainline.
>> I divided the review into parts that roughly correspond to the mailing
>> lists
>> that would normally review it, and I divided it into 'shared' code
>> changes and
>> 'shenandoah' code changes (actually, mostly additions). The intend is to
>> eventually
>> push them as single 'combined' changeset, once reviewed.
>>
>> JEP:
>>    https://openjdk.java.net/jeps/189
>> Bug entry:
>>
>>   https://bugs.openjdk.java.net/browse/JDK-8214259
>>
>> Webrevs:
>>    http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/
>>
>> For those who want to see the full change, have a look at the
>> shenandoah-complete
>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shenandoah-complete/>
>>
>> directory,
>> it contains the full combined webrev. Alternatively, there is the file
>> shenandoah-master.patch
>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shenandoah-master.patch>,
>>
>> which is what I intend to commit (and which should be equivalent to the
>> 'shenandoah-complete' webrev).
>>
>> Sections to review (at this point) are the following:
>>   *) shenandoah-gc
>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shenandoah-gc/>
>>
>>  - Actual Shenandoah implementation, almost completely residing in
>> gc/shenandoah
>>
>>   *) shared-gc
>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shared-gc/>
>>  - This is mostly boilerplate that is common to any GC
>>  - referenceProcessor.cpp has a little change to make one assert not
>> fail (next to CMS and G1)
>>  - taskqueue.hpp has some small adjustments to enable subclassing
>>
>>   *) shared-serviceability
>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shared-serviceability/>
>>
>>  - The usual code to support another GC
>>
>>   *) shared-runtime
>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shared-runtime/>
>>
>>  - A number of friends declarations to allow Shenandoah iterators to
>> hook up with,
>>    e.g. ClassLoaderData, CodeCache, etc
>>  - Warning and disabling JFR LeakProfiler
>>  - fieldDescriptor.hpp added is_stable() accessor, for use in
>> Shenandoah C2 optimizations
>>  - Locks initialization in mutexLocker.cpp as usual
>>  - VM operations defines for Shenandoah's VM ops
>>  - globalDefinitions.hpp added UINT64_FORMAT_HEX_W for use in
>> Shenandoah's logging
>>  - The usual macros in macro.hpp
>>
>>   *) shared-build
>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shared-build/>
>>
>>  - Add shenandoah feature, enabled by default, as agreed with
>> Vladimir K. beforehand
>>  - Some flags for shenandoah-enabled compilation to get
>> SUPPORT_BARRIER_ON_PRIMITIVES
>>    and SUPPORT_NOT_TO_SPACE_INVARIANT which is required for
>> Shenandoah's barriers
>>  - --param inline-unit-growth=1000 settings for 2 shenandoah source
>> files, which is
>>    useful to get the whole marking loop inlined (observed significant
>> regression if we
>>    don't)
>>
>>   *) shared-tests
>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shared-tests/>
>>
>>  - Test infrastructure to support Shenandoah
>>  - Shenandoah test groups
>>  - Exclude Shenandoah in various tests that can be run with
>> selected GC
>>  - Enable/add configure for Shenandoah for tests that make sense to
>> run with it
>>
>>   *) shenandoah-tests
>> <http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/shenandoah-tests/>
>>
>>  - Shenandoah specific tests, most reside in gc/shenandoah
>> subdirectory
>>  - A couple of tests configurations have been added, e.g.
>> TestGCBasherWithShenandoah.java
>>
>> I intentionally left out shared-compiler for now, because we have some
>> work left to do
>> there, but if you click around you'll find the patch anyway, in case you
>> want to take
>> a peek at it.
>>
>> We have regular builds on:
>>    - {Linux} x {x86_64, x86_32, armhf, aarch64, ppc64el, s390x}
>>    - {Windows} x {x86_64},
>>    - {MacOS X} x {x86_64}
>

RFR (round 1), JDK-8214259: Implementation: JEP 189: Shenandoah: A Low-Pause Garbage Collector

2018-11-26 Thread Roman Kennke
Hi,

This is the first round of changes for including Shenandoah GC into
mainline.
I divided the review into parts that roughly correspond to the mailing lists
that would normally review it, and I divided it into 'shared' code
changes and
'shenandoah' code changes (actually, mostly additions). The intent is to
eventually
push them as single 'combined' changeset, once reviewed.

JEP:
  https://openjdk.java.net/jeps/189
Bug entry:

 https://bugs.openjdk.java.net/browse/JDK-8214259

Webrevs:
  http://cr.openjdk.java.net/~rkennke/shenandoah-upstream/00/

For those who want to see the full change, have a look at the
shenandoah-complete

directory,
it contains the full combined webrev. Alternatively, there is the file
shenandoah-master.patch
,
which is what I intend to commit (and which should be equivalent to the
'shenandoah-complete' webrev).

Sections to review (at this point) are the following:
 *) shenandoah-gc

    - Actual Shenandoah implementation, almost completely residing in
gc/shenandoah

 *) shared-gc

    - This is mostly boilerplate that is common to any GC
    - referenceProcessor.cpp has a little change to make one assert not
fail (next to CMS and G1)
    - taskqueue.hpp has some small adjustments to enable subclassing

 *) shared-serviceability

    - The usual code to support another GC

 *) shared-runtime

    - A number of friends declarations to allow Shenandoah iterators to
hook up with,
  e.g. ClassLoaderData, CodeCache, etc
    - Warning and disabling JFR LeakProfiler
    - fieldDescriptor.hpp added is_stable() accessor, for use in
Shenandoah C2 optimizations
    - Locks initialization in mutexLocker.cpp as usual
    - VM operations defines for Shenandoah's VM ops
    - globalDefinitions.hpp added UINT64_FORMAT_HEX_W for use in
Shenandoah's logging
    - The usual macros in macro.hpp

 *) shared-build

    - Add shenandoah feature, enabled by default, as agreed with
Vladimir K. beforehand
    - Some flags for shenandoah-enabled compilation to get
SUPPORT_BARRIER_ON_PRIMITIVES
  and SUPPORT_NOT_TO_SPACE_INVARIANT which is required for
Shenandoah's barriers
    - --param inline-unit-growth=1000 settings for 2 shenandoah source
files, which is
  useful to get the whole marking loop inlined (observed significant
regression if we
  don't)

 *) shared-tests

    - Test infrastructure to support Shenandoah
    - Shenandoah test groups
    - Exclude Shenandoah in various tests that can be run with selected GC
    - Enable/add configure for Shenandoah for tests that make sense to
run with it

 *) shenandoah-tests

    - Shenandoah specific tests, most reside in gc/shenandoah subdirectory
    - A couple of tests configurations have been added, e.g.
TestGCBasherWithShenandoah.java

I intentionally left out shared-compiler for now, because we have some
work left to do
there, but if you click around you'll find the patch anyway, in case you
want to take
a peek at it.

We have regular builds on:
  - {Linux} x {x86_64, x86_32, armhf, aarch64, ppc64el, s390x}
  - {Windows} x {x86_64},
  - {MacOS X} x {x86_64}

This also routinely passes:
  - the new Shenandoah tests
  - jcstress with/without aggressive Shenandoah verification
  - specjvm2008 with/without aggressive Shenandoah verification


I'd like to thank my colleagues at Red Hat: Christine Flood, she deserves
the credit for being the original inventor of Shenandoah, Aleksey
Shipilëv, Roland Westrelin & Zhengyu Gu for their countless
contributions, everybody else in Red Hat's OpenJDK team for testing,
advice and support, my colleagues in Oracle's GC, runtime and compiler
teams for tirelessly helping with and reviewing all the GC interface and
related changes, and of course the many early adopters for reporting
bugs and success stories and feature requests: we wouldn't be here
without any of you!

Best regards,
Roman



Re: Stop using precompiled headers for Linux?

2018-11-01 Thread Roman Kennke



> On 2018-11-01 11:54, Aleksey Shipilev wrote:
>> On 11/01/2018 11:43 AM, Magnus Ihse Bursie wrote:
>>> But then again, it might just signal that the list of headers
>>> included in the PCH is no longer
>>> optimal. If it used to be the case that ~100 header files were so
>>> interlinked, that changing any of
>>> them caused recompilation of all files that included it and all the
>>> other 100 header files on the
>>> PCH list, then there was a net gain for using PCH and no "punishment".
>>>
>>> But nowadays this list might be far too large. Perhaps there's just
>>> only a core set of say 20 header
>>> files that are universally (or almost universally) included, and
>>> that's all that should be in the
>>> PCH list then. My guess is that, with a proper selection of header
>>> files, PCH will still be a benefit.
>> I agree. This smells like inefficient PCH list. We can improve that,
>> but I think that would be a
>> lower priority, given the abundance of CPU power we use to compile
>> Hotspot. In my mind, the decisive
>> factor for disabling PCH is to keep proper includes at all times,
>> without masking it with PCH. Half
>> of the trivial bugs I submit against hotspot are #include differences
>> that show up in CI that builds
>> without PCH.
>>
>> So this is my ideal world:
>>   a) Efficient PCH list enabled by default for development pleasure;
>>   b) CIs build without PCH all the time (jdk-submit tier1 included!);
>>
>> Since we don't yet have (a), and (b) seems to be tedious, regardless
>> how many times both Red Hat and
>> SAP people ask for it, disabling PCH by default feels like a good
>> fallback.
> 
> Should just CI builds default to non-PCH, or all builds (that is, should
> "configure" default to non-PCH on linux)? Maybe the former is better --
> one thing that the test numbers here has not shown is if incremental
> recompiles are improved by PCH. My gut feeling is that they really
> should -- once you've created your PCH, subsequent recompiles will be
> faster. So the developer default should perhaps be to keep PCH, and we
> should only configure the CI builds to do without PCH.

I don't know. I usually disable PCH to avoid getting bad surprises once
my patch goes through CI. I think this should be consistent. I waste
more time building with and without PCH and fixing those bugs than I
save (if anything) a few seconds build time.

Roman



Re: Stop using precompiled headers for Linux?

2018-10-30 Thread Roman Kennke
I'd be in favour of disabling by default.

Roman

> Is there any advantage of using precompiled headers on Linux? It's on by
> default and we keep having breakage where someone would forget to add
> #include. The latest instance is JDK-8213148.
> 
> I just turn on precompiled headers explicitly in all my builds. I don't
> see any difference in build time (at least not significant enough for me
> to bother).
> 
> Should we disable it by default on Linux?
> 
> Thanks
> 
> - Ioi
> 
> 



Re: RFR: JDK-8211071: unpack.cpp fails to compile with statement has no effect [-Werror=unused-value]

2018-09-27 Thread Roman Kennke
Hi Christoph & Magnus, thanks for reviewing!

Am 27.09.18 um 08:22 schrieb Langer, Christoph:
> Hi Roman,
> 
> this looks good to me. +1
> 
> Best regards
> Christoph
> 
>> -Original Message-
>> From: build-dev  On Behalf Of
>> Roman Kennke
>> Sent: Mittwoch, 26. September 2018 19:24
>> To: Magnus Ihse Bursie ; core-libs-
>> d...@openjdk.java.net
>> Cc: build-dev@openjdk.java.net
>> Subject: Re: RFR: JDK-8211071: unpack.cpp fails to compile with statement
>> has no effect [-Werror=unused-value]
>>
>> Ping core-libs?
>>
>> Roman
>>
>> Am 25.09.18 um 11:06 schrieb Magnus Ihse Bursie:
>>>
>>>> 25 sep. 2018 kl. 10:21 skrev Roman Kennke :
>>>>
>>>> Not sure this is the correct list. Please redirect as appropriate.
>>>
>>> I believe core-libs is the appropriate place. Cc:d.
>>>
>>>>
>>>> Please review the following proposed change:
>>>>
>>>>
>>>> There are 3 asserts in unpack.cpp which check only constants, and which
>>>> the compiler rejects as 'has no effect' (in fastdebug builds). This
>>>> seems to be caused by:
>>>>
>>>> https://bugs.openjdk.java.net/browse/JDK-8211029
>>>>
>>>> I propose to fix this by defining a STATIC_ASSERT which can evaluate and
>>>> reject this at compile time:
>>>>
>>>> Webrev:
>>>> http://cr.openjdk.java.net/~rkennke/JDK-8211071/webrev.00/
>>>
>>> From a build perspective, this looks good. But please let someone from
>> core-libs review also.
>>>
>>>> Bug:
>>>> https://bugs.openjdk.java.net/browse/JDK-8211071
>>>>
>>>> It might be useful to define the STATIC_ASSERT in a more central place
>>>> to be used elsewhere too. For example, there is a similar #define in
>>>> Hotspot's debug.hpp
>>>
>>> Unfortunately, we do not have a good place to share definitions like this
>> across JDK libraries. :-( The need for that has struck me more than once. For
>> some stuff, we mis-use jni.h. But it would definitely be out of line to add a
>> STATIC_ASSERT there.
>>>
>>> /Magnus
>>>
>>>>
>>>> Thanks, Roman
>>>>
>>>
> 



Re: RFR: JDK-8211071: unpack.cpp fails to compile with statement has no effect [-Werror=unused-value]

2018-09-26 Thread Roman Kennke
Ping core-libs?

Roman

Am 25.09.18 um 11:06 schrieb Magnus Ihse Bursie:
> 
>> 25 sep. 2018 kl. 10:21 skrev Roman Kennke :
>>
>> Not sure this is the correct list. Please redirect as appropriate.
> 
> I believe core-libs is the appropriate place. Cc:d. 
> 
>>
>> Please review the following proposed change:
>>
>>
>> There are 3 asserts in unpack.cpp which check only constants, and which
>> the compiler rejects as 'has no effect' (in fastdebug builds). This
>> seems to be caused by:
>>
>> https://bugs.openjdk.java.net/browse/JDK-8211029
>>
>> I propose to fix this by defining a STATIC_ASSERT which can evaluate and
>> reject this at compile time:
>>
>> Webrev:
>> http://cr.openjdk.java.net/~rkennke/JDK-8211071/webrev.00/
> 
> From a build perspective, this looks good. But please let someone from 
> core-libs review also. 
> 
>> Bug:
>> https://bugs.openjdk.java.net/browse/JDK-8211071
>>
>> It might be useful to define the STATIC_ASSERT in a more central place
>> to be used elsewhere too. For example, there is a similar #define in
>> Hotspot's debug.hpp
> 
> Unfortunately, we do not have a good place to share definitions like this 
> across JDK libraries. :-( The need for that has struck me more than once. For 
> some stuff, we mis-use jni.h. But it would definitely be out of line to add a 
> STATIC_ASSERT there. 
> 
> /Magnus
> 
>>
>> Thanks, Roman
>>
> 



Re: build issues after 8211029 on gcc4.8 and our porting Linux platforms (ppc64(le)/ s390x)

2018-09-25 Thread Roman Kennke
Related, I've also filed:

https://bugs.openjdk.java.net/browse/JDK-8211071

and waiting for review here:
http://mail.openjdk.java.net/pipermail/core-libs-dev/2018-September/055666.html

Thanks,
Roman

> Hello, one additional info: my colleague looking into the compile issues 
> after 8211029 found this error,
> when compiling on linux x86_64:
> 
> === Output from failing command(s) repeated here ===
> * For target support_native_java.base_libnet_DatagramPacket.o:
> In file included from 
> /OpenJDK/8210319/jdk/src/java.base/share/native/libnet/net_util.h:31:0,
> from 
> /OpenJDK/8210319/jdk/src/java.base/share/native/libnet/DatagramPacket.c:27:
> /OpenJDK/8210319/jdk/src/java.base/unix/native/libnet/net_util_md.h:50:7: 
> error: "__solaris__" is not defined [-Werror=undef]
> #elif __solaris__
>^
> cc1: all warnings being treated as errors
> * For target support_native_java.base_libnet_Inet4Address.o:
> In file included from 
> /OpenJDK/8210319/jdk/src/java.base/share/native/libnet/net_util.h:31:0,
>  from 
> /OpenJDK/8210319/jdk/src/java.base/share/native/libnet/Inet4Address.c:29:
> /OpenJDK/8210319/jdk/src/java.base/unix/native/libnet/net_util_md.h:50:7: 
> error: "__solaris__" is not defined [-Werror=undef]
> #elif __solaris__
> 
> Obviously "__solaris__" is not defined on Linux, so I wonder how you 
> could compile this?
> (the coding might need improvement; however, the test should be like 
> #elif defined(__solaris__))
> 
> 
> Best regards, Matthias
> 
> 
> From: Baesken, Matthias
> Sent: Dienstag, 25. September 2018 16:34
> To: 'build-dev@openjdk.java.net' 
> Cc: Schmidt, Lutz ; Doerr, Martin 
> Subject: build issues after 8211029 on gcc4.8 and our porting Linux platforms 
> (ppc64(le)/ s390x)
> 
> 
> Hello, it looks like
> 
> 8211029: Have a common set of enabled warnings for all native libraries
> 
> breaks a lot of our Linux based builds.
> Our gcc 4.8   misses some of the introduced command line options :
> 
> cc1plus: error: unrecognized command line option 
> "-Wno-misleading-indentation" [-Werror]
> cc1plus: error: unrecognized command line option "-Wno-implicit-fallthrough" 
> [-Werror]
> cc1plus: error: unrecognized command line option "-Wno-int-in-bool-context" 
> [-Werror]
> 
> Additionally ,  the added -Werror=switch  triggers a LOT of errors on our  
> porting platforms, e.g.  linux ppc64 le :
> 
> /build_ci_jdk_jdk_linuxppc64le/src/hotspot/cpu/ppc/c1_LIRGenerator_ppc.cpp:719:10:
>  error: enumeration value '_Double_valueOf' not handled in switch 
> [-Werror=switch]
> /build_ci_jdk_jdk_linuxppc64le/src/hotspot/cpu/ppc/c1_LIRGenerator_ppc.cpp:719:10:
>  error: enumeration value '_forEachRemaining' not handled in switch 
> [-Werror=switch]
> /build_ci_jdk_jdk_linuxppc64le/src/hotspot/cpu/ppc/c1_LIRGenerator_ppc.cpp:719:10:
>  error: enumeration value 'ID_LIMIT' not handled in switch [-Werror=switch]
> /build_ci_jdk_jdk_linuxppc64le/src/hotspot/cpu/ppc/c1_LIRGenerator_ppc.cpp:719:10:
>  error: enumeration value 'LAST_COMPILER_INLINE' not handled in switch 
> [-Werror=switch]
> /build_ci_jdk_jdk_linuxppc64le/src/hotspot/cpu/ppc/c1_LIRGenerator_ppc.cpp:719:10:
>  error: enumeration value 'FIRST_MH_SIG_POLY' not handled in switch 
> [-Werror=switch]
> /build_ci_jdk_jdk_linuxppc64le/src/hotspot/cpu/ppc/c1_LIRGenerator_ppc.cpp:719:10:
>  error: enumeration value 'FIRST_MH_STATIC' not handled in switch 
> [-Werror=switch]
> /build_ci_jdk_jdk_linuxppc64le/src/hotspot/cpu/ppc/c1_LIRGenerator_ppc.cpp:719:10:
>  error: enumeration value 'LAST_MH_SIG_POLY' not handled in switch 
> [-Werror=switch]
> /build_ci_jdk_jdk_linuxppc64le/src/hotspot/cpu/ppc/c1_LIRGenerator_ppc.cpp:719:10:
>  error: enumeration value 'FIRST_ID' not handled in switch [-Werror=switch]
> 
> Could we get rid of  the  -Werror=switch   at least for now ?
> Maybe it should be disabled for ppc64 / ppc64le  /  s390x  like it has been 
> done for zero ?
> 
> 4.13+
> 4.14 ifeq ($(call check-jvm-feature, zero), true)
> 4.15-  DISABLED_WARNINGS_gcc += return-type
> 4.16+  DISABLED_WARNINGS_gcc += return-type switch
> 4.17 endif
> 
> 
> Regarding the unrecognized command line options ,  I suggest to still support 
> older gcc versions;
> what would be the minimal gcc version that supports 8211029 ?
> 
> Best regards, Matthias
> 




RFR: JDK-8211071: unpack.cpp fails to compile with statement has no effect [-Werror=unused-value]

2018-09-25 Thread Roman Kennke
Not sure this is the correct list. Please redirect as appropriate.

Please review the following proposed change:


There are 3 asserts in unpack.cpp which check only constants, and which
the compiler rejects as 'has no effect' (in fastdebug builds). This
seems to be caused by:

https://bugs.openjdk.java.net/browse/JDK-8211029

I propose to fix this by defining a STATIC_ASSERT which can evaluate and
reject this at compile time:

Webrev:
http://cr.openjdk.java.net/~rkennke/JDK-8211071/webrev.00/
Bug:
https://bugs.openjdk.java.net/browse/JDK-8211071

It might be useful to define the STATIC_ASSERT in a more central place
to be used elsewhere too. For example, there is a similar #define in
Hotspot's debug.hpp

Thanks, Roman



RFR: JDK-8210221: missing-field-initializer warnings/errors while building vmTestBase

2018-08-30 Thread Roman Kennke
Hi there,

JDK-8209611 seems to cause many missing-field-initializer warning-errors
like below:

I am using gcc 4.8.5

Compiling libvmdeath001.cpp (for libvmdeath001.so)
In file included from
/home/rkennke/src/openjdk/shenandoah-jdk/test/hotspot/jtreg/vmTestbase/nsk/share/jvmti/unit/libHeap.cpp:29:0:
/home/rkennke/src/openjdk/shenandoah-jdk/test/hotspot/jtreg/vmTestbase/nsk/share/jvmti/jvmti_FollowRefObjects.cpp:38:44:
error: missing initializer for member
'_jvmtiHeapCallbacks::heap_iteration_callback'
[-Werror=missing-field-initializers]
 jvmtiHeapCallbacks g_wrongHeapCallbacks = {};


There are many more of those in the build log.

I propose to disable this warning for now.

Bug:
https://bugs.openjdk.java.net/browse/JDK-8210221
Webrev:
http://cr.openjdk.java.net/~rkennke/JDK-8210221/webrev.00/

What do you think?

Roman



Re: RFR: 8171853: Remove Shark compiler

2017-10-15 Thread Roman Kennke


Hi David,

thanks for reviewing and testing!

The interaction between JEPs and patches going in is not really clear to 
me, nor is it well documented. For example, we're already pushing 
patches for JEP 304: Garbage Collection Interface, even though it's only 
in 'candidate' state...


In any case, I'll ping Mark Reinhold about moving the Shark JEP forward.

Thanks again,
Roman

My internal JPRT run went fine. So this just needs a build team 
signoff from the perspective of the patch.


However, as this has had a JEP submitted for it, the code changes can 
not be pushed until the JEP has been targeted.


Thanks,
David

On 16/10/2017 8:08 AM, David Holmes wrote:

Looks good.

Thanks,
David

On 16/10/2017 8:00 AM, Roman Kennke wrote:


Ok, I fixed all the comments you mentioned.

Differential (against webrev.01):
http://cr.openjdk.java.net/~rkennke/8171853/webrev.03.diff/ 
<http://cr.openjdk.java.net/%7Erkennke/8171853/webrev.03.diff/>

Full webrev:
http://cr.openjdk.java.net/~rkennke/8171853/webrev.03/ 
<http://cr.openjdk.java.net/%7Erkennke/8171853/webrev.03/>


Roman


Just spotted this:

./hotspot/jtreg/compiler/whitebox/CompilerWhiteBoxTest.java: /** 
{@code CompLevel::CompLevel_full_optimization} -- C2 or Shark */


David

On 16/10/2017 7:25 AM, David Holmes wrote:

On 16/10/2017 7:01 AM, Roman Kennke wrote:

Hi David,

thanks!

I'm uploading a 2nd revision of the patch that excludes the 
generated-configure.sh part, and adds a smallish Zero-related fix.


http://cr.openjdk.java.net/~rkennke/8171853/webrev.01/ 
<http://cr.openjdk.java.net/%7Erkennke/8171853/webrev.01/>


Can you point me to the exact change please as I don't want to 
re-examine it all. :)


I'll pull this in and do a test build run internally.

Thanks,
David


Thanks, Roman



Hi Roman,

The build changes must be reviewed on build-dev - now cc'd.

Thanks,
David

On 15/10/2017 8:41 AM, Roman Kennke wrote:
The JEP to remove the Shark compiler has received exclusively 
positive feedback (JDK-8189173) on zero-dev. So here comes the 
big patch to remove it.


What I have done:

grep -i -R shark src
grep -i -R shark make
grep -i -R shark doc
grep -i -R shark doc

and purged any reference to shark. Almost everything was 
straightforward.


The only things I wasn't really sure of:

- in globals.hpp, I re-arranged the KIND_* bits to account for 
the gap that removing KIND_SHARK left. I hope that's good?
- in relocInfo_zero.hpp I put a ShouldNotCallThis() in 
pd_address_in_code(), I am not sure it is the right thing to 
do. If not, what *would* be the right thing?


Then of course I did:

rm -rf src/hotspot/share/shark

I also went through the build machinery and removed stuff 
related to Shark and LLVM libs.


Now the only references in the whole JDK tree to shark is a 
'Shark Bay' in a timezone file, and 'Wireshark' in some tests ;-)


I tested by building a regular x86 JVM and running JTREG tests. 
All looks fine.


- I could not build zero because it seems broken because of the 
recent Atomic::* changes
- I could not test any of the other arches that seemed to 
reference Shark (arm and sparc)


Here's the full webrev:

http://cr.openjdk.java.net/~rkennke/8171853/webrev.00/ 
<http://cr.openjdk.java.net/%7Erkennke/8171853/webrev.00/>


Can I get a review on this?

Thanks, Roman









Re: RFR: 8171853: Remove Shark compiler

2017-10-15 Thread Roman Kennke


Ok, I fixed all the comments you mentioned.

Differential (against webrev.01):
http://cr.openjdk.java.net/~rkennke/8171853/webrev.03.diff/ 
<http://cr.openjdk.java.net/%7Erkennke/8171853/webrev.03.diff/>

Full webrev:
http://cr.openjdk.java.net/~rkennke/8171853/webrev.03/ 
<http://cr.openjdk.java.net/%7Erkennke/8171853/webrev.03/>


Roman


Just spotted this:

./hotspot/jtreg/compiler/whitebox/CompilerWhiteBoxTest.java: /** 
{@code CompLevel::CompLevel_full_optimization} -- C2 or Shark */


David

On 16/10/2017 7:25 AM, David Holmes wrote:

On 16/10/2017 7:01 AM, Roman Kennke wrote:

Hi David,

thanks!

I'm uploading a 2nd revision of the patch that excludes the 
generated-configure.sh part, and adds a smallish Zero-related fix.


http://cr.openjdk.java.net/~rkennke/8171853/webrev.01/ 
<http://cr.openjdk.java.net/%7Erkennke/8171853/webrev.01/>


Can you point me to the exact change please as I don't want to 
re-examine it all. :)


I'll pull this in and do a test build run internally.

Thanks,
David


Thanks, Roman



Hi Roman,

The build changes must be reviewed on build-dev - now cc'd.

Thanks,
David

On 15/10/2017 8:41 AM, Roman Kennke wrote:
The JEP to remove the Shark compiler has received exclusively 
positive feedback (JDK-8189173) on zero-dev. So here comes the big 
patch to remove it.


What I have done:

grep -i -R shark src
grep -i -R shark make
grep -i -R shark doc
grep -i -R shark doc

and purged any reference to shark. Almost everything was 
straightforward.


The only things I wasn't really sure of:

- in globals.hpp, I re-arranged the KIND_* bits to account for the 
gap that removing KIND_SHARK left. I hope that's good?
- in relocInfo_zero.hpp I put a ShouldNotCallThis() in 
pd_address_in_code(), I am not sure it is the right thing to do. 
If not, what *would* be the right thing?


Then of course I did:

rm -rf src/hotspot/share/shark

I also went through the build machinery and removed stuff related 
to Shark and LLVM libs.


Now the only references in the whole JDK tree to shark is a 'Shark 
Bay' in a timezone file, and 'Wireshark' in some tests ;-)


I tested by building a regular x86 JVM and running JTREG tests. 
All looks fine.


- I could not build zero because it seems broken because of the 
recent Atomic::* changes
- I could not test any of the other arches that seemed to 
reference Shark (arm and sparc)


Here's the full webrev:

http://cr.openjdk.java.net/~rkennke/8171853/webrev.00/ 
<http://cr.openjdk.java.net/%7Erkennke/8171853/webrev.00/>


Can I get a review on this?

Thanks, Roman







Re: RFR: 8171853: Remove Shark compiler

2017-10-15 Thread Roman Kennke

Am 15.10.2017 um 23:25 schrieb David Holmes:

On 16/10/2017 7:01 AM, Roman Kennke wrote:

Hi David,

thanks!

I'm uploading a 2nd revision of the patch that excludes the 
generated-configure.sh part, and adds a smallish Zero-related fix.


http://cr.openjdk.java.net/~rkennke/8171853/webrev.01/ 
<http://cr.openjdk.java.net/%7Erkennke/8171853/webrev.01/>


Can you point me to the exact change please as I don't want to 
re-examine it all. :)
Oops, sorry. The diff between 00 and 01 is this (apart from 
generated-configure.sh):


diff --git a/src/hotspot/share/utilities/vmError.cpp 
b/src/hotspot/share/utilities/vmError.cpp

--- a/src/hotspot/share/utilities/vmError.cpp
+++ b/src/hotspot/share/utilities/vmError.cpp
@@ -192,6 +192,7 @@
 st->cr();

 // Print the frames
+    StackFrameStream sfs(jt);
 for(int i = 0; !sfs.is_done(); sfs.next(), i++) {
   sfs.current()->zero_print_on_error(i, st, buf, buflen);
   st->cr();

I.e. I added back the sfs variable that I accidentally removed in webrev.00.




Re: RFR: 8171853: Remove Shark compiler

2017-10-15 Thread Roman Kennke

Hi David,

thanks for reviewing!



One observation in src/hotspot/cpu/zero/sharedRuntime_zero.cpp, these 
includes would seem to be impossible:


  38 #ifdef COMPILER1
  39 #include "c1/c1_Runtime1.hpp"
  40 #endif
  41 #ifdef COMPILER2
  42 #include "opto/runtime.hpp"
  43 #endif

no?


I have no idea. It is at least theoretically possible to have a platform 
with C1 and/or C2 support based on the Zero interpreter? I'm leaving 
that in for now as it was pre-existing and not related to Shark removal, ok?




In src/hotspot/share/ci/ciEnv.cpp you can just delete the comment 
entirely as it's obviously C2:


if (is_c2_compile(comp_level)) { // C2

Ditto in src/hotspot/share/compiler/compileBroker.cpp

! // C2
  make_thread(name_buffer, _c2_compile_queue, counters, 
_compilers[1], compiler_thread, CHECK);


Ok, right. For consistency, I also removed the // C1 comment at the similarly 
obvious is_c1_compile() call in ciEnv.cpp :-)


New webrev:

http://cr.openjdk.java.net/~rkennke/8171853/webrev.02/ 



Roman


Re: RFR: 8171853: Remove Shark compiler

2017-10-15 Thread Roman Kennke

Hi David,

thanks!

I'm uploading a 2nd revision of the patch that excludes the 
generated-configure.sh part, and adds a smallish Zero-related fix.


http://cr.openjdk.java.net/~rkennke/8171853/webrev.01/ 
<http://cr.openjdk.java.net/%7Erkennke/8171853/webrev.01/>


Thanks, Roman



Hi Roman,

The build changes must be reviewed on build-dev - now cc'd.

Thanks,
David

On 15/10/2017 8:41 AM, Roman Kennke wrote:
The JEP to remove the Shark compiler has received exclusively 
positive feedback (JDK-8189173) on zero-dev. So here comes the big 
patch to remove it.


What I have done:

grep -i -R shark src
grep -i -R shark make
grep -i -R shark doc
grep -i -R shark doc

and purged any reference to shark. Almost everything was 
straightforward.


The only things I wasn't really sure of:

- in globals.hpp, I re-arranged the KIND_* bits to account for the 
gap that removing KIND_SHARK left. I hope that's good?
- in relocInfo_zero.hpp I put a ShouldNotCallThis() in 
pd_address_in_code(), I am not sure it is the right thing to do. If 
not, what *would* be the right thing?


Then of course I did:

rm -rf src/hotspot/share/shark

I also went through the build machinery and removed stuff related to 
Shark and LLVM libs.


Now the only references in the whole JDK tree to shark is a 'Shark 
Bay' in a timezone file, and 'Wireshark' in some tests ;-)


I tested by building a regular x86 JVM and running JTREG tests. All 
looks fine.


- I could not build zero because it seems broken because of the 
recent Atomic::* changes
- I could not test any of the other arches that seemed to reference 
Shark (arm and sparc)


Here's the full webrev:

http://cr.openjdk.java.net/~rkennke/8171853/webrev.00/ 
<http://cr.openjdk.java.net/%7Erkennke/8171853/webrev.00/>


Can I get a review on this?

Thanks, Roman





Re: [PATCH] Fix Shark build in JDK9

2015-01-13 Thread Roman Kennke
Hi David,


  - In ciTypeFlow.cpp only include some files and code only when building
  C2. I don't think that code makes sense outside of C2. (That's the issue
  that you've seen).
 
 Looks okay but someone from compiler team needs to comment. There may be 
 other code that need adjusting.

It might be less intrusive to guard this with #ifndef SHARK instead of
#ifdef COMPILER2. I don't think it makes a difference though.

  - In allocation.hpp, exclude the operator overrides for new etc, LLVM
  does allocations itself, and this would blow up.
 
 I'm intrigued as the allocation strategy is not tied to the compiler but 
 pervasive to the whole VM runtime.

In a comment it says this whole block is disabled in PRODUCT versions,
because of 3rd party code that might be in use. This is basically the
situation with Shark: we need to be able to do allocations in LLVM, and
LLVM doesn't know about Hotspot's allocation safeguards. That's why I
disable it for Shark builds.

  - In handles.hpp and handles.inline.hpp, I added an argument
  check_in_stack so that I can disable the in-stack check for the
  SharkNativeWrapper. The SharkNativeWrapper is allocated on-heap and the
  methodHandle inside the SharkNativeWrapper. I could have excluded that
  check altogether using an #ifndef SHARK, but the way I implemented it
  seemed more fine-grained.
 
 I'd prefer an ifndef SHARK rather than change the API.

Yeah ok, I understand that. However, I do not know how I can do an
#ifdef inside a #define block. I'd need to duplicate the whole block,
and put #ifdefs around those. This raises the question about code
maintainability though, any changes in one block need to be done in the
other block as well. I can do that if you prefer over the API change.
That being said, multi-line #defines are evil anyway ;-)

Roman




Re: [PATCH] Fix Shark build in JDK9

2015-01-13 Thread Roman Kennke
Ok I think I found a way to avoid copying the whole #define block just
for one line. I define that line before (and empty in case of SHARK) and
use that inside the big #define.

http://cr.openjdk.java.net/~rkennke/shark-build-hotspot/webrev.02/

Please review and push if ok.

Roman

On Friday, 09.01.2015 at 15:27 +1000, David Holmes wrote:
 Hi Roman,
 
 Commenting on the hotspot changes ...
 
 On 8/01/2015 9:01 PM, Roman Kennke wrote:
  Hi Erik,
 
  I'm CC-ing hotspot-dev for review of Hotspot code related changes.
 
  Yes, some additional changes to Hotspot are required. This is the full
  set of changes needed to build and run Shark:
 
  http://cr.openjdk.java.net/~rkennke/shark-build-hotspot/webrev.01/
 
  In detail:
 
  - In the Makefile fix a typo to be able to build unzipped debuginfo.
 
 Ok.
 
  - In ciTypeFlow.cpp only include some files and code only when building
  C2. I don't think that code makes sense outside of C2. (That's the issue
  that you've seen).
 
 Looks okay but someone from compiler team needs to comment. There may be 
 other code that need adjusting.
 
  - In systemDictionary.cpp, exclude some code for Shark that creates and
  checks native wrappers for method handle intrinsics. Invokedynamic stuff
  is not yet implemented in Shark, so we can't do this.
 
 Ok.
 
  - In allocation.hpp, exclude the operator overrides for new etc, LLVM
  does allocations itself, and this would blow up.
 
 I'm intrigued as the allocation strategy is not tied to the compiler but 
 pervasive to the whole VM runtime.
 
  - In handles.hpp and handles.inline.hpp, I added an argument
  check_in_stack so that I can disable the in-stack check for the
  SharkNativeWrapper. The SharkNativeWrapper is allocated on-heap and the
  methodHandle inside the SharkNativeWrapper. I could have excluded that
  check altogether using an #ifndef SHARK, but the way I implemented it
  seemed more fine-grained.
 
 I'd prefer an ifndef SHARK rather than change the API.
 
 Thanks,
 David
 
  - In SharkCompiler, I pulled out the code to initialize LLVM into its
  own method, and the call to set_state(initialized) out of the
  execution-engine-lock. This would otherwise complain about wrong
  lock-ordering.
  - In SharkRuntime, I changed the cast from intptr_t to oop to work with
  the new picky compilers ;-)
 
  Please review.
 
  Thanks and best regards,
  Roman
 
  On Thursday, 08.01.2015 at 11:03 +0100, Erik Joelsson wrote:
  Hello Roman,
 
  This addition looks good to me.
 
  Thinking about what the others said, it might be inconvenient to have
  all this be pushed to different forests. I tried applying all the
  changes to jdk9/hs-rt, but I can't seem to build zeroshark.. Did you
  have more hotspot changes to be able to finish the build?
 
  My failure is:
 
 ciTypeFlow.o
  /localhome/hg/jdk9-hs-rt/hotspot/src/share/vm/ci/ciTypeFlow.cpp
  In file included from
  /localhome/hg/jdk9-hs-rt/hotspot/src/share/vm/opto/regmask.hpp:29:0,
 from
  /localhome/hg/jdk9-hs-rt/hotspot/src/share/vm/opto/compile.hpp:40,
 from
  /localhome/hg/jdk9-hs-rt/hotspot/src/share/vm/ci/ciTypeFlow.cpp:38:
  /localhome/hg/jdk9-hs-rt/hotspot/src/share/vm/opto/optoreg.hpp:40:39:
  fatal error: adfiles/adGlobals_zero.hpp: No such file or directory
 
From what I can see, adfiles are not generated for zero or zeroshark
  builds, so the include should probably be removed.
 
  Would you still like me to push what you currently have to hs-rt?
 
  /Erik
 
  On 2015-01-07 21:21, Roman Kennke wrote:
  Hi Erik,
 
  When I built Zero and Shark on my Raspberry Pi, I noticed another
  problem when copying jvm.cfg into the right places. I fixed it in a
  similar way as I did for the SA stuff:
 
  http://cr.openjdk.java.net/~rkennke/shark-build-jdk/webrev.02/
 
  I think that should be all for now.
 
  Please push that into JDK9 if you think that's fine.
 
  Best regards,
  Roman
 
  On Wednesday, 07.01.2015 at 17:49 +0100, Erik Joelsson wrote:
  On 2015-01-07 17:29, Roman Kennke wrote:
  On Wednesday, 07.01.2015 at 17:16 +0100, Erik Joelsson wrote:
  On 2015-01-07 17:11, Roman Kennke wrote:
  Hi Erik,
 
  Do you have a bug for this?
  No.
 
  I haven't pushed any changes to JDK in a while. Is it possible in the
  meantime for me to create my own bugs? Otherwise, please file one for
  me :-)
  You should be able to log in to https://bugs.openjdk.java.net and 
  create
  bugs since you have an OpenJDK identity.
  Done:
 
  https://bugs.openjdk.java.net/browse/JDK-8068598
 
  While I'm at it, is it possible for me to push my own changes (except
  hotspot of course)? If yes, what needs to be done for regenerating the
  configure files? Simply run autogen.sh in common/autoconf with whatever
  version of autotools I have? Or doesn't it make sense at all b/c you
  need to regenerate your closed scripts?
  It requires you to run common/autogen.sh yes, and that will require you
  to have autoconf 2.69 installed

Re: [PATCH] Fix Shark build in JDK9

2015-01-08 Thread Roman Kennke
On Thursday, 08.01.2015 at 21:20 +1000, David Holmes wrote:
 On 8/01/2015 9:05 PM, Roman Kennke wrote:
  Oh and notice that if you try to build it yourself, use a version of
  LLVM < 3.5. In 3.5, C++11 is used/required, and OpenJDK doesn't support
  C++11 yet. (are there any plans about this?) I'd recommend LLVM 3.4.2.
 
 C++11? We still have workarounds for compilers that don't support C99 :(

Well ok, fair enough ;-)

I guess for my purposes it's good enough to pass some GCC flag to turn
on C++11 support, so that the new LLVM headers can be used. That's all
that I need.

Roman

 
 David
 
  Roman
 
  On Thursday, 08.01.2015 at 11:03 +0100, Erik Joelsson wrote:
  Hello Roman,
 
  This addition looks good to me.
 
  Thinking about what the others said, it might be inconvenient to have
  all this be pushed to different forests. I tried applying all the
  changes to jdk9/hs-rt, but I can't seem to build zeroshark.. Did you
  have more hotspot changes to be able to finish the build?
 
  My failure is:
 
 ciTypeFlow.o
  /localhome/hg/jdk9-hs-rt/hotspot/src/share/vm/ci/ciTypeFlow.cpp
  In file included from
  /localhome/hg/jdk9-hs-rt/hotspot/src/share/vm/opto/regmask.hpp:29:0,
 from
  /localhome/hg/jdk9-hs-rt/hotspot/src/share/vm/opto/compile.hpp:40,
 from
  /localhome/hg/jdk9-hs-rt/hotspot/src/share/vm/ci/ciTypeFlow.cpp:38:
  /localhome/hg/jdk9-hs-rt/hotspot/src/share/vm/opto/optoreg.hpp:40:39:
  fatal error: adfiles/adGlobals_zero.hpp: No such file or directory
 
From what I can see, adfiles are not generated for zero or zeroshark
  builds, so the include should probably be removed.
 
  Would you still like me to push what you currently have to hs-rt?
 
  /Erik
 
  On 2015-01-07 21:21, Roman Kennke wrote:
  Hi Erik,
 
  When I built Zero and Shark on my Raspberry Pi, I noticed another
  problem when copying jvm.cfg into the right places. I fixed it in a
  similar way as I did for the SA stuff:
 
  http://cr.openjdk.java.net/~rkennke/shark-build-jdk/webrev.02/
 
  I think that should be all for now.
 
  Please push that into JDK9 if you think that's fine.
 
  Best regards,
  Roman
 
  On Wednesday, 07.01.2015 at 17:49 +0100, Erik Joelsson wrote:
  On 2015-01-07 17:29, Roman Kennke wrote:
  On Wednesday, 07.01.2015 at 17:16 +0100, Erik Joelsson wrote:
  On 2015-01-07 17:11, Roman Kennke wrote:
  Hi Erik,
 
  Do you have a bug for this?
  No.
 
  I haven't pushed any changes to JDK in a while. Is it possible in the
  meantime for me to create my own bugs? Otherwise, please file one for
  me :-)
  You should be able to log in to https://bugs.openjdk.java.net and 
  create
  bugs since you have an OpenJDK identity.
  Done:
 
  https://bugs.openjdk.java.net/browse/JDK-8068598
 
  While I'm at it, is it possible for me to push my own changes (except
  hotspot of course)? If yes, what needs to be done for regenerating the
  configure files? Simply run autogen.sh in common/autoconf with whatever
  version of autotools I have? Or doesn't it make sense at all b/c you
  need to regenerate your closed scripts?
  It requires you to run common/autogen.sh yes, and that will require you
  to have autoconf 2.69 installed. But since we also need to regenerate
  the closed version, I can take care of the push for you. Will do it
  tomorrow if that's ok?
 
  /Erik
  Roman
 
 
 
 
 




Re: [PATCH] Fix Shark build in JDK9

2015-01-08 Thread Roman Kennke
Hi Erik,

I'm CC-ing hotspot-dev for review of Hotspot code related changes.

Yes, some additional changes to Hotspot are required. This is the full
set of changes needed to build and run Shark:

http://cr.openjdk.java.net/~rkennke/shark-build-hotspot/webrev.01/

In detail:

- In the Makefile fix a typo to be able to build unzipped debuginfo.
- In ciTypeFlow.cpp only include some files and code only when building
C2. I don't think that code makes sense outside of C2. (That's the issue
that you've seen).
- In systemDictionary.cpp, exclude some code for Shark that creates and
checks native wrappers for method handle intrinsics. Invokedynamic stuff
is not yet implemented in Shark, so we can't do this.
- In allocation.hpp, exclude the operator overrides for new etc, LLVM
does allocations itself, and this would blow up.
- In handles.hpp and handles.inline.hpp, I added an argument
check_in_stack so that I can disable the in-stack check for the
SharkNativeWrapper. The SharkNativeWrapper is allocated on-heap and the
methodHandle inside the SharkNativeWrapper. I could have excluded that
check altogether using an #ifndef SHARK, but the way I implemented it
seemed more fine-grained.
- In SharkCompiler, I pulled out the code to initialize LLVM into its
own method, and the call to set_state(initialized) out of the
execution-engine-lock. This would otherwise complain about wrong
lock-ordering.
- In SharkRuntime, I changed the cast from intptr_t to oop to work with
the new picky compilers ;-)

Please review.

Thanks and best regards,
Roman

On Thursday, 08.01.2015 at 11:03 +0100, Erik Joelsson wrote:
 Hello Roman,
 
 This addition looks good to me.
 
 Thinking about what the others said, it might be inconvenient to have 
 all this be pushed to different forests. I tried applying all the 
 changes to jdk9/hs-rt, but I can't seem to build zeroshark.. Did you 
 have more hotspot changes to be able to finish the build?
 
 My failure is:
 
   ciTypeFlow.o 
 /localhome/hg/jdk9-hs-rt/hotspot/src/share/vm/ci/ciTypeFlow.cpp
 In file included from 
 /localhome/hg/jdk9-hs-rt/hotspot/src/share/vm/opto/regmask.hpp:29:0,
   from 
 /localhome/hg/jdk9-hs-rt/hotspot/src/share/vm/opto/compile.hpp:40,
   from 
 /localhome/hg/jdk9-hs-rt/hotspot/src/share/vm/ci/ciTypeFlow.cpp:38:
 /localhome/hg/jdk9-hs-rt/hotspot/src/share/vm/opto/optoreg.hpp:40:39: 
 fatal error: adfiles/adGlobals_zero.hpp: No such file or directory
 
  From what I can see, adfiles are not generated for zero or zeroshark 
 builds, so the include should probably be removed.
 
 Would you still like me to push what you currently have to hs-rt?
 
 /Erik
 
 On 2015-01-07 21:21, Roman Kennke wrote:
  Hi Erik,
 
  When I built Zero and Shark on my Raspberry Pi, I noticed another
  problem when copying jvm.cfg into the right places. I fixed it in a
  similar way as I did for the SA stuff:
 
  http://cr.openjdk.java.net/~rkennke/shark-build-jdk/webrev.02/
 
  I think that should be all for now.
 
  Please push that into JDK9 if you think that's fine.
 
  Best regards,
  Roman
 
  On Wednesday, 07.01.2015 at 17:49 +0100, Erik Joelsson wrote:
  On 2015-01-07 17:29, Roman Kennke wrote:
  On Wednesday, 07.01.2015 at 17:16 +0100, Erik Joelsson wrote:
  On 2015-01-07 17:11, Roman Kennke wrote:
  Hi Erik,
 
  Do you have a bug for this?
  No.
 
  I haven't pushed any changes to JDK in a while. Is it possible in the
  meantime for me to create my own bugs? Otherwise, please file one for
  me :-)
  You should be able to log in to https://bugs.openjdk.java.net and create
  bugs since you have an OpenJDK identity.
  Done:
 
  https://bugs.openjdk.java.net/browse/JDK-8068598
 
  While I'm at it, is it possible for me to push my own changes (except
  hotspot of course)? If yes, what needs to be done for regenerating the
  configure files? Simply run autogen.sh in common/autoconf with whatever
  version of autotools I have? Or doesn't it make sense at all b/c you
  need to regenerate your closed scripts?
  It requires you to run common/autogen.sh yes, and that will require you
  to have autoconf 2.69 installed. But since we also need to regenerate
  the closed version, I can take care of the push for you. Will do it
  tomorrow if that's ok?
 
  /Erik
  Roman
 
 
 




Re: [PATCH] Fix Shark build in JDK9

2015-01-08 Thread Roman Kennke
Oh and notice that if you try to build it yourself, use a version of
LLVM < 3.5. In 3.5, C++11 is used/required, and OpenJDK doesn't support
C++11 yet. (are there any plans about this?) I'd recommend LLVM 3.4.2.

Roman

On Thursday, 08.01.2015 at 11:03 +0100, Erik Joelsson wrote:
 Hello Roman,
 
 This addition looks good to me.
 
 Thinking about what the others said, it might be inconvenient to have 
 all this be pushed to different forests. I tried applying all the 
 changes to jdk9/hs-rt, but I can't seem to build zeroshark.. Did you 
 have more hotspot changes to be able to finish the build?
 
 My failure is:
 
   ciTypeFlow.o 
 /localhome/hg/jdk9-hs-rt/hotspot/src/share/vm/ci/ciTypeFlow.cpp
 In file included from 
 /localhome/hg/jdk9-hs-rt/hotspot/src/share/vm/opto/regmask.hpp:29:0,
   from 
 /localhome/hg/jdk9-hs-rt/hotspot/src/share/vm/opto/compile.hpp:40,
   from 
 /localhome/hg/jdk9-hs-rt/hotspot/src/share/vm/ci/ciTypeFlow.cpp:38:
 /localhome/hg/jdk9-hs-rt/hotspot/src/share/vm/opto/optoreg.hpp:40:39: 
 fatal error: adfiles/adGlobals_zero.hpp: No such file or directory
 
  From what I can see, adfiles are not generated for zero or zeroshark 
 builds, so the include should probably be removed.
 
 Would you still like me to push what you currently have to hs-rt?
 
 /Erik
 
 On 2015-01-07 21:21, Roman Kennke wrote:
  Hi Erik,
 
  When I built Zero and Shark on my Raspberry Pi, I noticed another
  problem when copying jvm.cfg into the right places. I fixed it in a
  similar way as I did for the SA stuff:
 
  http://cr.openjdk.java.net/~rkennke/shark-build-jdk/webrev.02/
 
  I think that should be all for now.
 
  Please push that into JDK9 if you think that's fine.
 
  Best regards,
  Roman
 
  On Wednesday, 07.01.2015 at 17:49 +0100, Erik Joelsson wrote:
  On 2015-01-07 17:29, Roman Kennke wrote:
  On Wednesday, 07.01.2015 at 17:16 +0100, Erik Joelsson wrote:
  On 2015-01-07 17:11, Roman Kennke wrote:
  Hi Erik,
 
  Do you have a bug for this?
  No.
 
  I haven't pushed any changes to JDK in a while. Is it possible in the
  meantime for me to create my own bugs? Otherwise, please file one for
  me :-)
  You should be able to log in to https://bugs.openjdk.java.net and create
  bugs since you have an OpenJDK identity.
  Done:
 
  https://bugs.openjdk.java.net/browse/JDK-8068598
 
  While I'm at it, is it possible for me to push my own changes (except
  hotspot of course)? If yes, what needs to be done for regenerating the
  configure files? Simply run autogen.sh in common/autoconf with whatever
  version of autotools I have? Or doesn't it make sense at all b/c you
  need to regenerate your closed scripts?
  It requires you to run common/autogen.sh yes, and that will require you
  to have autoconf 2.69 installed. But since we also need to regenerate
  the closed version, I can take care of the push for you. Will do it
  tomorrow if that's ok?
 
  /Erik
  Roman
 
 
 




Re: [PATCH] Fix Shark build in JDK9

2015-01-07 Thread Roman Kennke
Hi Erik,

When I built Zero and Shark on my Raspberry Pi, I noticed another
problem when copying jvm.cfg into the right places. I fixed it in a
similar way as I did for the SA stuff:

http://cr.openjdk.java.net/~rkennke/shark-build-jdk/webrev.02/

I think that should be all for now.

Please push that into JDK9 if you think that's fine.

Best regards,
Roman

On Wednesday, 07.01.2015 at 17:49 +0100, Erik Joelsson wrote:
 On 2015-01-07 17:29, Roman Kennke wrote:
  On Wednesday, 07.01.2015 at 17:16 +0100, Erik Joelsson wrote:
  On 2015-01-07 17:11, Roman Kennke wrote:
  Hi Erik,
 
  Do you have a bug for this?
  No.
 
  I haven't pushed any changes to JDK in a while. Is it possible in the
  meantime for me to create my own bugs? Otherwise, please file one for
  me :-)
  You should be able to log in to https://bugs.openjdk.java.net and create
  bugs since you have an OpenJDK identity.
  Done:
 
  https://bugs.openjdk.java.net/browse/JDK-8068598
 
  While I'm at it, is it possible for me to push my own changes (except
  hotspot of course)? If yes, what needs to be done for regenerating the
  configure files? Simply run autogen.sh in common/autoconf with whatever
  version of autotools I have? Or doesn't it make sense at all b/c you
  need to regenerate your closed scripts?
 It requires you to run common/autogen.sh yes, and that will require you 
 to have autoconf 2.69 installed. But since we also need to regenerate 
 the closed version, I can take care of the push for you. Will do it 
 tomorrow if that's ok?
 
 /Erik
  Roman
 
 




Re: [PATCH] Fix Shark build in JDK9

2015-01-07 Thread Roman Kennke
Hi Erik,

  I made some fixes to the build machinery to be able to build Shark:
 
  http://cr.openjdk.java.net/~rkennke/shark-build-top/
 Looks fine, but the generated configure script needs to be generated by 
 the script as it also updates a timestamp inside the file. We will still 
 need to update the closed version of the generated script in sync with 
 this. I will be happy to push both when review has passed if you like.

That would be great.

  http://cr.openjdk.java.net/~rkennke/shark-build-hotspot/
 That looks like a simple typo. Looks good to me. This is in hotspot 
 however so will need to go through a hotspot forest and requires 2 
 reviewers.

ok, I will finish my code-changes to hotspot and propose the whole bunch
to hotspot-dev separately as you suggested.

  http://cr.openjdk.java.net/~rkennke/shark-build-jdk/
 Is the contents of the conditionals for SERVER, ZERO and ZEROSHARK the 
 exact same? Perhaps change into one conditional like this?

Please have a look at the updated simpler patch I posted right after the
first one:

http://cr.openjdk.java.net/~rkennke/shark-build-jdk/webrev.01/

Regards,
Roman




Re: [PATCH] Fix Shark build in JDK9

2015-01-07 Thread Roman Kennke
On Wednesday, 07.01.2015 at 17:49 +0100, Erik Joelsson wrote:
 On 2015-01-07 17:29, Roman Kennke wrote:
  On Wednesday, 07.01.2015 at 17:16 +0100, Erik Joelsson wrote:
  On 2015-01-07 17:11, Roman Kennke wrote:
  Hi Erik,
 
  Do you have a bug for this?
  No.
 
  I haven't pushed any changes to JDK in a while. Is it possible in the
  meantime for me to create my own bugs? Otherwise, please file one for
  me :-)
  You should be able to log in to https://bugs.openjdk.java.net and create
  bugs since you have an OpenJDK identity.
  Done:
 
  https://bugs.openjdk.java.net/browse/JDK-8068598
 
  While I'm at it, is it possible for me to push my own changes (except
  hotspot of course)? If yes, what needs to be done for regenerating the
  configure files? Simply run autogen.sh in common/autoconf with whatever
  version of autotools I have? Or doesn't it make sense at all b/c you
  need to regenerate your closed scripts?
 It requires you to run common/autogen.sh yes, and that will require you 
 to have autoconf 2.69 installed. But since we also need to regenerate 
 the closed version, I can take care of the push for you. Will do it 
 tomorrow if that's ok?

Sure, please go ahead and push the top-level and JDK changes.

Thanks a lot!

Roman




Re: [PATCH] Fix Shark build in JDK9

2015-01-07 Thread Roman Kennke
On Wednesday, 07.01.2015 at 17:16 +0100, Erik Joelsson wrote:
 On 2015-01-07 17:11, Roman Kennke wrote:
  Hi Erik,
 
  Do you have a bug for this?
  No.
 
  I haven't pushed any changes to JDK in a while. Is it possible in the
  meantime for me to create my own bugs? Otherwise, please file one for
  me :-)
 You should be able to log in to https://bugs.openjdk.java.net and create 
 bugs since you have an OpenJDK identity.

Done:

https://bugs.openjdk.java.net/browse/JDK-8068598

While I'm at it, is it possible for me to push my own changes (except
hotspot of course)? If yes, what needs to be done for regenerating the
configure files? Simply run autogen.sh in common/autoconf with whatever
version of autotools I have? Or doesn't it make sense at all b/c you
need to regenerate your closed scripts?

Roman



Re: [PATCH] Fix Shark build in JDK9

2015-01-07 Thread Roman Kennke
Hi Erik,

  I made some fixes to the build machinery to be able to build Shark:
 
  http://cr.openjdk.java.net/~rkennke/shark-build-top/
  Looks fine, but the generated configure script needs to be generated by
  the script as it also updates a timestamp inside the file. We will still
  need to update the closed version of the generated script in sync with
  this. I will be happy to push both when review has passed if you like.
  That would be great.
 
  http://cr.openjdk.java.net/~rkennke/shark-build-hotspot/
  That looks like a simple typo. Looks good to me. This is in hotspot
  however so will need to go through a hotspot forest and requires 2
  reviewers.
  ok, I will finish my code-changes to hotspot and propose the whole bunch
  to hotspot-dev separately as you suggested.
 Sounds like a good idea, thanks!
  http://cr.openjdk.java.net/~rkennke/shark-build-jdk/
  Is the contents of the conditionals for SERVER, ZERO and ZEROSHARK the
  exact same? Perhaps change into one conditional like this?
  Please have a look at the updated simpler patch I posted right after the
  first one:
 
  http://cr.openjdk.java.net/~rkennke/shark-build-jdk/webrev.01/
 That indeed looks better.
 
 Do you have a bug for this?

No.

I haven't pushed any changes to JDK in a while. Is it possible in the
meantime for me to create my own bugs? Otherwise, please file one for
me :-)

Regards,
Roman




[PATCH] Fix Shark build in JDK9

2015-01-07 Thread Roman Kennke
Hello there,

I made some fixes to the build machinery to be able to build Shark:

http://cr.openjdk.java.net/~rkennke/shark-build-top/
http://cr.openjdk.java.net/~rkennke/shark-build-hotspot/
http://cr.openjdk.java.net/~rkennke/shark-build-jdk/

In particular, it does:
- Improve the sed command that figures out the LLVM version. LLVM changed
its version string from x.y to x.y.z. The command now correctly strips
x.y.z to xy; we're only interested in the first two numbers. I also
hand-changed the generated configure, please forgive me ;-)
- In hotspot's makefile, I fixed a typo, allowing to build with unzipped
debug info.
- In JDK's build, I added the ZEROSHARK variant to exclude the SA
generation.
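The version stripping from the first bullet can be sketched as a small shell function. The sed expression here just illustrates the x.y / x.y.z → xy idea; the exact command in the patch may differ:

```shell
# Keep only the first two numeric components of an LLVM version
# string, so both "3.4" and "3.4.2" become "34" (illustrative sed).
strip_llvm_version() {
  printf '%s\n' "$1" | sed 's/\([0-9]*\)\.\([0-9]*\).*/\1\2/'
}

strip_llvm_version "3.4"    # prints 34
strip_llvm_version "3.4.2"  # prints 34
```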

Notice that this alone doesn't make Shark build and run fine, it still
requires some code changes. Those are not related to build-dev though, I
will post them to the appropriate lists asap.

Can you please review the changes? I would like to push this to
build-infra/jdk9.

Regards,
Roman



Re: Support building zero with the new build

2013-04-03 Thread Roman Kennke
Yup, fine for me as well! (I'm not a reviewer though... just an opinion)

Roman

On Wednesday, 03.04.2013, 10:28 +0200, Erik Joelsson wrote:
 This looks good to me. Thanks!
 
 /Erik
 
 On 2013-04-02 23:17, Omair Majid wrote:
  Hi,
 
  Updated patch at:
  http://cr.openjdk.java.net/~omajid/webrevs/zero-newbuild/02/
 
  On 04/02/2013 05:21 AM, Erik Joelsson wrote:
  I know it's not always obvious where things belong but I would put the
  INCLUDE_SA logic in jdk-options.m4 somewhere close after the JVM_VARIANT
  variables have been assigned.
  How about right after it? :)
 
  Thanks,
  Omair
 




Re: Support building zero with the new build

2013-03-27 Thread Roman Kennke
On Wednesday, 27.03.2013, 09:55 +0100, Erik Joelsson wrote:
 Work on this was already initiated in this thread:
 
 http://mail.openjdk.java.net/pipermail/build-infra-dev/2013-February/003059.html
 
 Would be good if these efforts could be synchronized.
 
 I like that you define and translate these hotspot specific variables in 
 hotspot-spec.gmk. I don't think we need to change any variables in 
 hotspot for this to work as was done in the other thread. ZERO_ARCHDEF 
 should maybe belong to platform.m4, not sure, but might as well be 
 defined where you have it. However, checking for libraries would be 
 better moved to libraries.m4 I think.

Yeah, sorry that I did not follow up on 'my' thread. It kind of fizzled
out. What's blocking the patches from going in?

http://cr.openjdk.java.net/~rkennke/zero_build_infra_hotspot/webrev.01/

I think the JDK part needs to be revisited now that the way JARs are
built has changed, but this can be done independently:

http://cr.openjdk.java.net/~rkennke/zero-build-infra-jdk/webrev.01/

Roman





Re: [OpenJDK 2D-Dev] _LITTLE_ENDIAN

2009-06-23 Thread Roman Kennke

Hi,

first of all, I think this is an LCMS problem, so we should think about 
how to fix this _upstream_; otherwise we end up maintaining a patched 
version of lcms *shudder*.


Then I don't think it's a good idea to depend on such 'internal' 
defines. One OS defines _LITTLE_ENDIAN and another doesn't; others 
(like VxWorks) define it as either 0 or 1, depending on the system, and 
still others don't care and define __LITTLE_ENDIAN or whatever. IMO, a 
better way to solve this is to make LCMS check LCMS_LITTLE_ENDIAN, and 
define it somewhere. Configure-based builds would figure this out when 
running configure. OpenJDK would set this in the files mentioned below, 
instead of setting _LITTLE_ENDIAN. AFAICS, this is the only way to make 
sure we don't conflict with any internal settings of some OS include or 
compiler.



For better or worse, there seems to be
./common/Defs-linux.gmk:CFLAGS_REQUIRED_amd64   += 
-fno-omit-frame-pointer -D_LITTLE_ENDIAN
./common/Defs-linux.gmk:CFLAGS_REQUIRED_i586+= 
-fno-omit-frame-pointer -D_LITTLE_ENDIAN
./common/Defs-linux.gmk:CFLAGS_REQUIRED_ia64+= 
-fno-omit-frame-pointer -D_LITTLE_ENDIAN
./common/Defs-solaris.gmk:  # The macro _LITTLE_ENDIAN needs to be 
defined the same to avoid the
./common/Defs-solaris.gmk:  #   Sun C compiler warning message: warning: 
macro redefined: _LITTLE_ENDIAN
./common/Defs-solaris.gmk:  CPPFLAGS_COMMON +=  -DcpuIntel 
-D_LITTLE_ENDIAN= -D$(LIBARCH)

./common/Defs-windows.gmk:CPPFLAGS_COMMON = -DWIN32 -DIAL -D_LITTLE_ENDIAN
./netbeans/awt2d/README:D3D_OVERLOADS; UNICODE; _UNICODE; WIN32; 
IAL; _LITTLE_ENDIAN; WIN32; _X86_;



/Roman


Re: [OpenJDK 2D-Dev] _LITTLE_ENDIAN

2009-06-23 Thread Roman Kennke

Hi,


Someone would need to make darn sure that ALL uses of the old
name have been changed. I counted 362 references to this variable
in the openjdk7 repositories, I assume a similar number in openjdk6.
The closed jdk sources have an additional 13 references to this
_LITTLE_ENDIAN name Sun would need to change too.

Do you really want to change 362+ lines for this name change?


Surely not. I hadn't counted, and I blindly assumed we were talking about 
lcms only here. Sorry. In this case I would revert to #undef _LITTLE_ENDIAN 
or -U_LITTLE_ENDIAN on platforms that do crazy stuff. No code changes 
needed then...


/Roman



-kto


Roman Kennke wrote:

Hi,

first of all, I think this is a LCMS problem, so we should think about 
how to fix this _upstream_, otherwise we end up maintaining a patched 
version of lcms *shudder*.


Then I don't think it's a good idea to depend on such 'internal' 
defines. One OS defines or not _LITTLE_ENDIAN, others (like VxWorks) 
define it either as 0 or 1, depending on the systems, even others 
don't care and define __LITTLE_ENDIAN or whatever. IMO, a better way 
to solve this is to make LCMS check LCMS_LITTLE_ENDIAN, and define it 
somewhere. Configure based builds would figure this out when running 
configure. OpenJDK would set this in the files mentioned below, 
instead of setting _LITTLE_ENDIAN. AFAICS, this is the only way to 
make sure we don't conflict with any internal settings of some OS 
include or compiler.



For better or worse, there seems to be
./common/Defs-linux.gmk:CFLAGS_REQUIRED_amd64   += 
-fno-omit-frame-pointer -D_LITTLE_ENDIAN
./common/Defs-linux.gmk:CFLAGS_REQUIRED_i586+= 
-fno-omit-frame-pointer -D_LITTLE_ENDIAN
./common/Defs-linux.gmk:CFLAGS_REQUIRED_ia64+= 
-fno-omit-frame-pointer -D_LITTLE_ENDIAN
./common/Defs-solaris.gmk:  # The macro _LITTLE_ENDIAN needs to be 
defined the same to avoid the
./common/Defs-solaris.gmk:  #   Sun C compiler warning message: 
warning: macro redefined: _LITTLE_ENDIAN
./common/Defs-solaris.gmk:  CPPFLAGS_COMMON +=  -DcpuIntel 
-D_LITTLE_ENDIAN= -D$(LIBARCH)
./common/Defs-windows.gmk:CPPFLAGS_COMMON = -DWIN32 -DIAL 
-D_LITTLE_ENDIAN
./netbeans/awt2d/README:D3D_OVERLOADS; UNICODE; _UNICODE; WIN32; 
IAL; _LITTLE_ENDIAN; WIN32; _X86_;



/Roman




Re: Swing Dev Request for review: Bug 100054: Make building the Nimbus look 'n' feel optional

2009-05-15 Thread Roman Kennke
Hi,

 and don't like the idea of being able to disable Nimbus
 because of this dependency.

Too many negations and ablebables for my parser... Oops. ;-)

/Roman
-- 
Dipl.-Inform. (FH) Roman Kennke, Software Engineer, http://kennke.org
aicas Allerton Interworks Computer Automated Systems GmbH
Haid-und-Neu-Straße 18 * D-76131 Karlsruhe * Germany
http://www.aicas.com   * Tel: +49-721-663 968-48
USt-Id: DE216375633, Handelsregister HRB 109481, AG Karlsruhe
Geschäftsführer: Dr. James J. Hunt




Re: Building on ubuntu 8.04

2008-05-14 Thread Roman Kennke
Also, maybe you simply forgot make clean after you fixed your shell?

/Roman

On Wednesday, 14.05.2008, 12:56 -0700, Brad Wetmore wrote:
 I think you've got a problem still with your sh setup, or possibly 
 something else.  IIRC, the $replType$ variable is supposed to be 
 replaced with various values depending on whether you have an encoder or 
 decoder.  (sed/awk preprocessing = a Javaman's #ifdef equivalent)  It's 
 done a similar way for the {Byte,Char,Int...}Buffers.
 
 Check jdk/make/java/nio, specifically Makefile, spp.sh, and genCoder.sh
 
 Hope this helps.
 
 Brad
 
 
 Bram Somers wrote:
  Dear all,
  
  I've tried building jdk7 on Ubuntu 8.04. I'm aware of the problems with 
  sh/dash and I removed sh and made a symlink of sh to bash, so normally I 
  shouldn't get this error anymore:
  
  jdk7/build/openjdk_full_debug/gensrc/java/nio/charset/CharsetEncoder.java:142:
   
  cannot find symbol
  symbol  : class $replType$
  location: class java.nio.charset.CharsetEncoder
 private $replType$ replacement;
 ^
  /home/bsomers/workspaces/openjdk/jdk7/build/openjdk_full_debug/gensrc/java/nio/charset/CharsetEncoder.java:185:
   
  cannot find symbol
  symbol  : class $replType$
  location: class java.nio.charset.CharsetEncoder
$replType$ replacement)
^
  /home/bsomers/workspaces/openjdk/jdk7/build/openjdk_full_debug/gensrc/java/nio/charset/CharsetEncoder.java:246:
   
  cannot find symbol
  symbol  : class $replType$
  location: class java.nio.charset.CharsetEncoder
 public final $replType$ replacement() {
  ^
  /home/bsomers/workspaces/openjdk/jdk7/build/openjdk_full_debug/gensrc/java/nio/charset/CharsetEncoder.java:275:
   
  cannot find symbol
  symbol  : class $replType$
  location: class java.nio.charset.CharsetEncoder
 public final CharsetEncoder replaceWith($replType$ newReplacement) {
 ^
  /home/bsomers/workspaces/openjdk/jdk7/build/openjdk_full_debug/gensrc/java/nio/charset/CharsetEncoder.java:301:
   
  cannot find symbol
  symbol  : class $replType$
  location: class java.nio.charset.CharsetEncoder
 protected void implReplaceWith($replType$ newReplacement) {
^
  Note: Some input files use or override a deprecated API.
  Note: Recompile with -Xlint:deprecation for details.
  Note: Some input files use unchecked or unsafe operations.
  Note: Recompile with -Xlint:unchecked for details.
  5 errors
  
  
  Any more suggestions?
  
  Thanks in advance,
  
  kind regards,
  
  Bram
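The sed-based template preprocessing Brad describes can be sketched in a few lines of shell. Note this is a hypothetical illustration: the file names and the byte[] substitution value below are made up, not the real Makefile/spp.sh logic.

```shell
# Hypothetical sketch of spp.sh-style preprocessing: a $replType$
# placeholder in a template is rewritten by sed to a concrete type
# per coder variant. File names and the chosen type are illustrative.
cat > Coder.java.template <<'EOF'
private $replType$ replacement;
EOF
sed 's/\$replType\$/byte[]/g' Coder.java.template > CharsetEncoder.java
cat CharsetEncoder.java
# prints: private byte[] replacement;
```

If that step is skipped or mangled (e.g. by a dash /bin/sh running the generator scripts), the literal $replType$ survives into the generated source, which is exactly what the "cannot find symbol: class $replType$" errors above show.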
-- 
Dipl.-Inform. (FH) Roman Kennke, Software Engineer, http://kennke.org
aicas Allerton Interworks Computer Automated Systems GmbH
Haid-und-Neu-Straße 18 * D-76131 Karlsruhe * Germany
http://www.aicas.com   * Tel: +49-721-663 968-0
USt-Id: DE216375633, Handelsregister HRB 109481, AG Karlsruhe
Geschäftsführer: Dr. James J. Hunt



Re: Build problems on Ubuntu 8.04

2008-04-21 Thread Roman Kennke
Hi,

On Monday, 2008-04-21 at 19:10 +0200, Thorbjørn Ravn Andersen wrote:
 Kelly O'Hair wrote on 21-04-2008 18:11:
 
 
  Roman Kennke wrote:
  Hi there,
 
  I've just setup a box with the soon-to-be-released Ubuntu 8.04, and see
  a number of problems:
 
  Minor: make sanity does not check for gawk and c++, although the build
  requires them. Maybe there are some more of these, but this is what I
  found.
 
 
  We never have done sanity checks on all the unix command tools.
  The sanity checks on the compilers usually involved finding the version
  number and verifying it, but even then it might only result in a warning.
  Over the past few years, most of the fatal sanity checks have been
  relaxed to warnings to allow for maximum flexibility. Most compilers
  and tools between Solaris, Windows, Linux, etc. are rather inconsistent
  when it comes to getting their version string.
  But I suppose we could always check for at least their existence.
 
  Verifying the Linux package list is very difficult to maintain for all 
  the
  variations of Linux.
 
  I'm open to solutions here.
 Would a reasonable approach be setting up a configure script (GNU 
 autoconf) to detect the appropriate things that will work (e.g. using 
 bash instead of dash, whether gawk or awk is available), etc., and then 
 use the output from that?

I think that's a great idea, and we could look at what IcedTea does in
this respect...
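As a rough illustration of the kind of detection such a configure script would do (a hypothetical sketch, not IcedTea's actual configure logic):

```shell
# Hypothetical sketch of autoconf-style tool detection: prefer gawk,
# fall back to plain awk, and fail loudly if neither is in PATH.
AWK=
for candidate in gawk awk; do
  if command -v "$candidate" >/dev/null 2>&1; then
    AWK=$candidate
    break
  fi
done
: "${AWK:?neither gawk nor awk found in PATH}"
echo "using AWK=$AWK"
```

A real configure script would record the result in a generated Makefile fragment instead of echoing it.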

Cheers, Roman

-- 
Dipl.-Inform. (FH) Roman Kennke, Software Engineer, http://kennke.org
aicas Allerton Interworks Computer Automated Systems GmbH
Haid-und-Neu-Straße 18 * D-76131 Karlsruhe * Germany
http://www.aicas.com   * Tel: +49-721-663 968-0
USt-Id: DE216375633, Handelsregister HRB 109481, AG Karlsruhe
Geschäftsführer: Dr. James J. Hunt



Re: [jdk]: Missing classes from the java.awt.color package

2008-03-31 Thread Roman Kennke
Hi Francis,


 I am building OpenJDK on WXP SP2 with source trees obtained from the Mercurial 
 repository and got a lot of compile errors about missing classes from the 
 java.awt.color package, like java.awt.color.ColorSpace.
 
 Did I miss something??

Yeah, you need the matching binary blobs from:

http://download.java.net/openjdk/jdk7/

These contain a (relatively) small bunch of pieces that are not (yet)
open-sourced.

/Roman

-- 
Dipl.-Inform. (FH) Roman Kennke, Software Engineer, http://kennke.org
aicas Allerton Interworks Computer Automated Systems GmbH
Haid-und-Neu-Straße 18 * D-76131 Karlsruhe * Germany
http://www.aicas.com   * Tel: +49-721-663 968-0
USt-Id: DE216375633, Handelsregister HRB 109481, AG Karlsruhe
Geschäftsführer: Dr. James J. Hunt



Make target for compiling java classes only?

2008-01-10 Thread Roman Kennke
Hi there,

I'm looking for a make target, that compiles only the Java classes of
OpenJDK. Is there a way to do that? Thanks,

/Roman

-- 
http://kennke.org/blog/
-- 
Dipl.-Inform. (FH) Roman Kennke, Software Engineer, http://kennke.org
aicas Allerton Interworks Computer Automated Systems GmbH
Haid-und-Neu-Straße 18 * D-76131 Karlsruhe * Germany
http://www.aicas.com   * Tel: +49-721-663 968-0
USt-Id: DE216375633, Handelsregister HRB 109481, AG Karlsruhe
Geschäftsführer: Dr. James J. Hunt



Re: Group forests created

2007-11-08 Thread Roman Kennke
Hi Mark,

 I've created and loaded a set of experimental group integration forests
 on hg.openjdk.java.net.  These are the forests to which individual
 developers will push their changes.  Group integrators are responsible
 for periodically syncing with the master forest and pushing changes up
 to the master after suitable testing.

Great news! Thank you so much.

/Roman

-- 
Dipl.-Inform. (FH) Roman Kennke, Software Engineer, http://kennke.org
aicas Allerton Interworks Computer Automated Systems GmbH
Haid-und-Neu-Straße 18 * D-76131 Karlsruhe * Germany
http://www.aicas.com   * Tel: +49-721-663 968-0
USt-Id: DE216375633, Handelsregister HRB 109481, AG Karlsruhe
Geschäftsführer: Dr. James J. Hunt



Re: sun.org.mozilla?

2007-11-05 Thread Roman Kennke
Still no thoughts about this? The b23 drop also doesn't have such
classes, and they are not in the built JAR either. I suspect OpenJDK is
building against the classes from the boot JDK, but doesn't actually
ship these classes itself. This would lead to javax.script not
working correctly, I guess.

/Roman

On Friday, 2007-10-26 at 11:45 +0200, Roman Kennke wrote:
 Hi there,
 
 where are the sources or classes for the sun.org.mozilla.* packages?
 They are referenced from com.sun.script.* but I can't find them anywhere
 in the source tree. Any hints?
 
 /Roman
 
-- 
Dipl.-Inform. (FH) Roman Kennke, Software Engineer, http://kennke.org
aicas Allerton Interworks Computer Automated Systems GmbH
Haid-und-Neu-Straße 18 * D-76131 Karlsruhe * Germany
http://www.aicas.com   * Tel: +49-721-663 968-0
USt-Id: DE216375633, Handelsregister HRB 109481, AG Karlsruhe
Geschäftsführer: Dr. James J. Hunt



sun.org.mozilla?

2007-10-26 Thread Roman Kennke
Hi there,

where are the sources or classes for the sun.org.mozilla.* packages?
They are referenced from com.sun.script.* but I can't find them anywhere
in the source tree. Any hints?

/Roman

-- 
Dipl.-Inform. (FH) Roman Kennke, Software Engineer, http://kennke.org
aicas Allerton Interworks Computer Automated Systems GmbH
Haid-und-Neu-Straße 18 * D-76131 Karlsruhe * Germany
http://www.aicas.com   * Tel: +49-721-663 968-0
USt-Id: DE216375633, Handelsregister HRB 109481, AG Karlsruhe
Geschäftsführer: Dr. James J. Hunt