Re: RFR(M) 8152988: [AOT] Update test batch definitions to include aot-ed java.base module mode into hs-comp testing

2018-11-01 Thread Ekaterina Pavlova

yes, it is at the same location:
 http://cr.openjdk.java.net/~epavlova//8152988/webrev.01/

-katya

On 11/1/18 3:15 PM, Magnus Ihse Bursie wrote:

Do you have an updated webrev?

/Magnus


On 1 Nov 2018, at 22:28, Ekaterina Pavlova wrote:

Magnus,

thanks for the review.
I fixed everything you pointed to.

thanks,
-katya


On 11/1/18 6:46 AM, Magnus Ihse Bursie wrote:
Hi Katya,
Sorry for the late response. :-(

On 2018-10-29 21:24, Ekaterina Pavlova wrote:
Vladimir,
I added "-XX:+UseAOTStrictLoading" to "java -version", see make/RunTests.gmk.
Also added "--compile-with-assertion" to jaotc flags in case "-ea" flag was 
present in vm options
(otherwise AOTed library will warn that it does not have java assertions in 
compiled code).

This code:
+ ifneq ($$(findstring -ea, $$($1_VM_OPTIONS)), )
Using findstring is somewhat brittle here, since it will find substrings as 
well. Better to use filter which will match an exact word.
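For illustration, a minimal GNU Make sketch of the difference (DEMO_OPTS and the option names are hypothetical, not from the patch):

DEMO_OPTS := -eagerinit -Xmx1g
# $(findstring -ea, $(DEMO_OPTS)) -> "-ea"  (substring match inside "-eagerinit", a false positive)
# $(filter -ea, $(DEMO_OPTS))     -> ""     (no whitespace-separated word is exactly "-ea")
ifneq ($(filter -ea, $(DEMO_OPTS)), )
  # only reached when -ea is passed as its own option
endif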


I will add -XX:+UseAOTStrictLoading and "-Xlog:aot+class+fingerprint=trace 
-Xlog:aot+class+load=trace"
directly to AOT testing tasks in mach5 jobs.

Magnus,
I changed
  "$$(addprefix -vmoption:, $$($1_AOT_OPTIONS))"
to
+  ifneq ($$($1_AOT_OPTIONS), )
+$1_JTREG_BASIC_OPTIONS += -vmoptions:"$$($1_AOT_OPTIONS)"
+  endif

Please review updated webrev:
http://cr.openjdk.java.net/~epavlova//8152988/webrev.01/

Some fallout from my and Erik's discussion still needs implementing in 
RunTests.gmk:
$$(addprefix --compile-commands , $$($1_AOT_CCLIST)) \
Please rewrite as:
$$(addprefix --compile-commands$(SPACE), $$($1_AOT_CCLIST)) \
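For reference, a minimal sketch of what the rewritten line expands to (FILES is a hypothetical variable; EMPTY/SPACE are assumed to be defined as in the build system's make helpers):

EMPTY :=
SPACE := $(EMPTY) $(EMPTY)
FILES := a.list b.list
# $(addprefix --compile-commands$(SPACE), $(FILES))
#   -> --compile-commands a.list --compile-commands b.list
# and expands to nothing when FILES is empty. The behavior is the same as with a
# literal trailing space before the comma, but the intent is visible and harder
# to lose in later edits.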
I'm still not happy with the always printed "java -version". :( Vladimir agreed 
that it would be reasonable to redirect this into a file. This can be done by e.g.
+ $$(call ExecuteWithLog, $$@.check, \
+ $$(FIXPATH) $$(JDK_IMAGE_DIR)/bin/java \
+ $$($1_VM_OPTIONS) -XX:+PrintAOT -XX:+UseAOTStrictLoading -XX:AOTLibrary=$$@ -version \
+ > $$@.verify-aot
+ )
This will redirect the output to $($1_AOT_LIB).verify-aot in test-support/aot.
I still would like to see the if statement around SetupAot moved out, like this:
+ ifneq ($$(GTEST_AOT_MODULES), )
+   $$(eval $$(call SetupAot, $1, \
+       MODULES := $$(GTEST_AOT_MODULES), \
+       VM_OPTIONS := $$(GTEST_VM_OPTIONS) $$(GTEST_JAVA_OPTIONS), \
+   ))
+ endif
and similarly for JTREG. This is coupled with removing the "ifneq ($$($1_MODULES), 
)" in SetupAotBody (and do not forget to un-indent two spaces).
/Magnus


thanks,
-katya



On 10/23/18 4:40 PM, Vladimir Kozlov wrote:

On 10/22/18 9:36 AM, Ekaterina Pavlova wrote:
Vladimir,

thanks a lot for the review.
"-XX:+PrintAOT" was only passed to "java -version" which is done before the 
testing to verify that AOTed libraries work.


If it is only for -version then using  -XX:+PrintAOT is okay.


It could also perhaps be useful debug info in case some tests start to fail.
But I can change -XX:+PrintAOT to -XX:+UseAOTStrictLoading if you think that is more than enough.


-XX:+UseAOTStrictLoading is more important for tests which have different flags 
specified and may invalidate AOTLibrary configuration. We would want to catch 
tests which will not use AOT libraries, at least when we test your changes.



Do you suggest to add "-Xlog:aot+class+fingerprint=trace -Xlog:aot+class+load=trace" to 
"java -version" only
or to java flags used during tests execution as well?


These options are the next step after -XX:+UseAOTStrictLoading. We were asked 
before how we can guarantee that AOTed methods are used by tests. One way is 
PrintAOT, which should show published AOT methods. Another way is the -Xlog flags, 
which have less output but provide at least some level of information that 
AOTed classes are used and not skipped.

Thanks,
Vladimir




-katya


On 10/19/18 10:40 AM, Vladimir Kozlov wrote:
I share the concern about the output of -XX:+PrintAOT - it prints all AOT classes and 
methods. For AOTed java.base the output will be almost the same for all tests 
(depending on which java.base classes are used).

I would suggest instead to use -XX:+UseAOTStrictLoading, which makes the VM exit(1) 
if it can't load the AOT library for some reason (mismatched configuration - different 
GCs).

Also -Xlog:aot+class+fingerprint=trace to trace cases of mismatching class 
fingerprints.
Also -Xlog:aot+class+load=trace to trace AOT class loading, or the skipping of classes 
that do not match.

And I agree to redirect this into a file.
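For reference, a rough sketch of how these flags could be combined on the verification run, in RunTests.gmk terms (the AOT_LIB_PLACEHOLDER variable and the log file name are placeholders, not from the patch):

$(FIXPATH) $(JDK_IMAGE_DIR)/bin/java \
    -XX:AOTLibrary=$(AOT_LIB_PLACEHOLDER) -XX:+UseAOTStrictLoading \
    -Xlog:aot+class+fingerprint=trace -Xlog:aot+class+load=trace \
    -version > $(AOT_LIB_PLACEHOLDER).verify-aot 2>&1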

Thanks,
Vladimir


On 10/19/18 9:04 AM, Erik Joelsson wrote:
Hello,

I will answer the parts I'm responsible for.


On 2018-10-19 00:21, Magnus Ihse Bursie wrote:

In RunTests.gmk, the following line is repeated:
# Note, this could not be done during JDK build time.

In the options:
$1_JAOTC_OPTS := \
-J-Xmx4g --info \
-Xmx4g is huge, much larger than what we typically use. Is this intentional? 
This will risk crashing on machines with too little free memory. Will AOT fail 
to work with less memory?


There's an extra space before the comma here:
$$(addprefix --compile-commands , $$($1_AOT_CCLIST)) \

Re: RFR(S): remove ExecuteInternalVMTests and VerboseInternalVMTests flags

2018-11-01 Thread Erik Joelsson

Looks good.

/Erik


On 2018-11-01 16:37, Igor Ignatyev wrote:
on a second thought, removing these macros isn't that big; here is an 
incremental webrev: 
http://cr.openjdk.java.net/~iignatyev//8213058/webrev.0-1
Besides removing test_log, it also includes fixes in build and doc 
needed due to rebasing.


Erik, could you please re-review build changes?

http://cr.openjdk.java.net/~iignatyev//8213058/webrev.01/ is the whole webrev.


Thanks,
-- Igor


On Nov 1, 2018, at 4:23 PM, Igor Ignatyev wrote:


Hi David,

removing usage of test_log will just mix "unneeded" changes w/ this 
clean up. as TestReservedSpace_test, TestReserveMemorySpecial_test, 
TestVirtualSpace_test, and TestMetaspaceUtils_test are to be removed 
by 8213269[*], I don't think we need to pay much attention to their code.


[*] https://bugs.openjdk.java.net/browse/JDK-8213269 



-- Igor

On Nov 1, 2018, at 4:16 PM, David Holmes wrote:


Hi Igor,

There's no point having empty test_log macros that do nothing. The 
macro and all uses should just be deleted ... unless you plan on 
adding some other form of logging for this?


Thanks,
David

On 2/11/2018 7:15 AM, Igor Ignatyev wrote:
http://cr.openjdk.java.net/~iignatyev//8213058/webrev.00/index.html 


174 lines changed: 0 ins; 170 del; 4 mod;

Hi all,
could you please review this small clean up which removes 
ExecuteInternalVMTests and VerboseInternalVMTests flags and related 
make targets and tests?
8177708[1-3] is to convert the last of internal vm tests, so the 
whole InternalVMTests can be removed.

JBS: https://bugs.openjdk.java.net/browse/JDK-8213058
webrev: 
http://cr.openjdk.java.net/~iignatyev//8213058/webrev.00/index.html

testing: tier1, build all regular platforms
[1] https://bugs.openjdk.java.net/browse/JDK-8177708
[2] http://cr.openjdk.java.net/~iignatyev//8177708/webrev.00/index.html
[3] 
http://mail.openjdk.java.net/pipermail/hotspot-runtime-dev/2018-October/030633.html

Thanks,
-- Igor








Re: Stop using precompiled headers for Linux?

2018-11-01 Thread David Holmes

On 2/11/2018 9:53 AM, Ioi Lam wrote:
Maybe precompiled.hpp can be periodically (weekly?) updated by a robot, 
which parses the dependency files generated by gcc and picks the most 
popular N files?


Do we need to use precompiled.hpp at all? Can we not use the list of 
files contained in precompiled.hpp and precompile them individually?


Might be an interesting experiment.

David


- Ioi


On 11/1/18 4:38 PM, David Holmes wrote:
It's not at all obvious to me that the way we use PCH is the 
right/best way to use it. We dump every header we think it would be 
good to precompile into precompiled.hpp and then only ask gcc to 
precompile it. That results in a ~250MB file that has to be read into 
and processed for every source file! That doesn't seem very efficient 
to me.


Cheers,
David

On 2/11/2018 3:18 AM, Erik Joelsson wrote:

Hello,

My point here, which wasn't very clear, is that Mac and Linux seem to 
lose just as much real compile time. The big difference in these 
tests was rather the number of cpus in the machine (32 threads in the 
linux box vs 8 on the mac). The total amount of work done was 
increased when PCH was disabled; that's the user time. Here is my 
theory on why the real (wall clock) time was not consistent with the user 
time between these experiments:


With pch the time line (simplified) looks like this:

1. Single thread creating PCH
2. All cores compiling C++ files

When disabling pch it's just:

1. All cores compiling C++ files

To gain speed with PCH, the time spent in 1 must be less than the 
time saved in 2. The potential time saved in 2 goes down as the 
number of cpus goes up. I'm pretty sure that if I repeated the 
experiment on Linux on a smaller box (typically one we use in CI), 
the results would look similar to Macosx, and similarly, if I had 
access to a much bigger mac, it would behave like the big Linux box. 
This is why I'm saying this should be done for both or none of these 
platforms.
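As a rough illustration with assumed round numbers (not measurements): suppose building the PCH takes 90 s on one core and using it saves 2 s on each of 1000 C++ files, i.e. 2000 CPU-seconds of work avoided.

wall-clock saving on  8 cores: 2000 s / 8  = 250 s   > 90 s  -> PCH pays off
wall-clock saving on 32 cores: 2000 s / 32 = 62.5 s  < 90 s  -> PCH is a net loss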


In addition to this, the experiment only built hotspot. If we 
would instead build the whole JDK, then the time wasted in 1 in the 
PCH case would be negated to a large extent by other build targets 
running concurrently, so for a full build, PCH is still providing value.


The question here is that if the value of PCH isn't very big, perhaps 
it's not worth it if it's also creating as much grief as described 
here. There is no doubt that there is value however. And given the 
examination done by Magnus, it seems this value could be increased.


The main reason why we haven't disabled PCH in CI before is this: we 
really, really want CI builds to be fast. We don't have a ton of over 
capacity to just throw at it. PCH made builds faster, so we used 
them. My other reason is consistency between builds. Supporting 
multiple different modes of building creates the potential for 
inconsistencies. For that reason I would definitely not support 
having PCH on by default, but turned off in our CI/dev-submit. We 
pick one or the other as the official build configuration, and we 
stick with the official build configuration for all builds of any 
official capacity (which includes CI).


In the current CI setup, we have a bunch of tiers that execute one 
after the other. The jdk-submit currently only runs tier1. In tier2 
I've put slowdebug builds with PCH disabled, just to help verify a 
common developer configuration. These builds are not meant to be used 
for testing or anything like that, they are just run for 
verification, which is why this is ok. We could argue that it would 
make sense to move the linux-x64-slowdebug without pch build to tier1 
so that it's included in dev-submit.


/Erik

On 2018-11-01 03:38, Magnus Ihse Bursie wrote:



On 2018-10-31 00:54, Erik Joelsson wrote:
Below are the corresponding numbers from a Mac, (Mac Pro (Late 
2013), 3.7 GHz, Quad-Core Intel Xeon E5, 16 GB). To be clear, the 
-npch is without precompiled headers. Here we see a slight 
degradation when disabling on both user time and wall clock time. 
My guess is that the user time increase is about the same, but 
because of a lower cpu count, the extra load is not as easily covered.


These tests were run with just building hotspot. This means that 
the precompiled header is generated alone on one core while nothing 
else is happening, which would explain this degradation in build 
speed. If we were instead building the whole product, we would see 
a better correlation between user and real time.


Given the very small benefit here, it could make sense to disable 
precompiled headers by default for Linux and Mac, just as we did 
with ccache.


I do know that the benefit is huge on Windows though, so we cannot 
remove the feature completely. Any other comments?


Well, if you show that it is a loss in time on macosx to disable 
precompiled headers, and no-one (as far as I've seen) has complained 
about PCH on mac, then why not keep them on as default there? That 
the gain is small is no 

Re: Stop using precompiled headers for Linux?

2018-11-01 Thread Ioi Lam
Maybe precompiled.hpp can be periodically (weekly?) updated by a robot, 
which parses the dependency files generated by gcc and picks the most 
popular N files?


- Ioi


On 11/1/18 4:38 PM, David Holmes wrote:
It's not at all obvious to me that the way we use PCH is the 
right/best way to use it. We dump every header we think it would be 
good to precompile into precompiled.hpp and then only ask gcc to 
precompile it. That results in a ~250MB file that has to be read into 
and processed for every source file! That doesn't seem very efficient 
to me.


Cheers,
David

On 2/11/2018 3:18 AM, Erik Joelsson wrote:

Hello,

My point here, which wasn't very clear, is that Mac and Linux seem to 
lose just as much real compile time. The big difference in these 
tests was rather the number of cpus in the machine (32 threads in the 
linux box vs 8 on the mac). The total amount of work done was 
increased when PCH was disabled; that's the user time. Here is my 
theory on why the real (wall clock) time was not consistent with the user 
time between these experiments:


With pch the time line (simplified) looks like this:

1. Single thread creating PCH
2. All cores compiling C++ files

When disabling pch it's just:

1. All cores compiling C++ files

To gain speed with PCH, the time spent in 1 must be less than the 
time saved in 2. The potential time saved in 2 goes down as the 
number of cpus goes up. I'm pretty sure that if I repeated the 
experiment on Linux on a smaller box (typically one we use in CI), 
the results would look similar to Macosx, and similarly, if I had 
access to a much bigger mac, it would behave like the big Linux box. 
This is why I'm saying this should be done for both or none of these 
platforms.


In addition to this, the experiment only built hotspot. If we 
would instead build the whole JDK, then the time wasted in 1 in the 
PCH case would be negated to a large extent by other build targets 
running concurrently, so for a full build, PCH is still providing value.


The question here is that if the value of PCH isn't very big, perhaps 
it's not worth it if it's also creating as much grief as described 
here. There is no doubt that there is value however. And given the 
examination done by Magnus, it seems this value could be increased.


The main reason why we haven't disabled PCH in CI before is this: we 
really, really want CI builds to be fast. We don't have a ton of over 
capacity to just throw at it. PCH made builds faster, so we used 
them. My other reason is consistency between builds. Supporting 
multiple different modes of building creates the potential for 
inconsistencies. For that reason I would definitely not support 
having PCH on by default, but turned off in our CI/dev-submit. We 
pick one or the other as the official build configuration, and we 
stick with the official build configuration for all builds of any 
official capacity (which includes CI).


In the current CI setup, we have a bunch of tiers that execute one 
after the other. The jdk-submit currently only runs tier1. In tier2 
I've put slowdebug builds with PCH disabled, just to help verify a 
common developer configuration. These builds are not meant to be used 
for testing or anything like that, they are just run for 
verification, which is why this is ok. We could argue that it would 
make sense to move the linux-x64-slowdebug without pch build to tier1 
so that it's included in dev-submit.


/Erik

On 2018-11-01 03:38, Magnus Ihse Bursie wrote:



On 2018-10-31 00:54, Erik Joelsson wrote:
Below are the corresponding numbers from a Mac, (Mac Pro (Late 
2013), 3.7 GHz, Quad-Core Intel Xeon E5, 16 GB). To be clear, the 
-npch is without precompiled headers. Here we see a slight 
degradation when disabling on both user time and wall clock time. 
My guess is that the user time increase is about the same, but 
because of a lower cpu count, the extra load is not as easily covered.


These tests were run with just building hotspot. This means that 
the precompiled header is generated alone on one core while nothing 
else is happening, which would explain this degradation in build 
speed. If we were instead building the whole product, we would see 
a better correlation between user and real time.


Given the very small benefit here, it could make sense to disable 
precompiled headers by default for Linux and Mac, just as we did 
with ccache.


I do know that the benefit is huge on Windows though, so we cannot 
remove the feature completely. Any other comments?


Well, if you show that it is a loss in time on macosx to disable 
precompiled headers, and no-one (as far as I've seen) has complained 
about PCH on mac, then why not keep them on as default there? That 
the gain is small is no argument to lose it. (I remember a time when 
you were hunting seconds in the build time ;-))


On linux, the story seems different, though. People experience PCH 
as a problem, and there is a net loss of time, at least on 

Re: RFR(S): remove ExecuteInternalVMTests and VerboseInternalVMTests flags

2018-11-01 Thread David Holmes

On 2/11/2018 9:37 AM, Igor Ignatyev wrote:
on a second thought, removing these macros isn't that big, here is an 
incremental webrev: 
http://cr.openjdk.java.net/~iignatyev//8213058/webrev.0-1 


:) Thanks. Looks good!

David

Besides removing test_log, it also includes fixes in build and doc needed due to 
rebasing.


Erik, could you please re-review build changes?

http://cr.openjdk.java.net/~iignatyev//8213058/webrev.01/ is the whole webrev.


Thanks,
-- Igor


On Nov 1, 2018, at 4:23 PM, Igor Ignatyev wrote:


Hi David,

removing usage of test_log will just mix "unneeded" changes w/ this 
clean up. as TestReservedSpace_test, TestReserveMemorySpecial_test, 
TestVirtualSpace_test, and TestMetaspaceUtils_test are to be removed 
by 8213269[*], I don't think we need to pay much attention to their code.


[*] https://bugs.openjdk.java.net/browse/JDK-8213269 



-- Igor

On Nov 1, 2018, at 4:16 PM, David Holmes wrote:


Hi Igor,

There's no point having empty test_log macros that do nothing. The 
macro and all uses should just be deleted ... unless you plan on 
adding some other form of logging for this?


Thanks,
David

On 2/11/2018 7:15 AM, Igor Ignatyev wrote:
http://cr.openjdk.java.net/~iignatyev//8213058/webrev.00/index.html 


174 lines changed: 0 ins; 170 del; 4 mod;

Hi all,
could you please review this small clean up which removes 
ExecuteInternalVMTests and VerboseInternalVMTests flags and related 
make targets and tests?
8177708[1-3] is to convert the last of internal vm tests, so the 
whole InternalVMTests can be removed.

JBS: https://bugs.openjdk.java.net/browse/JDK-8213058
webrev: 
http://cr.openjdk.java.net/~iignatyev//8213058/webrev.00/index.html

testing: tier1, build all regular platforms
[1] https://bugs.openjdk.java.net/browse/JDK-8177708
[2] http://cr.openjdk.java.net/~iignatyev//8177708/webrev.00/index.html
[3] 
http://mail.openjdk.java.net/pipermail/hotspot-runtime-dev/2018-October/030633.html

Thanks,
-- Igor






Re: Stop using precompiled headers for Linux?

2018-11-01 Thread David Holmes
It's not at all obvious to me that the way we use PCH is the right/best 
way to use it. We dump every header we think it would be good to 
precompile into precompiled.hpp and then only ask gcc to precompile it. 
That results in a ~250MB file that has to be read into and processed for 
every source file! That doesn't seem very efficient to me.


Cheers,
David

On 2/11/2018 3:18 AM, Erik Joelsson wrote:

Hello,

My point here, which wasn't very clear, is that Mac and Linux seem to 
lose just as much real compile time. The big difference in these tests 
was rather the number of cpus in the machine (32 threads in the linux 
box vs 8 on the mac). The total amount of work done was increased when 
PCH was disabled; that's the user time. Here is my theory on why the 
real (wall clock) time was not consistent with the user time between these 
experiments:


With pch the time line (simplified) looks like this:

1. Single thread creating PCH
2. All cores compiling C++ files

When disabling pch it's just:

1. All cores compiling C++ files

To gain speed with PCH, the time spent in 1 must be less than the time 
saved in 2. The potential time saved in 2 goes down as the number of 
cpus goes up. I'm pretty sure that if I repeated the experiment on Linux 
on a smaller box (typically one we use in CI), the results would look 
similar to Macosx, and similarly, if I had access to a much bigger mac, 
it would behave like the big Linux box. This is why I'm saying this 
should be done for both or none of these platforms.


In addition to this, the experiment only built hotspot. If we would 
instead build the whole JDK, then the time wasted in 1 in the PCH case 
would be negated to a large extent by other build targets running 
concurrently, so for a full build, PCH is still providing value.


The question here is that if the value of PCH isn't very big, perhaps 
it's not worth it if it's also creating as much grief as described here. 
There is no doubt that there is value however. And given the examination 
done by Magnus, it seems this value could be increased.


The main reason why we haven't disabled PCH in CI before is this: we really, 
really want CI builds to be fast. We don't have a ton of over capacity 
to just throw at it. PCH made builds faster, so we used them. My other 
reason is consistency between builds. Supporting multiple different 
modes of building creates the potential for inconsistencies. For that 
reason I would definitely not support having PCH on by default, but 
turned off in our CI/dev-submit. We pick one or the other as the 
official build configuration, and we stick with the official build 
configuration for all builds of any official capacity (which includes CI).


In the current CI setup, we have a bunch of tiers that execute one after 
the other. The jdk-submit currently only runs tier1. In tier2 I've put 
slowdebug builds with PCH disabled, just to help verify a common 
developer configuration. These builds are not meant to be used for 
testing or anything like that, they are just run for verification, which 
is why this is ok. We could argue that it would make sense to move the 
linux-x64-slowdebug without pch build to tier1 so that it's included in 
dev-submit.


/Erik

On 2018-11-01 03:38, Magnus Ihse Bursie wrote:



On 2018-10-31 00:54, Erik Joelsson wrote:
Below are the corresponding numbers from a Mac, (Mac Pro (Late 2013), 
3.7 GHz, Quad-Core Intel Xeon E5, 16 GB). To be clear, the -npch is 
without precompiled headers. Here we see a slight degradation when 
disabling on both user time and wall clock time. My guess is that the 
user time increase is about the same, but because of a lower cpu 
count, the extra load is not as easily covered.


These tests were run with just building hotspot. This means that the 
precompiled header is generated alone on one core while nothing else 
is happening, which would explain this degradation in build speed. If 
we were instead building the whole product, we would see a better 
correlation between user and real time.


Given the very small benefit here, it could make sense to disable 
precompiled headers by default for Linux and Mac, just as we did with 
ccache.


I do know that the benefit is huge on Windows though, so we cannot 
remove the feature completely. Any other comments?


Well, if you show that it is a loss in time on macosx to disable 
precompiled headers, and no-one (as far as I've seen) has complained 
about PCH on mac, then why not keep them on as default there? That the 
gain is small is no argument to lose it. (I remember a time when you 
were hunting seconds in the build time ;-))


On linux, the story seems different, though. People experience PCH as 
a problem, and there is a net loss of time, at least on selected 
testing machines. It makes sense to turn it off as default, then.


/Magnus



/Erik

macosx-x64
real     4m13.658s
user     27m17.595s
sys     2m11.306s

macosx-x64-npch
real     4m27.823s
user     30m0.434s

Re: RFR(S): remove ExecuteInternalVMTests and VerboseInternalVMTests flags

2018-11-01 Thread Igor Ignatyev
on a second thought, removing these macros isn't that big; here is an 
incremental webrev: http://cr.openjdk.java.net/~iignatyev//8213058/webrev.0-1. 
Besides removing test_log, it also includes fixes in build and doc needed due to 
rebasing.

Erik, could you please re-review build changes?

http://cr.openjdk.java.net/~iignatyev//8213058/webrev.01/ is the whole webrev.

Thanks,
-- Igor


> On Nov 1, 2018, at 4:23 PM, Igor Ignatyev  wrote:
> 
> Hi David,
> 
> removing usage of test_log will just mix "unneeded" changes w/ this clean up. 
> as TestReservedSpace_test, TestReserveMemorySpecial_test, 
> TestVirtualSpace_test, and TestMetaspaceUtils_test are to be removed by 
> 8213269[*], I don't think we need to pay much attention to their code.
> 
> [*] https://bugs.openjdk.java.net/browse/JDK-8213269 
> 
> 
> -- Igor
> 
>> On Nov 1, 2018, at 4:16 PM, David Holmes  wrote:
>> 
>> Hi Igor,
>> 
>> There's no point having empty test_log macros that do nothing. The macro and 
>> all uses should just be deleted ... unless you plan on adding some other 
>> form of logging for this?
>> 
>> Thanks,
>> David
>> 
>> On 2/11/2018 7:15 AM, Igor Ignatyev wrote:
>>> http://cr.openjdk.java.net/~iignatyev//8213058/webrev.00/index.html
 174 lines changed: 0 ins; 170 del; 4 mod;
>>> Hi all,
>>> could you please review this small clean up which removes 
>>> ExecuteInternalVMTests and VerboseInternalVMTests flags and related make 
>>> targets and tests?
>>> 8177708[1-3] is to convert the last of internal vm tests, so the whole 
>>> InternalVMTests can be removed.
>>> JBS: https://bugs.openjdk.java.net/browse/JDK-8213058
>>> webrev: http://cr.openjdk.java.net/~iignatyev//8213058/webrev.00/index.html
>>> testing: tier1, build all regular platforms
>>> [1] https://bugs.openjdk.java.net/browse/JDK-8177708
>>> [2] http://cr.openjdk.java.net/~iignatyev//8177708/webrev.00/index.html
>>> [3] 
>>> http://mail.openjdk.java.net/pipermail/hotspot-runtime-dev/2018-October/030633.html
>>> Thanks,
>>> -- Igor
> 



Re: RFR(S): remove ExecuteInternalVMTests and VerboseInternalVMTests flags

2018-11-01 Thread Igor Ignatyev
Hi David,

removing usage of test_log will just mix "unneeded" changes w/ this clean up. 
as TestReservedSpace_test, TestReserveMemorySpecial_test, 
TestVirtualSpace_test, and TestMetaspaceUtils_test are to be removed by 
8213269[*], I don't think we need to pay much attention to their code.

[*] https://bugs.openjdk.java.net/browse/JDK-8213269 


-- Igor

> On Nov 1, 2018, at 4:16 PM, David Holmes  wrote:
> 
> Hi Igor,
> 
> There's no point having empty test_log macros that do nothing. The macro and 
> all uses should just be deleted ... unless you plan on adding some other form 
> of logging for this?
> 
> Thanks,
> David
> 
> On 2/11/2018 7:15 AM, Igor Ignatyev wrote:
>> http://cr.openjdk.java.net/~iignatyev//8213058/webrev.00/index.html
>>> 174 lines changed: 0 ins; 170 del; 4 mod;
>> Hi all,
>> could you please review this small clean up which removes 
>> ExecuteInternalVMTests and VerboseInternalVMTests flags and related make 
>> targets and tests?
>> 8177708[1-3] is to convert the last of internal vm tests, so the whole 
>> InternalVMTests can be removed.
>> JBS: https://bugs.openjdk.java.net/browse/JDK-8213058
>> webrev: http://cr.openjdk.java.net/~iignatyev//8213058/webrev.00/index.html
>> testing: tier1, build all regular platforms
>> [1] https://bugs.openjdk.java.net/browse/JDK-8177708
>> [2] http://cr.openjdk.java.net/~iignatyev//8177708/webrev.00/index.html
>> [3] 
>> http://mail.openjdk.java.net/pipermail/hotspot-runtime-dev/2018-October/030633.html
>> Thanks,
>> -- Igor



Re: RFR(S): remove ExecuteInternalVMTests and VerboseInternalVMTests flags

2018-11-01 Thread David Holmes

Hi Igor,

There's no point having empty test_log macros that do nothing. The macro 
and all uses should just be deleted ... unless you plan on adding some 
other form of logging for this?


Thanks,
David

On 2/11/2018 7:15 AM, Igor Ignatyev wrote:

http://cr.openjdk.java.net/~iignatyev//8213058/webrev.00/index.html

174 lines changed: 0 ins; 170 del; 4 mod;


Hi all,

could you please review this small clean up which removes 
ExecuteInternalVMTests and VerboseInternalVMTests flags and related make 
targets and tests?

8177708[1-3] is to convert the last of internal vm tests, so the whole 
InternalVMTests can be removed.

JBS: https://bugs.openjdk.java.net/browse/JDK-8213058
webrev: http://cr.openjdk.java.net/~iignatyev//8213058/webrev.00/index.html

testing: tier1, build all regular platforms
[1] https://bugs.openjdk.java.net/browse/JDK-8177708
[2] http://cr.openjdk.java.net/~iignatyev//8177708/webrev.00/index.html
[3] 
http://mail.openjdk.java.net/pipermail/hotspot-runtime-dev/2018-October/030633.html

Thanks,
-- Igor



Re: RFR(S): remove ExecuteInternalVMTests and VerboseInternalVMTests flags

2018-11-01 Thread Erik Joelsson

Build changes look good.

/Erik


On 2018-11-01 14:15, Igor Ignatyev wrote:

http://cr.openjdk.java.net/~iignatyev//8213058/webrev.00/index.html

174 lines changed: 0 ins; 170 del; 4 mod;

Hi all,

could you please review this small clean up which removes 
ExecuteInternalVMTests and VerboseInternalVMTests flags and related make 
targets and tests?

8177708[1-3] is to convert the last of internal vm tests, so the whole 
InternalVMTests can be removed.

JBS: https://bugs.openjdk.java.net/browse/JDK-8213058
webrev: http://cr.openjdk.java.net/~iignatyev//8213058/webrev.00/index.html

testing: tier1, build all regular platforms
[1] https://bugs.openjdk.java.net/browse/JDK-8177708
[2] http://cr.openjdk.java.net/~iignatyev//8177708/webrev.00/index.html
[3] 
http://mail.openjdk.java.net/pipermail/hotspot-runtime-dev/2018-October/030633.html

Thanks,
-- Igor




Re: RFR(M) 8152988: [AOT] Update test batch definitions to include aot-ed java.base module mode into hs-comp testing

2018-11-01 Thread Magnus Ihse Bursie
Do you have an updated webrev?

/Magnus

> On 1 Nov 2018, at 22:28, Ekaterina Pavlova wrote:
> 
> Magnus,
> 
> thanks for the review.
> I fixed everything you pointed to.
> 
> thanks,
> -katya
> 
>> On 11/1/18 6:46 AM, Magnus Ihse Bursie wrote:
>> Hi Katya,
>> Sorry for the late response. :-(
>>> On 2018-10-29 21:24, Ekaterina Pavlova wrote:
>>> Vladimir,
>>> I added "-XX:+UseAOTStrictLoading" to "java -version", see 
>>> make/RunTests.gmk.
>>> Also added "--compile-with-assertion" to jaotc flags in case "-ea" flag was 
>>> present in vm options
>>> (otherwise AOTed library will warn that it does not have java assertions in 
>>> compiled code).
>> This code:
>> + ifneq ($$(findstring -ea, $$($1_VM_OPTIONS)), )
>> Using findstring is somewhat brittle here, since it will find substrings as 
>> well. Better to use filter which will match an exact word.
>>> 
>>> I will add -XX:+UseAOTStrictLoading and "-Xlog:aot+class+fingerprint=trace 
>>> -Xlog:aot+class+load=trace"
>>> directly to AOT testing tasks in mach5 jobs.
>>> 
>>> Magnus,
>>> I changed
>>>  "$$(addprefix -vmoption:, $$($1_AOT_OPTIONS))"
>>> to
>>> +  ifneq ($$($1_AOT_OPTIONS), )
>>> +$1_JTREG_BASIC_OPTIONS += -vmoptions:"$$($1_AOT_OPTIONS)"
>>> +  endif
>>> 
>>> Please review updated webrev:
>>> http://cr.openjdk.java.net/~epavlova//8152988/webrev.01/
>> Some fallout from my and Erik's discussion still needs implementing in 
>> RunTests.gmk:
>> $$(addprefix --compile-commands , $$($1_AOT_CCLIST)) \
>> Please rewrite as:
>> $$(addprefix --compile-commands$(SPACE), $$($1_AOT_CCLIST)) \
>> I'm still not happy with the always printed "java -version". :( Vladimir 
>> agreed that it would be reasonable to redirect this into a file. This can be 
>> done by e.g.
>> + $$(call ExecuteWithLog, $$@.check, \
>> + $$(FIXPATH) $$(JDK_IMAGE_DIR)/bin/java \
>> + $$($1_VM_OPTIONS) -XX:+PrintAOT -XX:+UseAOTStrictLoading 
>> -XX:AOTLibrary=$$@ -version \
>> + > $$@.verify-aot
>> + )
>> This will redirect the output to $($1_AOT_LIB).verify-aot in 
>> test-support/aot.
>> I still would like to see the if statement around SetupAot moved out, like 
>> this:
>> + ifneq ($$(GTEST_AOT_MODULES), )
>> +   $$(eval $$(call SetupAot, $1, \
>> +       MODULES := $$(GTEST_AOT_MODULES), \
>> +       VM_OPTIONS := $$(GTEST_VM_OPTIONS) $$(GTEST_JAVA_OPTIONS), \
>> +   ))
>> + endif
>> and similarly for JTREG. This is coupled with removing the "ifneq 
>> ($$($1_MODULES), )" in SetupAotBody (and do not forget to un-indent two 
>> spaces).
>> /Magnus
>>> 
>>> thanks,
>>> -katya
>>> 
>>> 
 On 10/23/18 4:40 PM, Vladimir Kozlov wrote:
> On 10/22/18 9:36 AM, Ekaterina Pavlova wrote:
> Vladimir,
> 
> thanks a lot for the review.
> "-XX:+PrintAOT" was only passed to "java -version" which is done before 
> the testing to verify that AOTed libraries work.
 
 If it is only for -version then using  -XX:+PrintAOT is okay.
 
> It also perhaps could be useful debug info in case some tests starts fail.
> But I can change -XX:+PrintAOT to -XX:+UseAOTStrictLoading if you think 
> it is more than enough.
 
 -XX:+UseAOTStrictLoading is more important for tests which have different 
 flags specified and may invalidate AOTLibrary configuration. We would want 
 to catch tests which will not use AOT libraries. At least when we test 
 these your changes.
 
> 
> Do you suggest to add "-Xlog:aot+class+fingerprint=trace 
> -Xlog:aot+class+load=trace" to "java -version" only
> or to java flags used during tests execution as well?
 
 These options are next step after -XX:+UseAOTStrictLoading. We were asked 
 before how we can guarantee that AOTed methods are used by tests. One way 
 is PrintAOT which should show published AOT methods. An other way is Xlogs 
 flags which have less output but provide at least some level of 
 information that AOTed classes are used and not skipped.
 
 Thanks,
 Vladimir
 
> 
> 
> -katya
> 
>> On 10/19/18 10:40 AM, Vladimir Kozlov wrote:
>> I share concern about output of -XX:+PrintAOT - it prints all AOT 
>> classes and methods. For AOTed java.base the output will be almost the 
>> same for all tests (depends which are java.base classes are used).
>> 
>> I would suggest instead to use -XX:+UseAOTStrictLoading which exit(1) VM 
>> if it can't load AOT library for some reason (mismatched configuration - 
>> different GCs).
>> 
>> Also -Xlog:aot+class+fingerprint=trace  to trace cases of mismatching 
>> class fingerprint.
>> Also -Xlog:aot+class+load=trace -to trace AOT class loading or skipping 
>> them if they are not matching.
>> 
>> And I agree to redirect this into file.
>> 
>> Thanks,
>> Vladimir
>> 
>>> On 10/19/18 9:04 AM, Erik Joelsson wrote:
>>> Hello,
>>> 
>>> I will answer the parts I'm responsible for.
>>> 
 On 2018-10-19 00:21, Magnus Ihse Bursie 

Re: RFR(M) 8152988: [AOT] Update test batch definitions to include aot-ed java.base module mode into hs-comp testing

2018-11-01 Thread Ekaterina Pavlova

Magnus,

thanks for the review.
I fixed everything you pointed to.

thanks,
-katya

On 11/1/18 6:46 AM, Magnus Ihse Bursie wrote:

Hi Katya,

Sorry for the late response. :-(

On 2018-10-29 21:24, Ekaterina Pavlova wrote:

Vladimir,
I added "-XX:+UseAOTStrictLoading" to "java -version", see make/RunTests.gmk.
Also added "--compile-with-assertion" to jaotc flags in case "-ea" flag was 
present in vm options
(otherwise AOTed library will warn that it does not have java assertions in 
compiled code).

This code:

+ ifneq ($$(findstring -ea, $$($1_VM_OPTIONS)), )

Using findstring is somewhat brittle here, since it will find substrings as 
well. Better to use filter which will match an exact word.



I will add -XX:+UseAOTStrictLoading and "-Xlog:aot+class+fingerprint=trace 
-Xlog:aot+class+load=trace"
directly to AOT testing tasks in mach5 jobs.

Magnus,
I changed
 "$$(addprefix -vmoption:, $$($1_AOT_OPTIONS))"
to
+  ifneq ($$($1_AOT_OPTIONS), )
+    $1_JTREG_BASIC_OPTIONS += -vmoptions:"$$($1_AOT_OPTIONS)"
+  endif

Please review updated webrev:
http://cr.openjdk.java.net/~epavlova//8152988/webrev.01/


Some fallout from my and Erik's discussion still needs implementing in 
RunTests.gmk:

$$(addprefix --compile-commands , $$($1_AOT_CCLIST)) \

Please rewrite as:

$$(addprefix --compile-commands$(SPACE), $$($1_AOT_CCLIST)) \

I'm still not happy with the always printed "java -version". :( Vladimir agreed 
that it would be reasonable to redirect this into a file. This can be done by e.g.

+ $$(call ExecuteWithLog, $$@.check, \
+ $$(FIXPATH) $$(JDK_IMAGE_DIR)/bin/java \
+ $$($1_VM_OPTIONS) -XX:+PrintAOT -XX:+UseAOTStrictLoading -XX:AOTLibrary=$$@ -version \
+ > $$@.verify-aot
+ )

This will redirect the output to $($1_AOT_LIB).verify-aot in test-support/aot.

I still would like to see the if statement around SetupAot moved out, like this:

+ ifneq ($$(GTEST_AOT_MODULES), )
+   $$(eval $$(call SetupAot, $1, \
+       MODULES := $$(GTEST_AOT_MODULES), \
+       VM_OPTIONS := $$(GTEST_VM_OPTIONS) $$(GTEST_JAVA_OPTIONS), \
+   ))
+ endif


and similarly for JTREG. This is coupled with removing the "ifneq ($$($1_MODULES), 
)" in SetupAotBody (and do not forget to un-indent two spaces).

/Magnus






thanks,
-katya


On 10/23/18 4:40 PM, Vladimir Kozlov wrote:

On 10/22/18 9:36 AM, Ekaterina Pavlova wrote:

Vladimir,

thanks a lot for the review.
"-XX:+PrintAOT" was only passed to "java -version" which is done before the 
testing to verify that AOTed libraries work.


If it is only for -version then using  -XX:+PrintAOT is okay.


It could also perhaps be useful debug info in case some tests start to fail.
But I can change -XX:+PrintAOT to -XX:+UseAOTStrictLoading if you think that is more than enough.


-XX:+UseAOTStrictLoading is more important for tests which have different flags 
specified and may invalidate AOTLibrary configuration. We would want to catch 
tests which will not use AOT libraries, at least when we test your changes.



Do you suggest to add "-Xlog:aot+class+fingerprint=trace -Xlog:aot+class+load=trace" to 
"java -version" only
or to java flags used during tests execution as well?


These options are the next step after -XX:+UseAOTStrictLoading. We were asked 
before how we can guarantee that AOTed methods are used by tests. One way is 
PrintAOT, which should show published AOT methods. Another way is the -Xlog flags, 
which have less output but provide at least some level of information that 
AOTed classes are used and not skipped.

Thanks,
Vladimir




-katya

On 10/19/18 10:40 AM, Vladimir Kozlov wrote:

I share the concern about the output of -XX:+PrintAOT - it prints all AOT classes and 
methods. For AOTed java.base the output will be almost the same for all tests 
(depending on which java.base classes are used).

I would suggest instead to use -XX:+UseAOTStrictLoading, which makes the VM exit(1) 
if it can't load the AOT library for some reason (mismatched configuration - different 
GCs).

Also -Xlog:aot+class+fingerprint=trace to trace cases of mismatching class 
fingerprints.
Also -Xlog:aot+class+load=trace to trace AOT class loading, or the skipping of classes 
that do not match.

And I agree to redirect this into a file.

Thanks,
Vladimir

On 10/19/18 9:04 AM, Erik Joelsson wrote:

Hello,

I will answer the parts I'm responsible for.

On 2018-10-19 00:21, Magnus Ihse Bursie wrote:


In RunTests.gmk, the following line is repeated:
# Note, this could not be done during JDK build time.

In the options:
$1_JAOTC_OPTS := \
-J-Xmx4g --info \
-Xmx4g is huge, much larger than what we typically use. Is this intentional? 
This will risk crashing on machines with too little free memory. Will AOT fail 
to work with less memory?


There's an extra space before the comma here:
$$(addprefix --compile-commands , $$($1_AOT_CCLIST)) \
Question (to Erik, I presume): the idea of addprefix here is that AOT_CCLIST can be empty, and 
this was a convenient way to just add "--compile-commands " if 
it exists? I was a bit confounded at first since normally 

RFR(S): remove ExecuteInternalVMTests and VerboseInternalVMTests flags

2018-11-01 Thread Igor Ignatyev
http://cr.openjdk.java.net/~iignatyev//8213058/webrev.00/index.html
> 174 lines changed: 0 ins; 170 del; 4 mod; 

Hi all,

could you please review this small clean up which removes 
ExecuteInternalVMTests and VerboseInternalVMTests flags and related make 
targets and tests?

8177708[1-3] is to convert the last of internal vm tests, so the whole 
InternalVMTests can be removed.

JBS: https://bugs.openjdk.java.net/browse/JDK-8213058
webrev: http://cr.openjdk.java.net/~iignatyev//8213058/webrev.00/index.html

testing: tier1, build all regular platforms
[1] https://bugs.openjdk.java.net/browse/JDK-8177708
[2] http://cr.openjdk.java.net/~iignatyev//8177708/webrev.00/index.html
[3] 
http://mail.openjdk.java.net/pipermail/hotspot-runtime-dev/2018-October/030633.html

Thanks,
-- Igor

Re: Stop using precompiled headers for Linux?

2018-11-01 Thread Magnus Ihse Bursie


> On 1 Nov 2018, at 17:49, Erik Joelsson wrote:
> 
>> On 2018-11-01 08:17, Magnus Ihse Bursie wrote:
>>> On 2018-11-01 15:53, Ioi Lam wrote:
>>> Just a stupid question. Does GCC have actual support for PCH? I know 
>>> windows can load pre-compiled information from a special binary file. Does 
>>> GCC support that kind of functionality?
>> Yes.
>> 
>> https://gcc.gnu.org/onlinedocs/gcc/Precompiled-Headers.html
> But the Visual Studio compiler seems to be able to gain much more build 
> performance from it. I don't have fresh numbers but I definitely remember a 
> non PCH build on Windows taking more than double the time, if not triple or 
> quadruple.

Could that be due to a lower starting point? I mean, if the windows compilation 
takes more time in the base case, it's easier to improve times with PCH. 

/Magnus

> 
> /Erik
>> /Magnus
>> 
>>> 
>>> Thanks
>>> Ioi
>>> 
On Nov 1, 2018, at 5:09 AM, Magnus Ihse Bursie wrote:
 
 
 
> On 2018-11-01 12:51, Thomas Stüfe wrote:
> On Thu, Nov 1, 2018 at 12:05 PM Magnus Ihse Bursie wrote:
>> On 2018-11-01 11:54, Aleksey Shipilev wrote:
 On 11/01/2018 11:43 AM, Magnus Ihse Bursie wrote:
 But then again, it might just signal that the list of headers included 
 in the PCH is no longer
 optimal. If it used to be the case that ~100 header files were so 
 interlinked, that changing any of
 them caused recompilation of all files that included it and all the 
 other 100 header files on the
 PCH list, then there was a net gain for using PCH and no "punishment".
 
 But nowadays this list might be far too large. Perhaps there's just 
 only a core set of say 20 header
 files that are universally (or almost universally) included, and 
 that's all that should be in the
 PCH list then. My guess is that, with a proper selection of header 
 files, PCH will still be a benefit.
>>> I agree. This smells like inefficient PCH list. We can improve that, 
>>> but I think that would be a
>>> lower priority, given the abundance of CPU power we use to compile 
>>> Hotspot. In my mind, the decisive
>>> factor for disabling PCH is to keep proper includes at all times, 
>>> without masking it with PCH. Half
>>> of the trivial bugs I submit against hotspot are #include differences 
>>> that show up in CI that builds
>>> without PCH.
>>> 
>>> So this is my ideal world:
>>>a) Efficient PCH list enabled by default for development pleasure;
>>>b) CIs build without PCH all the time (jdk-submit tier1 included!);
>>> 
>>> Since we don't yet have (a), and (b) seems to be tedious, regardless 
>>> how many times both Red Hat and
>>> SAP people ask for it, disabling PCH by default feels like a good 
>>> fallback.
>> Should just CI builds default to non-PCH, or all builds (that is, should
>> "configure" default to non-PCH on linux)? Maybe the former is better --
>> one thing that the test numbers here has not shown is if incremental
>> recompiles are improved by PCH. My gut feeling is that they really
>> should -- once you've created your PCH, subsequent recompiles will be
>> faster.
> That would only be true as long as you just change cpp files, no? As
> soon as you touch a header which is included in precompiled.hpp you
> are worse off than without pch.
> 
>> So the developer default should perhaps be to keep PCH, and we
>> should only configure the CI builds to do without PCH.
> CI without pch would be better than nothing. But seeing how clunky and
> slow jdk-submit is (and how often there are problems), I rather fail
> early in my own build than waiting for jdk-submit to tell me something
> went wrong (well, that is why I usually build nonpch, like Ioi does).
> 
> Just my 5 cent.
 I hear you, loud and clear. :) I've created 
 https://bugs.openjdk.java.net/browse/JDK-8213241 to disable PCH by 
 default, for all builds, on gcc. (I'm interpreting "linux" in this case as 
 "gcc", since this is compiler-dependent, and not OS dependent).
 
 /Magnus
 
> ..Thomas
>> /Magnus
>> 
>> 
>>> -Aleksey
> 



Re: Stop using precompiled headers for Linux?

2018-11-01 Thread Erik Joelsson

Hello,

My point here, which wasn't very clear, is that Mac and Linux seem to 
lose just as much real compile time. The big difference in these tests 
was rather the number of cpus in the machine (32 threads in the linux 
box vs 8 on the mac). The total amount of work done was increased when 
PCH was disabled; that's the user time. Here is my theory on why the 
real (wall clock) time was not consistent with the user time between these 
experiments:


With pch the time line (simplified) looks like this:

1. Single thread creating PCH
2. All cores compiling C++ files

When disabling pch it's just:

1. All cores compiling C++ files

To gain speed with PCH, the time spent in 1 must be less than the time 
saved in 2. The potential time saved in 2 goes down as the number of 
cpus goes up. I'm pretty sure that if I repeated the experiment on Linux 
on a smaller box (typically one we use in CI), the results would look 
similar to Macosx, and similarly, if I had access to a much bigger mac, 
it would behave like the big Linux box. This is why I'm saying this 
should be done for both or none of these platforms.


In addition to this, the experiment only built hotspot. If we would 
instead build the whole JDK, then the time wasted in 1 in the PCH case 
would be negated to a large extent by other build targets running 
concurrently, so for a full build, PCH is still providing value.


The question here is that if the value of PCH isn't very big, perhaps 
it's not worth it if it's also creating as much grief as described here. 
There is no doubt that there is value however. And given the examination 
done by Magnus, it seems this value could be increased.


The main reason why we haven't disabled PCH in CI before is this: we really, 
really want CI builds to be fast. We don't have a ton of over capacity 
to just throw at it. PCH made builds faster, so we used them. My other 
reason is consistency between builds. Supporting multiple different 
modes of building creates the potential for inconsistencies. For that 
reason I would definitely not support having PCH on by default, but 
turned off in our CI/dev-submit. We pick one or the other as the 
official build configuration, and we stick with the official build 
configuration for all builds of any official capacity (which includes CI).


In the current CI setup, we have a bunch of tiers that execute one after 
the other. The jdk-submit currently only runs tier1. In tier2 I've put 
slowdebug builds with PCH disabled, just to help verify a common 
developer configuration. These builds are not meant to be used for 
testing or anything like that, they are just run for verification, which 
is why this is ok. We could argue that it would make sense to move the 
linux-x64-slowdebug without pch build to tier1 so that it's included in 
dev-submit.


/Erik

On 2018-11-01 03:38, Magnus Ihse Bursie wrote:



On 2018-10-31 00:54, Erik Joelsson wrote:
Below are the corresponding numbers from a Mac, (Mac Pro (Late 2013), 
3.7 GHz, Quad-Core Intel Xeon E5, 16 GB). To be clear, the -npch is 
without precompiled headers. Here we see a slight degradation when 
disabling on both user time and wall clock time. My guess is that the 
user time increase is about the same, but because of a lower cpu 
count, the extra load is not as easily covered.


These tests were run with just building hotspot. This means that the 
precompiled header is generated alone on one core while nothing else 
is happening, which would explain this degradation in build speed. If 
we were instead building the whole product, we would see a better 
correlation between user and real time.


Given the very small benefit here, it could make sense to disable 
precompiled headers by default for Linux and Mac, just as we did with 
ccache.


I do know that the benefit is huge on Windows though, so we cannot 
remove the feature completely. Any other comments?


Well, if you show that it is a loss in time on macosx to disable 
precompiled headers, and no-one (as far as I've seen) has complained 
about PCH on mac, then why not keep them on as default there? That the 
gain is small is no argument to lose it. (I remember a time when you 
were hunting seconds in the build time ;-))


On linux, the story seems different, though. People experience PCH as 
a problem, and there is a net loss of time, at least on selected 
testing machines. It makes sense to turn it off as default, then.


/Magnus



/Erik

macosx-x64
real     4m13.658s
user     27m17.595s
sys     2m11.306s

macosx-x64-npch
real     4m27.823s
user     30m0.434s
sys     2m18.669s

macosx-x64-debug
real     5m21.032s
user     35m57.347s
sys     2m20.588s

macosx-x64-debug-npch
real     5m33.728s
user     38m10.311s
sys     2m27.587s

macosx-x64-slowdebug
real     3m54.439s
user     25m32.197s
sys     2m8.750s

macosx-x64-slowdebug-npch
real     4m11.987s
user     27m59.857s
sys     2m18.093s


On 2018-10-30 14:00, Erik Joelsson wrote:

Hello,

On 2018-10-30 

Re: Stop using precompiled headers for Linux?

2018-11-01 Thread Erik Joelsson

On 2018-11-01 08:17, Magnus Ihse Bursie wrote:

On 2018-11-01 15:53, Ioi Lam wrote:
Just a stupid question. Does GCC have actual support for PCH? I know 
windows can load pre-compiled information from a special binary file. 
Does GCC support that kind of functionality?

Yes.

https://gcc.gnu.org/onlinedocs/gcc/Precompiled-Headers.html

But the Visual Studio compiler seems to be able to gain much more build 
performance from it. I don't have fresh numbers but I definitely 
remember a non PCH build on Windows taking more than double the time, if 
not triple or quadruple.


/Erik

/Magnus



Thanks
Ioi

On Nov 1, 2018, at 5:09 AM, Magnus Ihse Bursie wrote:





On 2018-11-01 12:51, Thomas Stüfe wrote:
On Thu, Nov 1, 2018 at 12:05 PM Magnus Ihse Bursie wrote:

On 2018-11-01 11:54, Aleksey Shipilev wrote:

On 11/01/2018 11:43 AM, Magnus Ihse Bursie wrote:
But then again, it might just signal that the list of headers 
included in the PCH is no longer
optimal. If it used to be the case that ~100 header files were 
so interlinked, that changing any of
them caused recompilation of all files that included it and all 
the other 100 header files on the
PCH list, then there was a net gain for using PCH and no 
"punishment".


But nowadays this list might be far too large. Perhaps there's 
just only a core set of say 20 header
files that are universally (or almost universally) included, and 
that's all that should be in the
PCH list then. My guess is that, with a proper selection of 
header files, PCH will still be a benefit.
I agree. This smells like inefficient PCH list. We can improve 
that, but I think that would be a
lower priority, given the abundance of CPU power we use to 
compile Hotspot. In my mind, the decisive
factor for disabling PCH is to keep proper includes at all times, 
without masking it with PCH. Half
of the trivial bugs I submit against hotspot are #include 
differences that show up in CI that builds

without PCH.

So this is my ideal world:
   a) Efficient PCH list enabled by default for development 
pleasure;
   b) CIs build without PCH all the time (jdk-submit tier1 
included!);


Since we don't yet have (a), and (b) seems to be tedious, 
regardless how many times both Red Hat and
SAP people ask for it, disabling PCH by default feels like a good 
fallback.
Should just CI builds default to non-PCH, or all builds (that is, 
should
"configure" default to non-PCH on linux)? Maybe the former is 
better --

one thing that the test numbers here has not shown is if incremental
recompiles are improved by PCH. My gut feeling is that they really
should -- once you've created your PCH, subsequent recompiles will be
faster.

That would only be true as long as you just change cpp files, no? As
soon as you touch a header which is included in precompiled.hpp you
are worse off than without pch.


So the developer default should perhaps be to keep PCH, and we
should only configure the CI builds to do without PCH.

CI without pch would be better than nothing. But seeing how clunky and
slow jdk-submit is (and how often there are problems), I rather fail
early in my own build than waiting for jdk-submit to tell me something
went wrong (well, that is why I usually build nonpch, like Ioi does).

Just my 5 cent.
I hear you, loud and clear. :) I've created 
https://bugs.openjdk.java.net/browse/JDK-8213241 to disable PCH by 
default, for all builds, on gcc. (I'm interpreting "linux" in this 
case as "gcc", since this is compiler-dependent, and not OS dependent).


/Magnus


..Thomas

/Magnus



-Aleksey







Re: RFR: JDK-8213237 Remove test-compile-commands from jib-profiles.js

2018-11-01 Thread Erik Joelsson

Looks good.

/Erik


On 2018-11-01 03:06, Magnus Ihse Bursie wrote:
In JDK-8210958, a broken version of jib-profiles.js was unfortunately 
checked in.


The change at 
http://cr.openjdk.java.net/~ihse/JDK-8210958-rename-run-test-to-test/webrev.02/make/conf/jib-profiles.js.udiff.html, 
as reviewed, was unfortunately reverted in the process.


Without this fix, the *-testmake profiles fails when run.

Bug: https://bugs.openjdk.java.net/browse/JDK-8213237
Patch inline:
diff --git a/make/conf/jib-profiles.js b/make/conf/jib-profiles.js
--- a/make/conf/jib-profiles.js
+++ b/make/conf/jib-profiles.js
@@ -485,7 +485,7 @@
 .forEach(function (name) {
 var maketestName = name + "-testmake";
 profiles[maketestName] = concatObjects(profiles[name], testmakeBase);
-    profiles[maketestName].default_make_targets = [ "test-make", "test-compile-commands" ];
+    profiles[maketestName].default_make_targets = [ "test-make" ];

 });

 // Profiles for building the zero jvm variant. These are used for 
verification.


/Magnus
(Sorry for the mess :-()




Re: Stop using precompiled headers for Linux?

2018-11-01 Thread Magnus Ihse Bursie

On 2018-11-01 15:53, Ioi Lam wrote:

Just a stupid question. Does GCC have actual support for PCH? I know windows 
can load pre-compiled information from a special binary file. Does GCC support 
that kind of functionality?

Yes.

https://gcc.gnu.org/onlinedocs/gcc/Precompiled-Headers.html

/Magnus



Thanks
Ioi


On Nov 1, 2018, at 5:09 AM, Magnus Ihse Bursie wrote:




On 2018-11-01 12:51, Thomas Stüfe wrote:
On Thu, Nov 1, 2018 at 12:05 PM Magnus Ihse Bursie wrote:

On 2018-11-01 11:54, Aleksey Shipilev wrote:

On 11/01/2018 11:43 AM, Magnus Ihse Bursie wrote:
But then again, it might just signal that the list of headers included in the 
PCH is no longer
optimal. If it used to be the case that ~100 header files were so interlinked, 
that changing any of
them caused recompilation of all files that included it and all the other 100 
header files on the
PCH list, then there was a net gain for using PCH and no "punishment".

But nowadays this list might be far too large. Perhaps there's just only a core 
set of say 20 header
files that are universally (or almost universally) included, and that's all 
that should be in the
PCH list then. My guess is that, with a proper selection of header files, PCH 
will still be a benefit.

I agree. This smells like inefficient PCH list. We can improve that, but I 
think that would be a
lower priority, given the abundance of CPU power we use to compile Hotspot. In 
my mind, the decisive
factor for disabling PCH is to keep proper includes at all times, without 
masking it with PCH. Half
of the trivial bugs I submit against hotspot are #include differences that show 
up in CI that builds
without PCH.

So this is my ideal world:
   a) Efficient PCH list enabled by default for development pleasure;
   b) CIs build without PCH all the time (jdk-submit tier1 included!);

Since we don't yet have (a), and (b) seems to be tedious, regardless how many 
times both Red Hat and
SAP people ask for it, disabling PCH by default feels like a good fallback.

Should just CI builds default to non-PCH, or all builds (that is, should
"configure" default to non-PCH on linux)? Maybe the former is better --
one thing that the test numbers here has not shown is if incremental
recompiles are improved by PCH. My gut feeling is that they really
should -- once you've created your PCH, subsequent recompiles will be
faster.

That would only be true as long as you just change cpp files, no? As
soon as you touch a header which is included in precompiled.hpp you
are worse off than without pch.


So the developer default should perhaps be to keep PCH, and we
should only configure the CI builds to do without PCH.

CI without pch would be better than nothing. But seeing how clunky and
slow jdk-submit is (and how often there are problems), I rather fail
early in my own build than waiting for jdk-submit to tell me something
went wrong (well, that is why I usually build nonpch, like Ioi does).

Just my 5 cent.

I hear you, loud and clear. :) I've created https://bugs.openjdk.java.net/browse/JDK-8213241 to 
disable PCH by default, for all builds, on gcc. (I'm interpreting "linux" in this case as 
"gcc", since this is compiler-dependent, and not OS dependent).

/Magnus


..Thomas

/Magnus



-Aleksey





Re: Stop using precompiled headers for Linux?

2018-11-01 Thread Ioi Lam
Just a stupid question. Does GCC have actual support for PCH? I know windows 
can load pre-compiled information from a special binary file. Does GCC support 
that kind of functionality?

Thanks
Ioi

> On Nov 1, 2018, at 5:09 AM, Magnus Ihse Bursie 
>  wrote:
> 
> 
> 
>> On 2018-11-01 12:51, Thomas Stüfe wrote:
>> On Thu, Nov 1, 2018 at 12:05 PM Magnus Ihse Bursie
>>  wrote:
>>> On 2018-11-01 11:54, Aleksey Shipilev wrote:
> On 11/01/2018 11:43 AM, Magnus Ihse Bursie wrote:
> But then again, it might just signal that the list of headers included in 
> the PCH is no longer
> optimal. If it used to be the case that ~100 header files were so 
> interlinked, that changing any of
> them caused recompilation of all files that included it and all the other 
> 100 header files on the
> PCH list, then there was a net gain for using PCH and no "punishment".
> 
> But nowadays this list might be far too large. Perhaps there's just only 
> a core set of say 20 header
> files that are universally (or almost universally) included, and that's 
> all that should be in the
> PCH list then. My guess is that, with a proper selection of header files, 
> PCH will still be a benefit.
 I agree. This smells like inefficient PCH list. We can improve that, but I 
 think that would be a
 lower priority, given the abundance of CPU power we use to compile 
 Hotspot. In my mind, the decisive
 factor for disabling PCH is to keep proper includes at all times, without 
 masking it with PCH. Half
 of the trivial bugs I submit against hotspot are #include differences that 
 show up in CI that builds
 without PCH.
 
 So this is my ideal world:
   a) Efficient PCH list enabled by default for development pleasure;
   b) CIs build without PCH all the time (jdk-submit tier1 included!);
 
 Since we don't yet have (a), and (b) seems to be tedious, regardless how 
 many times both Red Hat and
 SAP people ask for it, disabling PCH by default feels like a good fallback.
>>> Should just CI builds default to non-PCH, or all builds (that is, should
>>> "configure" default to non-PCH on linux)? Maybe the former is better --
>>> one thing that the test numbers here has not shown is if incremental
>>> recompiles are improved by PCH. My gut feeling is that they really
>>> should -- once you've created your PCH, subsequent recompiles will be
>>> faster.
>> That would only be true as long as you just change cpp files, no? As
>> soon as you touch a header which is included in precompiled.hpp you
>> are worse off than without pch.
>> 
>>> So the developer default should perhaps be to keep PCH, and we
>>> should only configure the CI builds to do without PCH.
>> CI without pch would be better than nothing. But seeing how clunky and
>> slow jdk-submit is (and how often there are problems), I rather fail
>> early in my own build than waiting for jdk-submit to tell me something
>> went wrong (well, that is why I usually build nonpch, like Ioi does).
>> 
>> Just my 5 cent.
> I hear you, loud and clear. :) I've created 
> https://bugs.openjdk.java.net/browse/JDK-8213241 to disable PCH by default, 
> for all builds, on gcc. (I'm interpreting "linux" in this case as "gcc", 
> since this is compiler-dependent, and not OS dependent).
> 
> /Magnus
> 
>> 
>> ..Thomas
>>> /Magnus
>>> 
>>> 
 -Aleksey
 
> 



Re: RFR(M) 8152988: [AOT] Update test batch definitions to include aot-ed java.base module mode into hs-comp testing

2018-11-01 Thread Magnus Ihse Bursie

Hi Katya,

Sorry for the late response. :-(

On 2018-10-29 21:24, Ekaterina Pavlova wrote:

Vladimir,
I added "-XX:+UseAOTStrictLoading" to "java -version", see 
make/RunTests.gmk.
Also added "--compile-with-assertion" to jaotc flags in case "-ea" 
flag was present in vm options
(otherwise AOTed library will warn that it does not have java 
assertions in compiled code).

This code:

+ ifneq ($$(findstring -ea, $$($1_VM_OPTIONS)), )

Using findstring is somewhat brittle here, since it will find substrings 
as well. Better to use filter which will match an exact word.
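
A tiny standalone illustration of the difference (the variable value is made 
up; this is not the RunTests.gmk code):

# Sketch only: -earlyprobe is a fictional flag, chosen to show the substring trap.
VM_OPTIONS := -XX:+PrintCompilation -earlyprobe

# findstring fires even though no stand-alone -ea flag is present:
ifneq ($(findstring -ea, $(VM_OPTIONS)), )
  $(info findstring matched -ea inside -earlyprobe, a false positive)
endif

# filter only matches whole words, so this branch is not taken here:
ifneq ($(filter -ea, $(VM_OPTIONS)), )
  $(info filter found an exact -ea flag)
endif

all:
	@true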




I will add -XX:+UseAOTStrictLoading and 
"-Xlog:aot+class+fingerprint=trace -Xlog:aot+class+load=trace"

directly to AOT testing tasks in mach5 jobs.

Magnus,
I changed
 "$$(addprefix -vmoption:, $$($1_AOT_OPTIONS))"
to
+  ifneq ($$($1_AOT_OPTIONS), )
+    $1_JTREG_BASIC_OPTIONS += -vmoptions:"$$($1_AOT_OPTIONS)"
+  endif

Please review updated webrev:
 http://cr.openjdk.java.net/~epavlova//8152988/webrev.01/


Some fallout from my and Erik's discussion still needs implementing in 
RunTests.gmk:


$$(addprefix --compile-commands , $$($1_AOT_CCLIST)) \

Please rewrite as:

$$(addprefix --compile-commands$(SPACE), $$($1_AOT_CCLIST)) \
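
The reason this form works nicely is that addprefix over an empty list 
expands to nothing at all, so --compile-commands only shows up when there is 
a .cclist file, and $(SPACE) makes the separating space explicit instead of 
relying on invisible trailing whitespace. A minimal standalone sketch, with 
made-up values:

# Sketch only; the list contents are invented.
EMPTY :=
SPACE := $(EMPTY) $(EMPTY)

# Empty list: the whole option disappears.
AOT_CCLIST :=
# Non-empty list: expands to "--compile-commands java_base.cclist".
AOT_CCLIST2 := java_base.cclist

$(info [$(addprefix --compile-commands$(SPACE), $(AOT_CCLIST))])
$(info [$(addprefix --compile-commands$(SPACE), $(AOT_CCLIST2))])

all:
	@true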

I'm still not happy with the always printed "java -version". :( Vladimir 
agreed that it would be reasonable to redirect this into a file. This 
can be done by e.g.


+ $$(call ExecuteWithLog, $$@.check, \
+ $$(FIXPATH) $$(JDK_IMAGE_DIR)/bin/java \
+ $$($1_VM_OPTIONS) -XX:+PrintAOT -XX:+UseAOTStrictLoading 
-XX:AOTLibrary=$$@ -version \

+ > $$@.verify-aot
+ )

This will redirect the output to $($1_AOT_LIB).verify-aot in 
test-support/aot.


I still would like to see the if statement around SetupAot moved out, 
like this:


+ ifneq ($$(GTEST_AOT_MODULES), ) + $$(eval $$(call SetupAot, $1, \
+ MODULES := $$(GTEST_AOT_MODULES), \
+ VM_OPTIONS := $$(GTEST_VM_OPTIONS) $$(GTEST_JAVA_OPTIONS), \
+ )) + endif


and similarly for JTREG. This is coupled with removing the "ifneq 
($$($1_MODULES), )" in SetupAotBody (and do not forget to un-indent two 
spaces).
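
For the JTREG side, the analogous hoisting could look something like this 
(the JTREG_* variable names are guesses mirroring the GTEST ones above, not 
checked against RunTests.gmk):

ifneq ($$(JTREG_AOT_MODULES), )
  $$(eval $$(call SetupAot, $1, \
      MODULES := $$(JTREG_AOT_MODULES), \
      VM_OPTIONS := $$(JTREG_VM_OPTIONS) $$(JTREG_JAVA_OPTIONS), \
  ))
endif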


/Magnus






thanks,
-katya


On 10/23/18 4:40 PM, Vladimir Kozlov wrote:

On 10/22/18 9:36 AM, Ekaterina Pavlova wrote:

Vladimir,

thanks a lot for the review.
"-XX:+PrintAOT" was only passed to "java -version" which is done 
before the testing to verify that AOTed libraries work.


If it is only for -version then using  -XX:+PrintAOT is okay.

It also perhaps could be useful debug info in case some tests starts 
fail.
But I can change -XX:+PrintAOT to -XX:+UseAOTStrictLoading if you 
think it is more than enough.


-XX:+UseAOTStrictLoading is more important for tests which have 
different flags specified and may invalidate AOTLibrary 
configuration. We would want to catch tests which will not use AOT 
libraries. At least when we test these your changes.




Do you suggest to add "-Xlog:aot+class+fingerprint=trace 
-Xlog:aot+class+load=trace" to "java -version" only

or to java flags used during tests execution as well?


These options are next step after -XX:+UseAOTStrictLoading. We were 
asked before how we can guarantee that AOTed methods are used by 
tests. One way is PrintAOT which should show published AOT methods. 
An other way is Xlogs flags which have less output but provide at 
least some level of information that AOTed classes are used and not 
skipped.


Thanks,
Vladimir




-katya

On 10/19/18 10:40 AM, Vladimir Kozlov wrote:
I share concern about output of -XX:+PrintAOT - it prints all AOT 
classes and methods. For AOTed java.base the output will be almost 
the same for all tests (depends which are java.base classes are used).


I would suggest instead to use -XX:+UseAOTStrictLoading which 
exit(1) VM if it can't load AOT library for some reason (mismatched 
configuration - different GCs).


Also -Xlog:aot+class+fingerprint=trace  to trace cases of 
mismatching class fingerprint.
Also -Xlog:aot+class+load=trace -to trace AOT class loading or 
skipping them if they are not matching.


And I agree to redirect this into file.

Thanks,
Vladimir

On 10/19/18 9:04 AM, Erik Joelsson wrote:

Hello,

I will answer the parts I'm responsible for.

On 2018-10-19 00:21, Magnus Ihse Bursie wrote:


In RunTests.gmk, the following line is repeated:
# Note, this could not be done during JDK build time.

In the options:
$1_JAOTC_OPTS := \
-J-Xmx4g --info \
-Xmx4g is huge, much larger than what we typically use. Is this 
intentional? This will risk crashing at machines with too little 
free memory. Will AOT fail to work with less memory?



There's an extra space before the comma here:
$$(addprefix --compile-commands , $$($1_AOT_CCLIST)) \
Question (to Erik, I presume): the idea of addprefix here is that 
AOT_CCLIST can be empty, and this was a convenient way to just 
add "--compile-commands " if it exists? I was a 
bit confounded at first since normally we only use it for 
distributing a prefix over multiple items, and I could see no way 
for AOT_CCLIST to ever 

Re: RFR: JDK-8213227: Update jib src excludes to filter webrev and Jreg directories

2018-11-01 Thread Magnus Ihse Bursie

On 2018-11-01 01:22, Erik Joelsson wrote:
This patch adds some useful excludes for jib source bundle creation, 
which will help avoid sending in too much junk in Mach 5 jobs.


The strings are rewritten as arrays, which are now supported, as well 
as using ** globs to match arbitrary directory depths. I've also 
removed the no longer used .build dir from excludes.


http://cr.openjdk.java.net/~erikj/8213227/webrev.01/

Looks good to me.

/Magnus



Bug: https://bugs.openjdk.java.net/browse/JDK-8213227

/Erik





Re: Stop using precompiled headers for Linux?

2018-11-01 Thread Martin Buchholz
I vote for disabling precompiled headers by default - they simply make the
build less reliable.

It seemed like precompiled headers did not work when using different
optimization levels for different source files, which in turn was needed
for building with clang, so I've been disabling precompiled headers for
years in my own build script.  Here's a snippet:

# Disable optimization for selected source files.
#
# Needed to have different optimization levels for different files?
addConfigureFlag --disable-precompiled-headers
# We really need NONE; LOW is not low enough!
# Fixed in jdk10: JDK-8186787 clang-4.0 SIGSEGV in Unsafe_PutByte
((major >= 10)) \
  || makeFlags+=(BUILD_LIBJVM_unsafe.cpp_OPTIMIZATION=NONE)
if [[ "${DEBUG_LEVEL}" != "release" ]]; then
  # https://bugs.openjdk.java.net/browse/JDK-8186780
  makeFlags+=(BUILD_LIBJVM_os_linux_x86.cpp_OPTIMIZATION=NONE)
fi


On Thu, Nov 1, 2018 at 5:09 AM, Magnus Ihse Bursie <
magnus.ihse.bur...@oracle.com> wrote:

>
>
> On 2018-11-01 12:51, Thomas Stüfe wrote:
>
>> On Thu, Nov 1, 2018 at 12:05 PM Magnus Ihse Bursie
>>  wrote:
>>
>>> On 2018-11-01 11:54, Aleksey Shipilev wrote:
>>>
 On 11/01/2018 11:43 AM, Magnus Ihse Bursie wrote:

> But then again, it might just signal that the list of headers included
> in the PCH is no longer
> optimal. If it used to be the case that ~100 header files were so
> interlinked, that changing any of
> them caused recompilation of all files that included it and all the
> other 100 header files on the
> PCH list, then there was a net gain for using PCH and no "punishment".
>
> But nowadays this list might be far too large. Perhaps there's just
> only a core set of say 20 header
> files that are universally (or almost universally) included, and
> that's all that should be in the
> PCH list then. My guess is that, with a proper selection of header
> files, PCH will still be a benefit.
>
 I agree. This smells like inefficient PCH list. We can improve that,
 but I think that would be a
 lower priority, given the abundance of CPU power we use to compile
 Hotspot. In my mind, the decisive
 factor for disabling PCH is to keep proper includes at all times,
 without masking it with PCH. Half
 of the trivial bugs I submit against hotspot are #include differences
 that show up in CI that builds
 without PCH.

 So this is my ideal world:
a) Efficient PCH list enabled by default for development pleasure;
b) CIs build without PCH all the time (jdk-submit tier1 included!);

 Since we don't yet have (a), and (b) seems to be tedious, regardless
 how many times both Red Hat and
 SAP people ask for it, disabling PCH by default feels like a good
 fallback.

>>> Should just CI builds default to non-PCH, or all builds (that is, should
>>> "configure" default to non-PCH on linux)? Maybe the former is better --
>>> one thing that the test numbers here has not shown is if incremental
>>> recompiles are improved by PCH. My gut feeling is that they really
>>> should -- once you've created your PCH, subsequent recompiles will be
>>> faster.
>>>
>> That would only be true as long as you just change cpp files, no? As
>> soon as you touch a header which is included in precompiled.hpp you
>> are worse off than without pch.
>>
>> So the developer default should perhaps be to keep PCH, and we
>>> should only configure the CI builds to do without PCH.
>>>
>> CI without pch would be better than nothing. But seeing how clunky and
>> slow jdk-submit is (and how often there are problems), I rather fail
>> early in my own build than waiting for jdk-submit to tell me something
>> went wrong (well, that is why I usually build nonpch, like Ioi does).
>>
>> Just my 5 cent.
>>
> I hear you, loud and clear. :) I've created https://bugs.openjdk.java.net/
> browse/JDK-8213241 to disable PCH by default, for all builds, on gcc.
> (I'm interpreting "linux" in this case as "gcc", since this is
> compiler-dependent, and not OS dependent).
>
> /Magnus
>
>
>> ..Thomas
>>
>>> /Magnus
>>>
>>>
>>> -Aleksey


>


Re: Stop using precompiled headers for Linux?

2018-11-01 Thread Magnus Ihse Bursie




On 2018-11-01 12:51, Thomas Stüfe wrote:

On Thu, Nov 1, 2018 at 12:05 PM Magnus Ihse Bursie
 wrote:

On 2018-11-01 11:54, Aleksey Shipilev wrote:

On 11/01/2018 11:43 AM, Magnus Ihse Bursie wrote:

But then again, it might just signal that the list of headers included in the 
PCH is no longer
optimal. If it used to be the case that ~100 header files were so interlinked, 
that changing any of
them caused recompilation of all files that included it and all the other 100 
header files on the
PCH list, then there was a net gain for using PCH and no "punishment".

But nowadays this list might be far too large. Perhaps there's just only a core 
set of say 20 header
files that are universally (or almost universally) included, and that's all 
that should be in the
PCH list then. My guess is that, with a proper selection of header files, PCH 
will still be a benefit.

I agree. This smells like inefficient PCH list. We can improve that, but I 
think that would be a
lower priority, given the abundance of CPU power we use to compile Hotspot. In 
my mind, the decisive
factor for disabling PCH is to keep proper includes at all times, without 
masking it with PCH. Half
of the trivial bugs I submit against hotspot are #include differences that show 
up in CI that builds
without PCH.

So this is my ideal world:
   a) Efficient PCH list enabled by default for development pleasure;
   b) CIs build without PCH all the time (jdk-submit tier1 included!);

Since we don't yet have (a), and (b) seems to be tedious, regardless how many 
times both Red Hat and
SAP people ask for it, disabling PCH by default feels like a good fallback.

Should just CI builds default to non-PCH, or all builds (that is, should
"configure" default to non-PCH on linux)? Maybe the former is better --
one thing that the test numbers here has not shown is if incremental
recompiles are improved by PCH. My gut feeling is that they really
should -- once you've created your PCH, subsequent recompiles will be
faster.

That would only be true as long as you just change cpp files, no? As
soon as you touch a header which is included in precompiled.hpp you
are worse off than without pch.


So the developer default should perhaps be to keep PCH, and we
should only configure the CI builds to do without PCH.

CI without pch would be better than nothing. But seeing how clunky and
slow jdk-submit is (and how often there are problems), I rather fail
early in my own build than waiting for jdk-submit to tell me something
went wrong (well, that is why I usually build nonpch, like Ioi does).

Just my 5 cent.
I hear you, loud and clear. :) I've created 
https://bugs.openjdk.java.net/browse/JDK-8213241 to disable PCH by 
default, for all builds, on gcc. (I'm interpreting "linux" in this case 
as "gcc", since this is compiler-dependent, and not OS dependent).


/Magnus



..Thomas

/Magnus



-Aleksey





Re: RFR: JDK-8212780: JEP 343: Packaging Tool Implementation

2018-11-01 Thread Magnus Ihse Bursie




On 2018-10-24 19:18, Erik Joelsson wrote:

Hello,

Nice to see this finally happening!

Are we actually adding a new demo? I thought we were working towards 
getting rid of the demos completely.


CompileJavaModules.gmk:

The jdk.packager_CLEAN_FILES could be replaced with a simple 
"jdk.packager_CLEAN := .properties" unless you actually need to add 
all the platform specific files to builds for all platforms (which I 
doubt?). To clarify, the current patch will add all linux, windows and 
mac properties to builds of all OSes. Is that intended?


Modules.gmk:

I would rather have the filter be conditional on an inclusive list of 
platforms. Other OpenJDK contributors are building for other OSes like 
AIX and we don't want to have to keep track of all those. The list of 
OSes that jpackager supports is well defined, so better check for 
those explicitly.


src/jdk.jlink/share/classes/module-info.java:

I believe the qualified export needs to be put in a 
module-info.java.extra file since we filter out the target module on 
unsupported platforms.


Launcher-jdk.packager.gmk:

* Please use $$(call FindExecutableDirForModule, $$(MODULE)) for the 
DEST dir in the copy.
* Please adjust the indentation of the big windows/else blocks. We 
indent everything with 2 spaces in conditional blocks.
* A minor style nit on the copy-and-chmod macro, since it's meant to 
be used as a recipe, we usually indent those with tabs to indicate 
that they are recipe lines, in this case, one tab is enough even 
though the surrounding define should be indented with 2 spaces (don't 
combine tab and spaces here).
* The (new) SetupJdkExecutable macro handles VERSIONINFO_RESOURCE 
automatically for you unless you have specific needs. This looks like 
the defaults should be fine.
* Since this is just for windows, the TOOLCHAIN_LINK_CXX should not 
make any difference. (It's only needed for GCC to force linking with 
g++ instead of gcc) So please remove.

* You could consider using FindSrcDirsForComponent for the src dir.

Lib-jdk.packager.gmk:

* The native source files are not organized according to the standards 
introduced with JEP 201. If they were, most of these SetupJdkLibrary 
calls would automatically find the correct sources. I realize there is 
a special case for the windows papplauncherc executable as it's built 
from the same sources as papplauncher, so that will need a special 
case. Building of the executables should be moved to 
Launcher-jdk.packager.gmk however.
* Why are you changing the build output dir and copying debuginfo 
files around? We have a standard way of building and places where 
files are put. If that is not working for you, we need to fix it on a 
global level. Having each native library being built differently is 
not good for maintainability.
* VERSIONINFO_RESOURCE is handled automatically so does not need to be 
specified.
* Please indent the full contents of logic blocks with 2 spaces. Not 
having proper indents makes it much harder to read the code.
* Several lines are too long for future side by side comparisons, 
please break them up. You can use the # lines as a soft guidance.


On top of Erik's comments, I also have the following to add from a build 
perspective:


* In make/CompileJavaModules.gmk:
Do you really need to use GENERATE_JDKBYTECODE_NOWARNINGS? When you add 
new code to OpenJDK, it really *should* be able to build without 
warnings. This has previously only been used for legacy code that has 
not yet been brought up to OpenJDK standards. Introducing new code with 
warnings silenced feels like a step in the wrong direction.


* In make/launcher/Launcher-jdk.packager.gmk:
The setup of BUILD_JPACKAGEREXE is only done for windows, but you have 
"DISABLED_WARNINGS_gcc := unused-result implicit-fallthrough". This does 
not make sense.


The CFLAGS listed look redundant. At the very least I know that -nologo 
is already present in CXXFLAGS_JDKEXE; I believe the same goes for the 
rest (or most of the rest) of the flags. Please only add flags if you 
really need them, since all special configuration/combinations is a 
potential cause for trouble in the future. This same applies to LDFLAGS.


* In src/jdk.packager/unix/scripts/jpackager:

This file  resides in a non-standard directory. We have a small list of 
directories that are supposed to come after the $MODULE/share|$OS/ part 
of the path, and "scripts" is not one of them. While there is no rule 
"forbidding" new kinds of directories, I strongly recommend against this.


Looking more closely at the file, I wonder if you really need it? Its 
sole purpose seems to be to launch java -m 
jdk.packager/jdk.packager.main.Main. For that, we have a much better 
solution. Just change make/launcher/Launcher-jdk.packager.gmk to include 
the following contents:


$(eval $(call SetupBuildLauncher, jpackager, \
    MAIN_CLASS := jdk.packager.main.Main, \
))

It will create a "jpackager" binary. Which works for Windows too, so 
maybe you won't even 

Re: Stop using precompiled headers for Linux?

2018-11-01 Thread Thomas Stüfe
On Thu, Nov 1, 2018 at 12:05 PM Magnus Ihse Bursie
 wrote:
>
> On 2018-11-01 11:54, Aleksey Shipilev wrote:
> > On 11/01/2018 11:43 AM, Magnus Ihse Bursie wrote:
> >> But then again, it might just signal that the list of headers included in 
> >> the PCH is no longer
> >> optimal. If it used to be the case that ~100 header files were so 
> >> interlinked, that changing any of
> >> them caused recompilation of all files that included it and all the other 
> >> 100 header files on the
> >> PCH list, then there was a net gain for using PCH and no "punishment".
> >>
> >> But nowadays this list might be far too large. Perhaps there's just only a 
> >> core set of say 20 header
> >> files that are universally (or almost universally) included, and that's 
> >> all that should be in the
> >> PCH list then. My guess is that, with a proper selection of header files, 
> >> PCH will still be a benefit.
> > I agree. This smells like inefficient PCH list. We can improve that, but I 
> > think that would be a
> > lower priority, given the abundance of CPU power we use to compile Hotspot. 
> > In my mind, the decisive
> > factor for disabling PCH is to keep proper includes at all times, without 
> > masking it with PCH. Half
> > of the trivial bugs I submit against hotspot are #include differences that 
> > show up in CI that builds
> > without PCH.
> >
> > So this is my ideal world:
> >   a) Efficient PCH list enabled by default for development pleasure;
> >   b) CIs build without PCH all the time (jdk-submit tier1 included!);
> >
> > Since we don't yet have (a), and (b) seems to be tedious, regardless how 
> > many times both Red Hat and
> > SAP people ask for it, disabling PCH by default feels like a good fallback.
>
> Should just CI builds default to non-PCH, or all builds (that is, should
> "configure" default to non-PCH on linux)? Maybe the former is better --
> one thing that the test numbers here has not shown is if incremental
> recompiles are improved by PCH. My gut feeling is that they really
> should -- once you've created your PCH, subsequent recompiles will be
> faster.

That would only be true as long as you just change cpp files, no? As
soon as you touch a header which is included in precompiled.hpp you
are worse off than without pch.

> So the developer default should perhaps be to keep PCH, and we
> should only configure the CI builds to do without PCH.

CI without pch would be better than nothing. But seeing how clunky and
slow jdk-submit is (and how often there are problems), I'd rather fail
early in my own build than wait for jdk-submit to tell me something
went wrong (well, that is why I usually build nonpch, like Ioi does).

Just my 5 cent.

..Thomas
>
> /Magnus
>
>
> >
> > -Aleksey
> >
>


Re: Build niceness

2018-11-01 Thread Aleksey Shipilev
On 11/01/2018 12:41 PM, Magnus Ihse Bursie wrote:
> I was about to suggest that you could achieve this by overriding NICE in the 
> configure command line,
> by e.g. configure NICE="/usr/bin/nice -n0" (I presume -n0 is a no-op), but it 
> turned out that this
> did not work! :( I opened JDK-8213239 for this, but for the moment, that's 
> not a usable workaround...

Oh! That feels much less intrusive than adding another option to the build 
system. I would chicken
out and wait for JDK-8213239 to be fixed and then override NICE. There is no 
pressing need to have
this workaround.

Thanks,
-Aleksey



Re: Build niceness

2018-11-01 Thread Magnus Ihse Bursie

On 2018-10-18 18:08, Erik Joelsson wrote:

Hello Aleksey,

We very deliberately added the automatic niceness for a better general 
user experience. This was back when build-infra was new and we started 
pushing the parallelism of the build to where the UIs of our 
development systems became annoyingly slow and sluggish when building. 
So, I would be very against removing it completely, but I certainly 
think it should be possible to opt out. We should never force such a 
behavior on an unwilling user. My only defense is that this was done 
so early that we probably didn't think about it. If you would like to 
provide a patch adding a configure arg for disabling it, it would 
certainly be welcome.


An alternative, I suspect, is to lower the value that is added to the 
niceness, e.g. using "nice -n5" or even just "nice -n1". This sounds 
like it should help with your particular case, and I still think it 
would solve our original problem, that developer's workstations should 
not be unresponsive while building.


I was about to suggest that you could achieve this by overriding NICE in 
the configure command line, by e.g. configure NICE="/usr/bin/nice -n0" 
(I presume -n0 is a no-op), but it turned out that this did not work! :( 
I opened JDK-8213239 for this, but for the moment, that's not a usable 
workaround...
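
Just to illustrate the kind of knob being discussed, a bare-bones sketch 
(the rule and the default value are invented; this is not the actual build 
logic):

# Sketch only: NICE wraps the compile command; overriding it (e.g. NICE="nice -n1"
# or NICE="") tunes or disables the niceness without any other changes.
NICE ?= nice -n 5
CXX := g++

%.o: %.cpp
	$(NICE) $(CXX) $(CXXFLAGS) -c $< -o $@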


/Magnus



/Erik


On 2018-10-18 01:08, Aleksey Shipilev wrote:

Hi,

Is there any specific user story behind "nice"-ing the compilation 
jobs from within the build system?


It unfortunately clashes with priority budgeting. For example, my 
build server is used by me doing
adhoc builds, automatic builds and some background tasks. The 
automatic build user has priority 10
set in /etc/security/limits.conf. The background user has priority 20 
set in limits.conf. In theory,
it sounds good: it would give all the CPU ad-hoc builds want, then 
give CPU to automatic build jobs, then to background jobs.

But then the build nice-s the compilation jobs, which drops their 
priority down to the priority of background jobs:

   PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 25966 backgro+  39  19  808872 460772   2440 R  97.0  1.4   4181:54 R
 25968 backgro+  39  19  808872 460588   2340 R  97.0  1.4   4181:17 R
 25965 backgro+  39  19  808872 460864   2680 R  93.4  1.4   4180:29 R
 30998 buildbot  39  19 6688224 1.050g      2 S  64.8  3.4   4:58.52 java
 27518 buildbot  39  19 2915828 110884  21740 S  47.0  0.3   0:02.57 javac
 26802 buildbot  39  19  290900 264792  17392 R  38.2  0.8   0:05.48 cc1plus
 27486 buildbot  39  19  200296 171792  18440 R  30.6  0.5   0:01.96 cc1plus

...which wrecks this story. Maybe the better solution in the build 
system is to avoid nice-ing at all, and require users who need it to 
invoke "nice -n ... make ..."? Or maybe at least have a knob that 
disables automatic niceness?

Thanks,
-Aleksey







Re: Stop using precompiled headers for Linux?

2018-11-01 Thread Roman Kennke



> On 2018-11-01 11:54, Aleksey Shipilev wrote:
>> On 11/01/2018 11:43 AM, Magnus Ihse Bursie wrote:
>>> But then again, it might just signal that the list of headers
>>> included in the PCH is no longer
>>> optimal. If it used to be the case that ~100 header files were so
>>> interlinked, that changing any of
>>> them caused recompilation of all files that included it and all the
>>> other 100 header files on the
>>> PCH list, then there was a net gain for using PCH and no "punishment".
>>>
>>> But nowadays this list might be far too large. Perhaps there's just
>>> only a core set of say 20 header
>>> files that are universally (or almost universally) included, and
>>> that's all that should be in the
>>> PCH list then. My guess is that, with a proper selection of header
>>> files, PCH will still be a benefit.
>> I agree. This smells like inefficient PCH list. We can improve that,
>> but I think that would be a
>> lower priority, given the abundance of CPU power we use to compile
>> Hotspot. In my mind, the decisive
>> factor for disabling PCH is to keep proper includes at all times,
>> without masking it with PCH. Half
>> of the trivial bugs I submit against hotspot are #include differences
>> that show up in CI that builds
>> without PCH.
>>
>> So this is my ideal world:
>>   a) Efficient PCH list enabled by default for development pleasure;
>>   b) CIs build without PCH all the time (jdk-submit tier1 included!);
>>
>> Since we don't yet have (a), and (b) seems to be tedious, regardless
>> how many times both Red Hat and
>> SAP people ask for it, disabling PCH by default feels like a good
>> fallback.
> 
> Should just CI builds default to non-PCH, or all builds (that is, should
> "configure" default to non-PCH on linux)? Maybe the former is better --
> one thing that the test numbers here has not shown is if incremental
> recompiles are improved by PCH. My gut feeling is that they really
> should -- once you've created your PCH, subsequent recompiles will be
> faster. So the developer default should perhaps be to keep PCH, and we
> should only configure the CI builds to do without PCH.

I don't know. I usually disable PCH to avoid getting bad surprises once
my patch goes through CI. I think this should be consistent. I waste
more time building with and without PCH and fixing those bugs than I
save (if anything) a few seconds build time.

Roman



What to include in the PCH list -- statistics on hotspot header files

2018-11-01 Thread Magnus Ihse Bursie
Prompted by a discussion on build-dev about whether the current set of header 
files included in the PCH is optimal (which it most definitely is not!), I 
collected some statistics. Here's a list of the current set of header 
files in hotspot with the number of times each is included, directly or 
indirectly, in a C++ file (as determined by the gcc dependency 
tracking). It is sorted in descending order, so for example 
allocation.hpp is included 586 times, while globalDefinitions.hpp is 
included 579 times. For comparison, there are 922 compilation units (C++ 
files).
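
For the curious, a rough sketch of how such a count can be pulled out of the 
gcc .d files (the object directory path and the target name are made up, and 
the actual extraction may well have been done differently):

# Sketch only: counts how often each .hpp appears across the generated
# dependency files. HOTSPOT_OBJ_DIR is a placeholder, not the real path.
HOTSPOT_OBJ_DIR := build/linux-x86_64-server-release/hotspot/variant-server/libjvm/objs

include-counts:
	cat $(HOTSPOT_OBJ_DIR)/*.d | tr ' ' '\n' | grep '\.hpp$$' | sort | uniq -c | sort -rn | head -40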


All numbers are from my linux build, so they might differ slightly 
between platforms or build configurations, but the overall picture is 
likely to remain the same.


One way to read this information is to conclude that almost no single 
header file is ever included in more than 50% of the compilations, so 
PCHs are not worth it at all. Another way is to conclude that there are 
34 header files that are each included more than 150 times, which makes 
it reasonable to include them in the PCH. Or pick some other cutoff, 
like the 13 files included over 300 times; this looks like a Pareto 
distribution, so the fewer files you include, the exponentially more 
useful each of them will be.


/Magnus
hotspot/share/memory/allocation.hpp: 586
hotspot/share/utilities/globalDefinitions.hpp: 579
hotspot/share/runtime/os.hpp: 444
hotspot/share/utilities/align.hpp: 434
hotspot/share/utilities/macros.hpp: 407
hotspot/share/runtime/atomic.hpp: 390
hotspot/share/oops/oop.inline.hpp: 374
hotspot/share/services/memTracker.hpp: 361
hotspot/share/memory/resourceArea.hpp: 354
hotspot/share/memory/allocation.inline.hpp: 339
hotspot/share/logging/log.hpp: 311
hotspot/share/utilities/debug.hpp: 303
hotspot/share/oops/oop.hpp: 303
hotspot/share/utilities/growableArray.hpp: 285
hotspot/share/classfile/systemDictionary.hpp: 244
hotspot/share/oops/oopsHierarchy.hpp: 237
hotspot/share/utilities/ostream.hpp: 234
hotspot/share/runtime/handles.hpp: 227
hotspot/share/memory/universe.hpp: 226
hotspot/share/runtime/thread.hpp: 215
hotspot/share/oops/access.inline.hpp: 212
hotspot/share/gc/shared/collectedHeap.hpp: 212
hotspot/share/oops/instanceKlass.hpp: 204
hotspot/share/classfile/javaClasses.hpp: 197
hotspot/share/runtime/orderAccess.hpp: 196
hotspot/share/runtime/timer.hpp: 192
hotspot/share/runtime/handles.inline.hpp: 192
hotspot/share/oops/objArrayOop.hpp: 179
hotspot/share/runtime/globals.hpp: 175
hotspot/share/memory/iterator.hpp: 168
hotspot/share/utilities/copy.hpp: 163
hotspot/share/utilities/ticks.hpp: 156
hotspot/share/oops/method.hpp: 156
hotspot/share/oops/klass.hpp: 153
hotspot/share/runtime/mutex.hpp: 150
hotspot/share/memory/memRegion.hpp: 149
hotspot/share/classfile/classLoaderData.hpp: 149
hotspot/share/gc/shared/gcCause.hpp: 147
hotspot/share/utilities/bitMap.hpp: 140
hotspot/share/utilities/exceptions.hpp: 137
hotspot/share/oops/objArrayKlass.hpp: 131
hotspot/share/runtime/thread.inline.hpp: 128
hotspot/share/utilities/bitMap.inline.hpp: 122
hotspot/share/oops/arrayOop.inline.hpp: 122
hotspot/share/gc/shared/gcId.hpp: 122
hotspot/share/runtime/mutexLocker.hpp: 121
hotspot/share/oops/objArrayOop.inline.hpp: 118
hotspot/share/classfile/javaClasses.inline.hpp: 116
hotspot/share/memory/referenceType.hpp: 115
java.base/jni.h: 114
hotspot/share/oops/weakHandle.hpp: 114
hotspot/share/oops/compressedOops.inline.hpp: 114
hotspot/share/gc/shared/barrierSet.hpp: 113
hotspot/share/utilities/stack.hpp: 112
hotspot/share/gc/g1/g1YCTypes.hpp: 111
hotspot/share/memory/padded.hpp: 110
hotspot/share/logging/logHandle.hpp: 110
hotspot/share/runtime/deoptimization.hpp: 109
hotspot/share/gc/shared/workgroup.hpp: 109
hotspot/share/interpreter/interpreter.hpp: 108
hotspot/share/gc/shared/collectorCounters.hpp: 108
hotspot/share/gc/shared/taskqueue.hpp: 107
hotspot/share/services/memoryUsage.hpp: 106
hotspot/share/services/memoryService.hpp: 104
hotspot/share/runtime/vm_operations.hpp: 104
hotspot/share/runtime/arguments.hpp: 102
hotspot/share/oops/typeArrayKlass.hpp: 102
hotspot/share/memory/metaspace.hpp: 102
hotspot/share/classfile/vmSymbols.hpp: 101
hotspot/share/oops/typeArrayOop.hpp: 100
hotspot/share/oops/instanceMirrorKlass.hpp: 100
hotspot/share/services/memoryManager.hpp: 99
hotspot/share/oops/oopHandle.hpp: 99
hotspot/share/gc/shared/oopStorageParState.hpp: 98
hotspot/share/runtime/safepoint.hpp: 97
hotspot/share/gc/shared/gcUtil.hpp: 97
hotspot/share/oops/arrayKlass.hpp: 95
hotspot/share/utilities/stack.inline.hpp: 94
hotspot/share/runtime/sharedRuntime.hpp: 94
hotspot/share/runtime/jniHandles.hpp: 92
hotspot/share/gc/shared/referenceProcessor.hpp: 92
hotspot/share/logging/logLevel.hpp: 91
hotspot/share/runtime/signature.hpp: 90
hotspot/share/logging/logStream.hpp: 90
hotspot/share/oops/symbol.hpp: 89
hotspot/share/oops/instanceRefKlass.hpp: 89
hotspot/share/gc/shared/preservedMarks.hpp: 88
hotspot/share/runtime/prefetch.inline.hpp: 87
hotspot/share/include/jvm.h: 86

Re: Stop using precompiled headers for Linux?

2018-11-01 Thread Magnus Ihse Bursie

On 2018-11-01 11:54, Aleksey Shipilev wrote:

On 11/01/2018 11:43 AM, Magnus Ihse Bursie wrote:

But then again, it might just signal that the list of headers included in the 
PCH is no longer
optimal. If it used to be the case that ~100 header files were so interlinked, 
that changing any of
them caused recompilation of all files that included it and all the other 100 
header files on the
PCH list, then there was a net gain for using PCH and no "punishment".

But nowadays this list might be far too large. Perhaps there's just only a core 
set of say 20 header
files that are universally (or almost universally) included, and that's all 
that should be in the
PCH list then. My guess is that, with a proper selection of header files, PCH 
will still be a benefit.

I agree. This smells like inefficient PCH list. We can improve that, but I 
think that would be a
lower priority, given the abundance of CPU power we use to compile Hotspot. In 
my mind, the decisive
factor for disabling PCH is to keep proper includes at all times, without 
masking it with PCH. Half
of the trivial bugs I submit against hotspot are #include differences that show 
up in CI that builds
without PCH.

So this is my ideal world:
  a) Efficient PCH list enabled by default for development pleasure;
  b) CIs build without PCH all the time (jdk-submit tier1 included!);

Since we don't yet have (a), and (b) seems to be tedious, regardless how many 
times both Red Hat and
SAP people ask for it, disabling PCH by default feels like a good fallback.


Should just CI builds default to non-PCH, or all builds (that is, should 
"configure" default to non-PCH on linux)? Maybe the former is better -- 
one thing that the test numbers here have not shown is whether incremental 
recompiles are improved by PCH. My gut feeling is that they really 
should -- once you've created your PCH, subsequent recompiles will be 
faster. So the developer default should perhaps be to keep PCH, and we 
should only configure the CI builds to do without PCH.


/Magnus




-Aleksey





Re: Stop using precompiled headers for Linux?

2018-11-01 Thread Aleksey Shipilev
On 11/01/2018 11:43 AM, Magnus Ihse Bursie wrote:
> But then again, it might just signal that the list of headers included in the 
> PCH is no longer
> optimal. If it used to be the case that ~100 header files were so 
> interlinked, that changing any of
> them caused recompilation of all files that included it and all the other 100 
> header files on the
> PCH list, then there was a net gain for using PCH and no "punishment".
> 
> But nowadays this list might be far too large. Perhaps there's just only a 
> core set of say 20 header
> files that are universally (or almost universally) included, and that's all 
> that should be in the
> PCH list then. My guess is that, with a proper selection of header files, PCH 
> will still be a benefit.

I agree. This smells like inefficient PCH list. We can improve that, but I 
think that would be a
lower priority, given the abundance of CPU power we use to compile Hotspot. In 
my mind, the decisive
factor for disabling PCH is to keep proper includes at all times, without 
masking it with PCH. Half
of the trivial bugs I submit against hotspot are #include differences that show 
up in CI that builds
without PCH.

So this is my ideal world:
 a) Efficient PCH list enabled by default for development pleasure;
 b) CIs build without PCH all the time (jdk-submit tier1 included!);

Since we don't yet have (a), and (b) seems to be tedious, regardless how many 
times both Red Hat and
SAP people ask for it, disabling PCH by default feels like a good fallback.

-Aleksey



Re: Stop using precompiled headers for Linux?

2018-11-01 Thread Magnus Ihse Bursie

On 2018-10-30 21:17, Aleksey Shipilev wrote:

On 10/30/2018 06:26 PM, Ioi Lam wrote:

Is there any advantage of using precompiled headers on Linux?

I have measured it recently on shenandoah repositories, and fastdebug/release 
build times have not
improved with or without PCH. Actually, it gets worse when you touch a single 
header that is in PCH
list, and you end up recompiling the entire Hotspot. I would be in favor of 
disabling it by default.
Not long ago, the hotspot header files were a mess, so you almost always 
ended up recompiling all of hotspot anyway when you changed a 
header file. If this situation has improved, then certainly it might 
have shifted the balance between gains and losses for PCH.


But then again, it might just signal that the list of headers included 
in the PCH is no longer optimal. If it used to be the case that ~100 
header files were so interlinked, that changing any of them caused 
recompilation of all files that included it and all the other 100 header 
files on the PCH list, then there was a net gain for using PCH and no 
"punishment".


But nowadays this list might be far too large. Perhaps there's just only 
a core set of say 20 header files that are universally (or almost 
universally) included, and that's all that should be in the PCH list 
then. My guess is that, with a proper selection of header files, PCH 
will still be a benefit.


/Magnus





It's on by default and we keep having
breakage where someone would forget to add #include. The latest instance is 
JDK-8213148.

Yes, we catch most of these breakages in CIs. Which tells me adding it to 
jdk-submit would cover
most of the breakage during pre-integration testing.

-Aleksey





Re: Stop using precompiled headers for Linux?

2018-11-01 Thread Magnus Ihse Bursie




On 2018-10-31 00:54, Erik Joelsson wrote:
Below are the corresponding numbers from a Mac, (Mac Pro (Late 2013), 
3.7 GHz, Quad-Core Intel Xeon E5, 16 GB). To be clear, the -npch is 
without precompiled headers. Here we see a slight degradation when 
disabling on both user time and wall clock time. My guess is that the 
user time increase is about the same, but because of a lower cpu 
count, the extra load is not as easily covered.


These tests were run with just building hotspot. This means that the 
precompiled header is generated alone on one core while nothing else 
is happening, which would explain this degradation in build speed. If 
we were instead building the whole product, we would see a better 
correlation between user and real time.


Given the very small benefit here, it could make sense to disable 
precompiled headers by default for Linux and Mac, just as we did with 
ccache.


I do know that the benefit is huge on Windows though, so we cannot 
remove the feature completely. Any other comments?


Well, if you show that it is a loss in time on macosx to disable 
precompiled headers, and no-one (as far as I've seen) has complained 
about PCH on mac, then why not keep them on as default there? That the 
gain is small is no argument to lose it. (I remember a time when you 
were hunting seconds in the build time ;-))


On linux, the story seems different, though. People experience PCH as a 
problem, and there is a net loss of time, at least on selected testing 
machines. It makes sense to turn it off as default, then.


/Magnus



/Erik

macosx-x64
real     4m13.658s
user     27m17.595s
sys     2m11.306s

macosx-x64-npch
real     4m27.823s
user     30m0.434s
sys     2m18.669s

macosx-x64-debug
real     5m21.032s
user     35m57.347s
sys     2m20.588s

macosx-x64-debug-npch
real     5m33.728s
user     38m10.311s
sys     2m27.587s

macosx-x64-slowdebug
real     3m54.439s
user     25m32.197s
sys     2m8.750s

macosx-x64-slowdebug-npch
real     4m11.987s
user     27m59.857s
sys     2m18.093s


On 2018-10-30 14:00, Erik Joelsson wrote:

Hello,

On 2018-10-30 13:17, Aleksey Shipilev wrote:

On 10/30/2018 06:26 PM, Ioi Lam wrote:

Is there any advantage of using precompiled headers on Linux?
I have measured it recently on shenandoah repositories, and 
fastdebug/release build times have not
improved with or without PCH. Actually, it gets worse when you touch 
a single header that is in PCH
list, and you end up recompiling the entire Hotspot. I would be in 
favor of disabling it by default.
I just did a measurement on my local workstation (2x8 cores x2 ht 
Ubuntu 18.04 using Oracle devkit GCC 7.3.0). I ran "time make 
hotspot" with clean build directories.


linux-x64:
real    4m6.657s
user    61m23.090s
sys    6m24.477s

linux-x64-npch
real    3m41.130s
user    66m11.824s
sys    4m19.224s

linux-x64-debug
real    4m47.117s
user    75m53.740s
sys    8m21.408s

linux-x64-debug-npch
real    4m42.877s
user    84m30.764s
sys    4m54.666s

linux-x64-slowdebug
real    3m54.564s
user    44m2.828s
sys    6m22.785s

linux-x64-slowdebug-npch
real    3m23.092s
user    55m3.142s
sys    4m10.172s

These numbers support your claim. Wall clock time is actually 
increased with PCH enabled, but total user time is decreased. Does 
not seem worth it to me.

It's on by default and we keep having
breakage where someone would forget to add #include. The latest 
instance is JDK-8213148.
Yes, we catch most of these breakages in CIs. Which tells me adding 
it to jdk-submit would cover

most of the breakage during pre-integration testing.
jdk-submit is currently running what we call "tier1". We do have 
builds of Linux slowdebug with precompiled headers disabled in tier2. 
We also build solaris-sparcv9 in tier1 which does not support 
precompiled headers at all, so to not be caught in jdk-submit you 
would have to be in Linux specific code. The example bug does not 
seem to be that. Mach5/jdk-submit was down over the weekend and 
yesterday so my suspicion is the offending code in this case was 
never tested.


That said, given that we get practically no benefit from PCH on 
Linux/GCC, we should probably just turn it off by default for Linux 
and/or GCC. I think we need to investigate Macos as well here.


/Erik

-Aleksey









Re: minimal gcc version for jdk/jdk and usage of -Wno-int-in-bool-context flag in HS build

2018-11-01 Thread Magnus Ihse Bursie




On 2018-10-31 17:17, Baesken, Matthias wrote:

Hi Aleksey, I tried with gcc 6.2, not 6.3; maybe this makes a difference.

Unfortunately I cannot see your exact gcc/g++ compile calls from the logfile.
Do you have "-Wall" set, which seems to be necessary to enable the 
additional flag?

https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html

-Wint-in-bool-context
Warn for suspicious use of integer values where boolean values are expected, such 
as conditional expressions (?:) using non-boolean integer constants in boolean 
context, like if (a <= b ? 2 : 3).
Or left shifting of signed integers in boolean context, like for (a = 0; 1 << 
a; a++);. Likewise for all kinds of multiplications regardless of the data type.
This warning is enabled by -Wall.


Since JDK-8211029, we have -Wall enabled for hotspot on gcc.

I can only reiterate what Erik says. Gcc, since a long time ago, 
silently ignores -Wno-XXX for XXX that it does not support or 
understand. *However*, if *another* failure occurs, then gcc prints out 
warnings for unknown -Wno-XXX flags as well. So those are just red 
herrings. If you see them, you will also see another error, and that is 
your real problem.
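
A tiny standalone demo of that behavior (not JDK build logic; gcc is assumed 
to be on the path, the flag name is obviously made up, and recipe lines use 
tabs):

# Sketch only: shows gcc silently accepting an unknown -Wno-* flag, and
# reporting it once another diagnostic fires.
CC := gcc

quiet:
	echo 'int main(void) { return 0; }' > clean.c
	$(CC) -Wno-this-flag-does-not-exist -c clean.c    # no complaint at all

noisy:
	echo 'int main(void) { int unused; return 0; }' > warn.c
	$(CC) -Wall -Wno-this-flag-does-not-exist -c warn.c    # unused-variable warning plus a note about the unknown flag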


/Magnus




The configure flag --with-extra-cflags is not used by us.

Best regards, Matthias




-Original Message-
From: Aleksey Shipilev 
Sent: Mittwoch, 31. Oktober 2018 16:49
To: Baesken, Matthias ; 'build-
d...@openjdk.java.net' 
Subject: Re: minimal gcc version for jdk/jdk and usage of -Wno-int-in-bool-
context flag in HS build

On 10/31/2018 04:40 PM, Baesken, Matthias wrote:
https://hg.openjdk.java.net/jdk/jdk/file/896e80158d35/make/hotspot/lib/C
ompileJvm.gmk

DISABLED_WARNINGS_gcc := extra parentheses comment unknown-

pragmas address \

 delete-non-virtual-dtor char-subscripts array-bounds int-in-bool-context

\

 ignored-qualifiers  missing-field-initializers implicit-fallthrough \
 empty-body strict-overflow sequence-point maybe-uninitialized \
 misleading-indentation


However int-in-bool-context (-Wno-int-in-bool-context) seems to be 
available only in gcc 7+.

Example call with gcc 6 leads to a warning:

Current jdk/jdk build works fine with 6.3.0 for me:

https://builds.shipilev.net/openjdk-jdk/openjdk-jdk-b441-20181029-jdk-
12+17-linux-x86_64-fastdebug.configure.log

https://builds.shipilev.net/openjdk-jdk/openjdk-jdk-b441-20181029-jdk-
12+17-linux-x86_64-fastdebug.build.log

Is this an indication that the build system actually probes the warnings before
trying warning options?

The troubles start when you supply additional options, apparently:
   https://bugs.openjdk.java.net/browse/JDK-8211088

-Aleksey




Re: RFR: JDK-8213237 Remove test-compile-commands from jib-profiles.js

2018-11-01 Thread Magnus Ihse Bursie

On 2018-11-01 11:26, Lance Andersen wrote:

+1

Thanks, Lance!

/Magnus

On Nov 1, 2018, at 6:06 AM, Magnus Ihse Bursie 
> wrote:


In JDK-8210958, a broken version of jib-profiles.js was unfortunately 
checked in.


The change at 
http://cr.openjdk.java.net/~ihse/JDK-8210958-rename-run-test-to-test/webrev.02/make/conf/jib-profiles.js.udiff.html, 
as reviewed, was unfortunately reverted in the process.


Without this fix, the *-testmake profiles fail when run.

Bug: https://bugs.openjdk.java.net/browse/JDK-8213237
Patch inline:
diff --git a/make/conf/jib-profiles.js b/make/conf/jib-profiles.js
--- a/make/conf/jib-profiles.js
+++ b/make/conf/jib-profiles.js
@@ -485,7 +485,7 @@
 .forEach(function (name) {
 var maketestName = name + "-testmake";
 profiles[maketestName] = concatObjects(profiles[name], 
testmakeBase);
-    profiles[maketestName].default_make_targets = [ 
"test-make", "test-compile-commands" ];
+    profiles[maketestName].default_make_targets = [ 
"test-make" ];

 });

 // Profiles for building the zero jvm variant. These are used 
for verification.


/Magnus
(Sorry for the mess :-()




Lance Andersen | Principal Member of Technical Staff | +1.781.442.2037

Oracle Java Engineering
1 Network Drive
Burlington, MA 01803
lance.ander...@oracle.com 







Re: RFR: JDK-8213237 Remove test-compile-commands from jib-profiles.js

2018-11-01 Thread Lance Andersen
+1
> On Nov 1, 2018, at 6:06 AM, Magnus Ihse Bursie 
>  wrote:
> 
> In JDK-8210958, a broken version of jib-profiles.js was unfortunately checked 
> in.
> 
> The change at 
> http://cr.openjdk.java.net/~ihse/JDK-8210958-rename-run-test-to-test/webrev.02/make/conf/jib-profiles.js.udiff.html,
>  as reviewed, was unfortunately reverted in the process.
> 
> Without this fix, the *-testmake profiles fail when run.
> 
> Bug: https://bugs.openjdk.java.net/browse/JDK-8213237
> Patch inline:
> diff --git a/make/conf/jib-profiles.js b/make/conf/jib-profiles.js
> --- a/make/conf/jib-profiles.js
> +++ b/make/conf/jib-profiles.js
> @@ -485,7 +485,7 @@
>  .forEach(function (name) {
>  var maketestName = name + "-testmake";
>  profiles[maketestName] = concatObjects(profiles[name], 
> testmakeBase);
> -profiles[maketestName].default_make_targets = [ "test-make", 
> "test-compile-commands" ];
> +profiles[maketestName].default_make_targets = [ "test-make" ];
>  });
> 
>  // Profiles for building the zero jvm variant. These are used for 
> verification.
> 
> /Magnus
> (Sorry for the mess :-()

 
  

Lance Andersen | Principal Member of Technical Staff | +1.781.442.2037
Oracle Java Engineering 
1 Network Drive 
Burlington, MA 01803
lance.ander...@oracle.com 





RFR: JDK-8213237 Remove test-compile-commands from jib-profiles.js

2018-11-01 Thread Magnus Ihse Bursie
In JDK-8210958, a broken version of jib-profiles.js was unfortunately 
checked in.


The change at 
http://cr.openjdk.java.net/~ihse/JDK-8210958-rename-run-test-to-test/webrev.02/make/conf/jib-profiles.js.udiff.html, 
as reviewed, was unfortunately reverted in the process.


Without this fix, the *-testmake profiles fail when run.

Bug: https://bugs.openjdk.java.net/browse/JDK-8213237
Patch inline:
diff --git a/make/conf/jib-profiles.js b/make/conf/jib-profiles.js
--- a/make/conf/jib-profiles.js
+++ b/make/conf/jib-profiles.js
@@ -485,7 +485,7 @@
 .forEach(function (name) {
 var maketestName = name + "-testmake";
 profiles[maketestName] = concatObjects(profiles[name], 
testmakeBase);
-    profiles[maketestName].default_make_targets = [ 
"test-make", "test-compile-commands" ];

+    profiles[maketestName].default_make_targets = [ "test-make" ];
 });

 // Profiles for building the zero jvm variant. These are used for 
verification.


/Magnus
(Sorry for the mess :-()