Re: jdk10 bootcycle build linux failing

2018-03-05 Thread Michal Vala

Hi Erik,

I've also played with --with-num-cores. It seems to have an effect only on the 
first build, but no effect on the bootcycle build (same as --with-memory-size). Is 
this behavior intended, or is it a bug? Or am I doing something wrong?


I haven't tried "make JOBS=x" yet. I guess it may solve the issue, but I would 
expect that --with-num-cores should cover this.


Thanks

On 03/01/2018 10:59 PM, Erik Joelsson wrote:

Hello Michal,

The --with-memory-size option is only used to help configure pick a more 
reasonable concurrency level for make. If your machine still gets overwhelmed, I 
would recommend manually turning down concurrency until you get it to work, 
using "make JOBS=X". You can check the end of the configure output for how many 
concurrent jobs configure set for you and work down from there.
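For example (the numbers here are only illustrative), something like:

  # run the failing target with a fixed, lower concurrency
  make JOBS=4 bootcycle-images

should do it, and if I remember correctly there is also a --with-jobs=N 
configure option if you want to bake a lower level into the configuration.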


/Erik


On 2018-03-01 03:36, Michal Vala wrote:

Hi,

I have an issue with the bootcycle build. The first run with an external boot JDK is ok, 
but the bootcycle phase always fails. I'm building on Fedora 27 (local) and 
RHEL 7 (VM). In the local case, the build consumes too much memory and kills my OS. 
I've tried to set `--with-memory-size`, but it seems this value is only honored 
for the first build, not for the bootcycle build. I'm building with:


$ bash ./configure --disable-warnings-as-errors 
--with-boot-jdk=/home/tester/jdk-9.0.4

$ make bootcycle-images

I suspect that the build eats too much memory and the OS kills some process, but I 
lack the knowledge to dig deeper here. Is anyone else seeing a similar issue?
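If it really is the kernel OOM killer (just a generic check, the exact wording 
varies between kernels), something like this right after the failure should 
show it:

  # look for OOM killer traces in the kernel log
  dmesg | grep -i -E 'out of memory|killed process'

If ld/collect2 shows up there, the linker was killed for running out of memory.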


The end of the build log is inlined here. I can provide more, but I was not able to find 
any other useful log output:


collect2: error: ld terminated with signal 9 [Killed]
gmake[5]: *** 
[/home/tester/openjdk/build/linux-x86_64-normal-server-release/bootcycle-build/hotspot/variant-server/libjvm/gtest/libjvm.so] 
Error 1
gmake[5]: *** Deleting file 
`/home/tester/openjdk/build/linux-x86_64-normal-server-release/bootcycle-build/hotspot/variant-server/libjvm/gtest/libjvm.so' 


gmake[5]: *** Waiting for unfinished jobs
gmake[4]: *** [hotspot-server-libs] Error 1

ERROR: Build failed for target 'product-images' in configuration 
'linux-x86_64-normal-server-release' (exit code 2)

gmake[4]: warning: -jN forced in submake: disabling jobserver mode.
Stopping sjavac server

=== Output from failing command(s) repeated here ===
* For target hotspot_variant-server_libjvm_gtest_objs_BUILD_GTEST_LIBJVM_link:
collect2: error: ld terminated with signal 9 [Killed]

* All command lines available in 
/home/tester/openjdk/build/linux-x86_64-normal-server-release/bootcycle-build/make-support/failure-logs. 


=== End of repeated output ===

No indication of failed target found.
Hint: Try searching the build log for '] Error'.
Hint: See doc/building.html#troubleshooting for assistance.

gmake[3]: *** [main] Error 1
gmake[2]: *** [bootcycle-images] Error 2

ERROR: Build failed for target 'bootcycle-images' in configuration 
'linux-x86_64-normal-server-release' (exit code 2)


No indication of failed target found.
Hint: Try searching the build log for '] Error'.
Hint: See doc/building.html#troubleshooting for assistance.

make[1]: *** [main] Error 1
make: *** [bootcycle-images] Error 2





--
Michal Vala
OpenJDK QE
Red Hat Czech


Re: RFR: JDK-8198243: Add build time check for global operator new/delete in object files

2018-03-05 Thread Erik Joelsson

Hello,

On 2018-03-04 15:22, David Holmes wrote:

Hi Erik,

On 3/03/2018 7:18 AM, Erik Joelsson wrote:

Hello,

Here is a new version of this patch, reworked in several ways. It now 
supports gcc, clang and solstudio. It uses nm instead of objdump, 
which is more readily available in all our current build 
environments. The check now uses mangled symbol names for all 
toolchain types which makes things considerably faster. I also added 
a check for only finding symbols classified as undefined. Otherwise 
we get false positives in operator_new.o in debug builds.
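
As a rough sketch of the idea (not the exact code in the webrev, and the object 
file path is made up), the check boils down to something like:

  # list undefined symbols in each object file and flag the Itanium ABI
  # mangled names of the global operators:
  #   _Znwm / _Znam   = operator new / operator new[]
  #   _ZdlPv / _ZdaPv = operator delete / operator delete[]
  nm hotspot/variant-server/libjvm/objs/*.o | grep ' U ' \
      | grep -E '_Znwm|_Znam|_ZdlPv|_ZdaPv'

Any hit means an object file still references the global operators.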


Sounds great! Thanks for doing the additional work on this.

I left the objdump addition to the devkit in there because it's a 
good thing to have anyway for compare.sh.


Webrev: http://cr.openjdk.java.net/~erikj/8198243/webrev.02/


In toolkit.m4:

+   case $TOOLCHAIN_TYPE in
+     gcc|clang|solstudio)
        BASIC_CHECK_TOOLS(OBJDUMP, [gobjdump objdump])
        if test "x$OBJDUMP" != x; then
          # Only used for compare.sh; we can live without it. BASIC_FIXUP_EXECUTABLE
          # bails if argument is missing.
          BASIC_FIXUP_EXECUTABLE(OBJDUMP)
        fi
+       BASIC_FIXUP_EXECUTABLE(OBJDUMP)

Isn't the if-clause redundant now that you unconditionally call 
BASIC_FIXUP_EXECUTABLE(OBJDUMP)?


Yes, I seem to have accidentally left the second line there. My 
intention was to restore the conditional now that objdump is no longer 
used for this new check.


Here is a new webrev with the second fixup call removed:

http://cr.openjdk.java.net/~erikj/8198243/webrev.03/

/Erik

Thanks,
David

I have run this patch against the current jdk/jdk and jdk/hs in Mach5. In 
jdk/jdk the build fails as expected on Solaris, Linux and Mac, and in 
jdk/hs, where the fixes we are detecting are present, all builds 
succeed.


/Erik


On 2018-02-23 05:27, Magnus Ihse Bursie wrote:

On 2018-02-22 20:41, Erik Joelsson wrote:



On 2018-02-21 21:06, David Holmes wrote:

On 22/02/2018 4:07 AM, Erik Joelsson wrote:

Hello,


On 2018-02-20 21:33, David Holmes wrote:


a) how much time it adds to the build?

I have not done extensive testing, but on my Linux workstation 
with 32 hw threads, building just the hotspot release build from a 
clean workspace took maybe 1 or 2 seconds longer (at around 90s 
total), but the variance was around the same amount as that.

b) why this doesn't work for Solaris Studio?

I didn't put a lot of effort into trying to figure it out. The 
check used was provided by Kim Barrett, for Linux only. I figured 
it would be simple enough to get it to work on mac and succeeded 
there. It should certainly be possible to implement a similar 
check on Solaris, but is it worth the time to do it? Both 
development time and increased build time on one of the slower 
build platforms?


Depends how concerned we are with detecting this problem in OS 
specific source code?


I investigated this some more. I was able to do it successfully, 
but the build time cost is way too large here. The culprit is 
c++filt on Solaris which is incredibly costly, and the gnu version 
does not demangle Solaris Studio symbols. I don't think we should 
do this on Solaris.
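
For the gnu toolchain the demangling itself is cheap, e.g. (illustrative only):

  echo _Znwm | c++filt    # prints: operator new(unsigned long)

but c++filt only understands that mangling scheme, not the Solaris Studio one.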

I agree, it's not worth it.

Not all programmer mistakes are reasonable to catch in technical 
traps. If we *should* spend time on getting automatic tooling for 
keeping code quality up (and, yes, I really do think we should), 
there are most likely much better areas to spend that effort in 
than in making a lot of prone-to-break scripts for catching a single kind 
of problem.


/Magnus



We could grep for the mangled strings for the operators instead, 
which is super fast. Problem is just figuring out all the possible 
combinations.


/Erik

To be honest I'm not sure this pulls its own weight regardless.

David


/Erik

Thanks,
David

On 21/02/2018 4:05 AM, Erik Joelsson wrote:

Hello,

This patch adds a build time check for uses of global operators 
new and delete in hotspot C++ code. The check is only run with 
toolchains GCC and Clang (Linux and Macos builds). I have also 
modified the Oracle devkit on Linux to add the necessary 
symlink so that objdump will get picked up by configure.


This change is depending on several fixes removing such uses 
that are currently in jdk/hs so this change will need to be 
pushed there as well.


Bug: https://bugs.openjdk.java.net/browse/JDK-8198243

Webrev: http://cr.openjdk.java.net/~erikj/8198243/webrev.01/

/Erik













Re: RFR: JDK-8198243: Add build time check for global operator new/delete in object files

2018-03-05 Thread Tim Bell

Erik:

make/devkit/Tools.gmk

Lines 565, 566 - I see you removed dtrace from this list as that is 
dealt with up in lines 556 ... 558.  Makes sense to me.


Looks good.

Tim



On 2018-03-04 15:22, David Holmes wrote:

Hi Erik,

On 3/03/2018 7:18 AM, Erik Joelsson wrote:

Hello,

Here is a new version of this patch, reworked in several ways. It now
supports gcc, clang and solstudio. It uses nm instead of objdump,
which is more readily available in all our current build
environments. The check now uses mangled symbol names for all
toolchain types which makes things considerably faster. I also added
a check for only finding symbols classified as undefined. Otherwise
we get false positives in operator_new.o in debug builds.


Sounds great! Thanks for doing the additional work on this.


I left the objdump addition to the devkit in there because it's a
good thing to have anyway for compare.sh.

Webrev: http://cr.openjdk.java.net/~erikj/8198243/webrev.02/


In toolkit.m4:

+   case $TOOLCHAIN_TYPE in
+     gcc|clang|solstudio)
        BASIC_CHECK_TOOLS(OBJDUMP, [gobjdump objdump])
        if test "x$OBJDUMP" != x; then
          # Only used for compare.sh; we can live without it. BASIC_FIXUP_EXECUTABLE
          # bails if argument is missing.
          BASIC_FIXUP_EXECUTABLE(OBJDUMP)
        fi
+       BASIC_FIXUP_EXECUTABLE(OBJDUMP)

Isn't the if-clause redundant now that you unconditionally call
BASIC_FIXUP_EXECUTABLE(OBJDUMP)?


Yes, I seem to have accidentally left the second line there. My
intention was to restore the conditional now that objdump is no longer
used for this new check.

Here is a new webrev with the second fixup call removed:

http://cr.openjdk.java.net/~erikj/8198243/webrev.03/

/Erik

Thanks,
David


I have run this patch against current jdk/jdk and jdk/hs in Mach5. In
jdk/jdk the build fails as expected on Solaris, Linux and Mac and in
jdk/hs, where the fixes we are detecting are present, all builds
succeeds.

/Erik


On 2018-02-23 05:27, Magnus Ihse Bursie wrote:

On 2018-02-22 20:41, Erik Joelsson wrote:



On 2018-02-21 21:06, David Holmes wrote:

On 22/02/2018 4:07 AM, Erik Joelsson wrote:

Hello,


On 2018-02-20 21:33, David Holmes wrote:


a) how much time it adds to the build?


I have not done extensive testing, but on my Linux workstation
with 32 hw threads, building just hotspot release build from a
clean workspace increased maybe 1 or 2 seconds (at around 90s
total), but the variance was around the same amount as that.

b) why this doesn't work for Solaris Studio?


I didn't put a lot of effort into trying to figure it out. The
check used was provided by Kim Barrett, for Linux only. I figured
it would be simple enough to get it to work on mac and succeeded
there. It should certainly be possible to implement a similar
check on Solaris, but is it worth the time to do it? Both
development time and increased build time on one of the slower
build platforms?


Depends how concerned we are with detecting this problem in OS
specific source code?


I investigated this some more. I was able to do it successfully,
but the build time cost is way too large here. The culprit is
c++filt on Solaris which is incredibly costly, and the gnu version
does not demangle Solaris Studio symbols. I don't think we should
do this on Solaris.

I agree, it's not worth it.

Not all programmer's mistakes are reasonable to catch in technical
traps. It we *should* spend time on getting automatic tool for
keeping code quality up (and, yes, I really do think we should),
there's most likely to be much better areas to spend that effort in,
in making a lot of prone-to-break scripts for catching a single kind
of problem.

/Magnus



We could grep for the mangled strings for the operators instead,
which is super fast. Problem is just figuring out all the possible
combinations.

/Erik

To be honest I'm not sure this pulls its own weight regardless.

David


/Erik

Thanks,
David

On 21/02/2018 4:05 AM, Erik Joelsson wrote:

Hello,

This patch adds a build time check for uses of global operators
new and delete in hotspot C++ code. The check is only run with
toolchains GCC and Clang (Linux and Macos builds). I have also
modified the Oracle devkit on Linux to add the necessary
symlink so that objdump will get picked up by configure.

This change is depending on several fixes removing such uses
that are currently in jdk/hs so this change will need to be
pushed there as well.

Bug: https://bugs.openjdk.java.net/browse/JDK-8198243

Webrev: http://cr.openjdk.java.net/~erikj/8198243/webrev.01/

/Erik





Re: RFR: JDK-8198243: Add build time check for global operator new/delete in object files

2018-03-05 Thread Kim Barrett
> On Mar 5, 2018, at 11:41 AM, Erik Joelsson  wrote:
> 
> Hello,
> 
> On 2018-03-04 15:22, David Holmes wrote:
>> Hi Erik,
>> 
>> On 3/03/2018 7:18 AM, Erik Joelsson wrote:
>>> Hello,
>>> 
>>> Here is a new version of this patch, reworked in several ways. It now 
>>> supports gcc, clang and solstudio. It uses nm instead of objdump, which is 
>>> more readily available in all our current build environments. The check now 
>>> uses mangled symbol names for all toolchain types which makes things 
>>> considerably faster. I also added a check for only finding symbols 
>>> classified as undefined. Otherwise we get false positives in operator_new.o 
>>> in debug builds.
>> 
>> Sounds great! Thanks for doing the additional work on this.
>> 
>>> I left the objdump addition to the devkit in there because it's a good 
>>> thing to have anyway for compare.sh.
>>> 
>>> Webrev: http://cr.openjdk.java.net/~erikj/8198243/webrev.02/
>> 
>> In toolkit.m4:
>> 
>> +   case $TOOLCHAIN_TYPE in
>> +     gcc|clang|solstudio)
>>         BASIC_CHECK_TOOLS(OBJDUMP, [gobjdump objdump])
>>         if test "x$OBJDUMP" != x; then
>>           # Only used for compare.sh; we can live without it. BASIC_FIXUP_EXECUTABLE
>>           # bails if argument is missing.
>>           BASIC_FIXUP_EXECUTABLE(OBJDUMP)
>>         fi
>> +       BASIC_FIXUP_EXECUTABLE(OBJDUMP)
>> 
>> Isn't the if-clause redundant now that you unconditionally call 
>> BASIC_FIXUP_EXECUTABLE(OBJDUMP)?
>> 
> Yes, I seem to have accidentally left the second line there. My intention was 
> to restore the conditional now that objdump is no longer used for this new 
> check.
> 
> Here is a new webrev with the second fixup call removed:
> 
> http://cr.openjdk.java.net/~erikj/8198243/webrev.03/

Old code in toolkit.m4 did the OBJDUMP stuff unconditionally.  New code makes 
it conditional on TOOLCHAIN_TYPE.
That doesn’t seem right.  compare.sh seems to want OBJDUMP for aix too, and 
that’s not in the toolchain list here.

> 
> /Erik
>> Thanks,
>> David
>> 
>>> I have run this patch against current jdk/jdk and jdk/hs in Mach5. In 
>>> jdk/jdk the build fails as expected on Solaris, Linux and Mac and in 
>>> jdk/hs, where the fixes we are detecting are present, all builds succeeds.
>>> 
>>> /Erik
>>> 
>>> 
>>> On 2018-02-23 05:27, Magnus Ihse Bursie wrote:
 On 2018-02-22 20:41, Erik Joelsson wrote:
> 
> 
> On 2018-02-21 21:06, David Holmes wrote:
>> On 22/02/2018 4:07 AM, Erik Joelsson wrote:
>>> Hello,
>>> 
>>> 
>>> On 2018-02-20 21:33, David Holmes wrote:
 
 a) how much time it adds to the build?
 
>>> I have not done extensive testing, but on my Linux workstation with 32 
>>> hw threads, building just hotspot release build from a clean workspace 
>>> increased maybe 1 or 2 seconds (at around 90s total), but the variance 
>>> was around the same amount as that.
 b) why this doesn't work for Solaris Studio?
 
>>> I didn't put a lot of effort into trying to figure it out. The check 
>>> used was provided by Kim Barrett, for Linux only. I figured it would be 
>>> simple enough to get it to work on mac and succeeded there. It should 
>>> certainly be possible to implement a similar check on Solaris, but is 
>>> it worth the time to do it? Both development time and increased build 
>>> time on one of the slower build platforms?
>> 
>> Depends how concerned we are with detecting this problem in OS specific 
>> source code?
>> 
> I investigated this some more. I was able to do it successfully, but the 
> build time cost is way too large here. The culprit is c++filt on Solaris 
> which is incredibly costly, and the gnu version does not demangle Solaris 
> Studio symbols. I don't think we should do this on Solaris.
 I agree, it's not worth it.
 
 Not all programmer's mistakes are reasonable to catch in technical traps. 
 It we *should* spend time on getting automatic tool for keeping code 
 quality up (and, yes, I really do think we should), there's most likely to 
 be much better areas to spend that effort in, in making a lot of 
 prone-to-break scripts for catching a single kind of problem.
 
 /Magnus
 
> 
> We could grep for the mangled strings for the operators instead, which is 
> super fast. Problem is just figuring out all the possible combinations.
> 
> /Erik
>> To be honest I'm not sure this pulls its own weight regardless.
>> 
>> David
>> 
>>> /Erik
 Thanks,
 David
 
 On 21/02/2018 4:05 AM, Erik Joelsson wrote:
> Hello,
> 
> This patch adds a build time check for uses of global operators new 
> and delete in hotspot C++ code. The check is only run with toolchains 
> GCC and Clang (Linux and Macos builds). I have also modified the 
> Oracle devkit on Linux to add the necessary symlink so that objdump will get 
> picked up by configure.

Re: RFR: JDK-8198243: Add build time check for global operator new/delete in object files

2018-03-05 Thread Erik Joelsson


On 2018-03-05 11:46, Kim Barrett wrote:
Old code in toolkit.m4 did the OBJDUMP stuff unconditionally. New code 
makes it conditional on TOOLCHAIN_TYPE.

That doesn’t seem right.  compare.sh seems to want OBJDUMP for aix too, and 
that’s not in the toolchain list here.
That is a good point. I forgot about aix and was thinking more along the 
lines of "not windows". I will restore the OBJDUMP code completely.


/Erik

/Erik

Thanks,
David


I have run this patch against current jdk/jdk and jdk/hs in Mach5. In jdk/jdk 
the build fails as expected on Solaris, Linux and Mac and in jdk/hs, where the 
fixes we are detecting are present, all builds succeeds.

/Erik


On 2018-02-23 05:27, Magnus Ihse Bursie wrote:

On 2018-02-22 20:41, Erik Joelsson wrote:


On 2018-02-21 21:06, David Holmes wrote:

On 22/02/2018 4:07 AM, Erik Joelsson wrote:

Hello,


On 2018-02-20 21:33, David Holmes wrote:

a) how much time it adds to the build?


I have not done extensive testing, but on my Linux workstation with 32 hw 
threads, building just hotspot release build from a clean workspace increased 
maybe 1 or 2 seconds (at around 90s total), but the variance was around the 
same amount as that.

b) why this doesn't work for Solaris Studio?


I didn't put a lot of effort into trying to figure it out. The check used was 
provided by Kim Barrett, for Linux only. I figured it would be simple enough to 
get it to work on mac and succeeded there. It should certainly be possible to 
implement a similar check on Solaris, but is it worth the time to do it? Both 
development time and increased build time on one of the slower build platforms?

Depends how concerned we are with detecting this problem in OS specific source 
code?


I investigated this some more. I was able to do it successfully, but the build 
time cost is way too large here. The culprit is c++filt on Solaris which is 
incredibly costly, and the gnu version does not demangle Solaris Studio 
symbols. I don't think we should do this on Solaris.

I agree, it's not worth it.

Not all programmer's mistakes are reasonable to catch in technical traps. It we 
*should* spend time on getting automatic tool for keeping code quality up (and, 
yes, I really do think we should), there's most likely to be much better areas 
to spend that effort in, in making a lot of prone-to-break scripts for catching 
a single kind of problem.

/Magnus


We could grep for the mangled strings for the operators instead, which is super 
fast. Problem is just figuring out all the possible combinations.

/Erik

To be honest I'm not sure this pulls its own weight regardless.

David


/Erik

Thanks,
David

On 21/02/2018 4:05 AM, Erik Joelsson wrote:

Hello,

This patch adds a build time check for uses of global operators new and delete 
in hotspot C++ code. The check is only run with toolchains GCC and Clang (Linux 
and Macos builds). I have also modified the Oracle devkit on Linux to add the 
necessary symlink so that objdump will get picked up by configure.

This change is depending on several fixes removing such uses that are currently 
in jdk/hs so this change will need to be pushed there as well.

Bug: https://bugs.openjdk.java.net/browse/JDK-8198243

Webrev: http://cr.openjdk.java.net/~erikj/8198243/webrev.01/

/Erik






Re: RFR: JDK-8198243: Add build time check for global operator new/delete in object files

2018-03-05 Thread Erik Joelsson

New webrev: http://cr.openjdk.java.net/~erikj/8198243/webrev.04

/Erik


On 2018-03-05 12:03, Erik Joelsson wrote:


On 2018-03-05 11:46, Kim Barrett wrote:
Old code in toolkit.m4 did the OBJDUMP stuff unconditionally. New 
code makes it conditional on TOOLCHAIN_TYPE.
That doesn’t seem right.  compare.sh seems to want OBJDUMP for aix 
too, and that’s not in the toolchain list here.
That is a good point. I forgot about aix and was thinking more in the 
lines of not windows. Will restore the OBJDUMP code completely.


/Erik




Re: RFR: JDK-8198243: Add build time check for global operator new/delete in object files

2018-03-05 Thread Tim Bell

Erik:

Looks good.

/Tim


New webrev: http://cr.openjdk.java.net/~erikj/8198243/webrev.04

/Erik


On 2018-03-05 12:03, Erik Joelsson wrote:


On 2018-03-05 11:46, Kim Barrett wrote:

Old code in toolkit.m4 did the OBJDUMP stuff unconditionally. New
code makes it conditional on TOOLCHAIN_TYPE.
That doesn’t seem right.  compare.sh seems to want OBJDUMP for aix
too, and that’s not in the toolchain list here.

That is a good point. I forgot about aix and was thinking more in the
lines of not windows. Will restore the OBJDUMP code completely.

/Erik






Re: RFR: JDK-8198243: Add build time check for global operator new/delete in object files

2018-03-05 Thread Kim Barrett
> On Mar 5, 2018, at 3:06 PM, Erik Joelsson  wrote:
> 
> New webrev: http://cr.openjdk.java.net/~erikj/8198243/webrev.04

Looks good.

> 
> /Erik
> 
> 
> On 2018-03-05 12:03, Erik Joelsson wrote:
>> 
>> On 2018-03-05 11:46, Kim Barrett wrote:
>>> Old code in toolkit.m4 did the OBJDUMP stuff unconditionally. New code 
>>> makes it conditional on TOOLCHAIN_TYPE.
>>> That doesn’t seem right.  compare.sh seems to want OBJDUMP for aix too, and 
>>> that’s not in the toolchain list here.
>> That is a good point. I forgot about aix and was thinking more in the lines 
>> of not windows. Will restore the OBJDUMP code completely.
>> 
>> /Erik




Re: RFR: JDK-8198243: Add build time check for global operator new/delete in object files

2018-03-05 Thread David Holmes

On 6/03/2018 6:06 AM, Erik Joelsson wrote:

New webrev: http://cr.openjdk.java.net/~erikj/8198243/webrev.04


Looks good.

Thanks,
David


/Erik


On 2018-03-05 12:03, Erik Joelsson wrote:


On 2018-03-05 11:46, Kim Barrett wrote:
Old code in toolkit.m4 did the OBJDUMP stuff unconditionally. New 
code makes it conditional on TOOLCHAIN_TYPE.
That doesn’t seem right.  compare.sh seems to want OBJDUMP for aix 
too, and that’s not in the toolchain list here.
That is a good point. I forgot about aix and was thinking more in the 
lines of not windows. Will restore the OBJDUMP code completely.


/Erik




Re: [OpenJDK 2D-Dev] RFR: JDK-8198844 Clean up GensrcX11Wrappers

2018-03-05 Thread Phil Race

>I'm not sure what role the "verification" step we had before ever played.
>For all the years we've been "verifying" this, we've detected no differences.


I think this is useful in the event that you make some changes and
regenerate the 64 bit sizes but not 32 bit.

For example this old bug similarly reported a breakage on Solaris ..
https://bugs.openjdk.java.net/browse/JDK-6804680

... back in the day when we only had that checked in, and the ones used for Linux 
were being generated on the fly. My reading of the history here is that sizes.32 
and sizes.64 were added to support cross-compilation, and the verification step 
meant that all architectures would get checked in some build or other.

We care less about 32 bit now .. and even the cross-compilation .. so arguably 
we could go back to always generating the files, as was the case for Linux before

https://bugs.openjdk.java.net/browse/JDK-8004151

But if we say we still want to keep around the possibility of 32 bit support AND 
cross-compilation, then the verification step still seems useful.

If we say we care only about 32 bit or do not care about cross compilation, then 
why not get rid of the checked-in file and calculate these on every build?

The clean up of removing the solaris-specific handling seems like it could have 
been done a long time ago .. I am not sure why it was ever only this one case. 
I'd have to dig back a very long way.

I do agree we do not need to support 32 + 64 bit concurrently, that went
away when 64 bit solaris overlay on 32 bit was dropped.

-phil.

On 03/02/2018 07:23 AM, Erik Joelsson wrote:
Adding 2d-dev in the hopes of getting some input from the component team. 
It seems like they should be aware of us removing the support for 
multiple data models.


Looks like you left a debug message at line 40 in GensrcX11Wrappers.gmk.

/Erik

On 2018-03-02 03:00, Magnus Ihse Bursie wrote:

On 2018-03-02 00:02, Erik Joelsson wrote:

Hello,

In xlibtypes.txt comment, should it be sizes-64.txt?


Yes, good catch.



Generating both 32 and 64 seems a bit outdated at this point. Surely 
this is a remnant of bundling 64 and 32 bit together on Solaris in 
the past? Perhaps someone in 2d can answer this? Would be nice to be 
able to clean up that part as well if possible.
Yes, you are right. We should clean that up as well. I was just too 
conservative. I've now actually checked what happens when you 
generate both 32 and 64 bit versions, and the result is that instead 
of code like:

    public static int getSize() { return 96; }

we get code like this:

    public static int getSize() { return ((XlibWrapper.dataModel == 32)?(80):(96)); }

Since we no longer support multiple data models for the same 
build, this is just unnecessary. In fact, that leads to an even 
better cleanup, since we will always need just a single input file.


I also wrapped the tool calls in ExecuteWithLog.

Updated webrev:
http://cr.openjdk.java.net/~ihse/JDK-8198844-clean-up-GensrcX11Wrappers/webrev.02 



/Magnus







Re: RFR: JDK-8198243: Add build time check for global operator new/delete in object files

2018-03-05 Thread Magnus Ihse Bursie


On 2018-03-05 21:06, Erik Joelsson wrote:

New webrev: http://cr.openjdk.java.net/~erikj/8198243/webrev.04

Looks good to me.

/Magnus



/Erik


On 2018-03-05 12:03, Erik Joelsson wrote:


On 2018-03-05 11:46, Kim Barrett wrote:
Old code in toolkit.m4 did the OBJDUMP stuff unconditionally. New 
code makes it conditional on TOOLCHAIN_TYPE.
That doesn’t seem right.  compare.sh seems to want OBJDUMP for aix 
too, and that’s not in the toolchain list here.
That is a good point. I forgot about aix and was thinking more in the 
lines of not windows. Will restore the OBJDUMP code completely.


/Erik






Re: [OpenJDK 2D-Dev] RFR: JDK-8198844 Clean up GensrcX11Wrappers

2018-03-05 Thread Magnus Ihse Bursie


On 2018-03-05 23:38, Phil Race wrote:
>I'm not sure what role the "verification" step we had before ever 
played.
>For all the years we've been "verifying" this, we've detected no 
differences.


I think this is useful in the event that you make some changes and
regenerate the 64 bit sizes but not 32 bit.
That's a good point. I should add a warning message when regenerating 
the checked-in data files that you need to regenerate the files on both 
32 and 64 bit platforms.


For example this old bug similarly reported a breakage on Solaris ..
https://bugs.openjdk.java.net/browse/JDK-6804680

... back in the day when we only had that checked in and the ones used 
for Linux
were being generated on the fly. My reading of the history here is 
sizes.32 and sizes.64
were added to support cross-compilation, and the verification step 
meant that  all

architectures would get checked in some build or other.


I'd rather say that it was at this time that we stopped run-time 
generation of the X11 size data. The "verification" step was there more 
like a comfort thing for the build team, since we were too conservative.



The clean up of removing the solaris specific seems like it could have 
been
done a long time ago .. I am not sure why was ever only this one case 
there.

I'd have to dig back a very long way.

I do agree we do not need to support 32 + 64 bit concurrently, that went
away when 64 bit solaris overlay on 32 bit was dropped.


Thank you.

Updated webrev with warning message about updating for all platforms:

http://cr.openjdk.java.net/~ihse/JDK-8198844-clean-up-GensrcX11Wrappers/webrev.03

(Only UpdateX11Wrappers.gmk has changed)

/Magnus



-phil.

On 03/02/2018 07:23 AM, Erik Joelsson wrote:
Adding 2d-dev in the hopes of getting some input from component team. 
Seems like they should be aware of us removing the support for 
multiple data models.


Looks like you left a debug message at line 40 in GensrcX11Wrappers.gmk.

/Erik

On 2018-03-02 03:00, Magnus Ihse Bursie wrote:

On 2018-03-02 00:02, Erik Joelsson wrote:

Hello,

In xlibtypes.txt comment, should it be sizes-64.txt?


Yes, good catch.



Generating both 32 and 64 seems a bit outdated at this point. 
Surely this is a remnant of bundling 64 and 32 bit together on 
Solaris in the past? Perhaps someone in 2d can answer this? Would 
be nice to be able to clean up that part as well if possible.
Yes, you are right. We should clean up that as well. I was just too 
conservative. I've now actually checked what happens when you 
generate both 32 and 64 bit versions, and the result is that instead 
of code like:

    public static int getSize() { return 96; }
we get code like this:
    public static int getSize() { return ((XlibWrapper.dataModel == 
32)?(80):(96)); }


Since we do no longer support multiple data models for the same 
build, this is just unnecessary. In fact, that leads to an even 
better cleanup, since we will always need just a single input file.


I also wrapped the tool calls in ExecuteWithLog.

Updated webrev:
http://cr.openjdk.java.net/~ihse/JDK-8198844-clean-up-GensrcX11Wrappers/webrev.02 



/Magnus









RFR: JDK-8199052 Configure broken on aarch64

2018-03-05 Thread Magnus Ihse Bursie

From the bug report:

It seems that JDK-8198724 "Refactor FLAGS handling in configure" broke 
the aarch64 build.


The new configure scheme tries to pass the argument "-m64" to the gcc 
compiler, but that is an x86-only flag.


configure:35228: checking whether the C compiler works
configure:35250: /usr/bin/gcc -m64 -m64 conftest.c >&5
gcc: error: unrecognized command line option '-m64'
gcc: error: unrecognized command line option '-m64'
[...]

The flags handling is currently in flux. The entire "MACHINE_FLAG" 
handling is not optimal. But we can't have aarch64 hanging around broken 
awaiting a proper refactoring, so here's a not-ideal-but-working patch.


Bug: https://bugs.openjdk.java.net/browse/JDK-8199052
Patch inline:
diff --git a/make/autoconf/flags.m4 b/make/autoconf/flags.m4
--- a/make/autoconf/flags.m4
+++ b/make/autoconf/flags.m4
@@ -236,7 +236,9 @@
   if test "x$TOOLCHAIN_TYPE" = xxlc; then
     MACHINE_FLAG="-q${OPENJDK_TARGET_CPU_BITS}"
   elif test "x$TOOLCHAIN_TYPE" != xmicrosoft; then
-    MACHINE_FLAG="-m${OPENJDK_TARGET_CPU_BITS}"
+    if test "x$OPENJDK_TARGET_CPU" != xaarch64; then
+      MACHINE_FLAG="-m${OPENJDK_TARGET_CPU_BITS}"
+    fi
   fi

   # FIXME: global flags are not used yet...

/Magnus


Re: [OpenJDK 2D-Dev] RFR: JDK-8198844 Clean up GensrcX11Wrappers

2018-03-05 Thread Phil Race
+ $(ECHO) needs to be updated for both 32 and 64 bit platforms. You have now 
"needs" -> "need".

Could we make this easier by trying to generate the 32 bit native binary as well 
when running on 64 bits? Looks like this was not normally done on Linux, but if 
you have 32 bit compiler + library support it should work. Or do you consider it 
easy enough to just kick off a 32 bit build to get the same?

-phil.
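
PS: as a rough guess about a typical Linux setup (package and file names below 
are just examples), generating the 32 bit binary on a 64 bit host mostly needs 
the multilib bits installed, e.g. on Fedora:

  sudo dnf install glibc-devel.i686 libgcc.i686
  gcc -m32 -o sizer32 sizer.c    # hypothetical generator source

so it is more a question of whether we want to wire that into the build.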



On 03/05/2018 03:30 PM, Magnus Ihse Bursie wrote:


On 2018-03-05 23:38, Phil Race wrote:
>I'm not sure what role the "verification" step we had before ever 
played.
>For all the years we've been "verifying" this, we've detected no 
differences.


I think this is useful in the event that you make some changes and
regenerate the 64 bit sizes but not 32 bit.
That's a good point. I should add a warning message when regenerating 
the checked-in data files that you need to regenerate the files on 
both 32 and 64 bit platforms.


For example this old bug similarly reported a breakage on Solaris ..
https://bugs.openjdk.java.net/browse/JDK-6804680

... back in the day when we only had that checked in and the ones 
used for Linux
were being generated on the fly. My reading of the history here is 
sizes.32 and sizes.64
were added to support cross-compilation, and the verification step 
meant that  all

architectures would get checked in some build or other.


I'd rather say that it was at this time that we stopped run-time 
generation of the X11 size data. The "verification" step was there 
more like a comfort thing for the build team, since we was too 
conservative.



The clean up of removing the solaris specific seems like it could 
have been
done a long time ago .. I am not sure why was ever only this one case 
there.

I'd have to dig back a very long way.

I do agree we do not need to support 32 + 64 bit concurrently, that went
away when 64 bit solaris overlay on 32 bit was dropped.


Thank you.

Updated webrev with warning message about updating for all platforms:

http://cr.openjdk.java.net/~ihse/JDK-8198844-clean-up-GensrcX11Wrappers/webrev.03 



(Only UpdateX11Wrappers.gmk has changed)

/Magnus



-phil.

On 03/02/2018 07:23 AM, Erik Joelsson wrote:
Adding 2d-dev in the hopes of getting some input from component 
team. Seems like they should be aware of us removing the support for 
multiple data models.


Looks like you left a debug message at line 40 in 
GensrcX11Wrappers.gmk.


/Erik

On 2018-03-02 03:00, Magnus Ihse Bursie wrote:

On 2018-03-02 00:02, Erik Joelsson wrote:

Hello,

In xlibtypes.txt comment, should it be sizes-64.txt?


Yes, good catch.



Generating both 32 and 64 seems a bit outdated at this point. 
Surely this is a remnant of bundling 64 and 32 bit together on 
Solaris in the past? Perhaps someone in 2d can answer this? Would 
be nice to be able to clean up that part as well if possible.
Yes, you are right. We should clean up that as well. I was just too 
conservative. I've now actually checked what happens when you 
generate both 32 and 64 bit versions, and the result is that 
instead of code like:

public static int getSize() { return 96; }
we get code like this:
public static int getSize() { return ((XlibWrapper.dataModel == 
32)?(80):(96)); }


Since we do no longer support multiple data models for the same 
build, this is just unnecessary. In fact, that leads to an even 
better cleanup, since we will always need just a single input file.


I also wrapped the tool calls in ExecuteWithLog.

Updated webrev:
http://cr.openjdk.java.net/~ihse/JDK-8198844-clean-up-GensrcX11Wrappers/webrev.02 



/Magnus











Re: RFR: JDK-8199052 Configure broken on aarch64

2018-03-05 Thread Erik Joelsson

Looks good.

/Erik


On 2018-03-05 15:45, Magnus Ihse Bursie wrote:

From the bug report:

It seems that JDK-8198724 "Refactor FLAGS handling in configure" broke 
the aarch64 build.


The new configure scheme tries to pass the argument "-m64" to the gcc 
compiler, but that is an x86-only flag.


configure:35228: checking whether the C compiler works
configure:35250: /usr/bin/gcc -m64 -m64 conftest.c >&5
gcc: error: unrecognized command line option '-m64'
gcc: error: unrecognized command line option '-m64'
[...]

The flags handling is currently in flux. The entire "MACHINE_FLAG" 
handling is not optimal. But we can't have aarch64 hanging around 
broken awaiting a proper refactoring, so here's a 
not-ideal-but-working patch.


Bug: https://bugs.openjdk.java.net/browse/JDK-8199052
Patch inline:
diff --git a/make/autoconf/flags.m4 b/make/autoconf/flags.m4
--- a/make/autoconf/flags.m4
+++ b/make/autoconf/flags.m4
@@ -236,7 +236,9 @@
   if test "x$TOOLCHAIN_TYPE" = xxlc; then
     MACHINE_FLAG="-q${OPENJDK_TARGET_CPU_BITS}"
   elif test "x$TOOLCHAIN_TYPE" != xmicrosoft; then
-    MACHINE_FLAG="-m${OPENJDK_TARGET_CPU_BITS}"
+    if test "x$OPENJDK_TARGET_CPU" != xaarch64; then
+      MACHINE_FLAG="-m${OPENJDK_TARGET_CPU_BITS}"
+    fi
   fi

   # FIXME: global flags are not used yet...

/Magnus




Re: [OpenJDK 2D-Dev] RFR: JDK-8198844 Clean up GensrcX11Wrappers

2018-03-05 Thread Magnus Ihse Bursie



On 2018-03-06 00:53, Phil Race wrote:
+ $(ECHO) needs to be updated for both 32 and 64 bit platforms. You 
have now needs -> need.
Fixed typo. Thanks! (I really hate how I suck at those English plural 
s'es :-&)
Could we make this easier by trying to generate the 32 bit native 
binary as well if running on 64 bits ? Looks like this was not 
normally done on Linux but if you have 32 bit compiler + library 
support it should work. Or do you consider it easy enough to just kick 
off a 32 bit build to get the same ?
Yes, I consider that easy enough, given how unlikely it is that this 
will ever need to be done.


/Magnus


-phil.

On 03/05/2018 03:30 PM, Magnus Ihse Bursie wrote:


On 2018-03-05 23:38, Phil Race wrote:
>I'm not sure what role the "verification" step we had before ever 
played.
>For all the years we've been "verifying" this, we've detected no 
differences.


I think this is useful in the event that you make some changes and
regenerate the 64 bit sizes but not 32 bit.
That's a good point. I should add a warning message when regenerating 
the checked-in data files that you need to regenerate the files on 
both 32 and 64 bit platforms.


For example this old bug similarly reported a breakage on Solaris ..
https://bugs.openjdk.java.net/browse/JDK-6804680

... back in the day when we only had that checked in and the ones 
used for Linux
were being generated on the fly. My reading of the history here is 
sizes.32 and sizes.64
were added to support cross-compilation, and the verification step 
meant that  all

architectures would get checked in some build or other.


I'd rather say that it was at this time that we stopped run-time 
generation of the X11 size data. The "verification" step was there 
more like a comfort thing for the build team, since we was too 
conservative.



The clean up of removing the solaris specific seems like it could 
have been
done a long time ago .. I am not sure why was ever only this one 
case there.

I'd have to dig back a very long way.

I do agree we do not need to support 32 + 64 bit concurrently, that 
went

away when 64 bit solaris overlay on 32 bit was dropped.


Thank you.

Updated webrev with warning message about updating for all platforms:

http://cr.openjdk.java.net/~ihse/JDK-8198844-clean-up-GensrcX11Wrappers/webrev.03 



(Only UpdateX11Wrappers.gmk has changed)

/Magnus



-phil.

On 03/02/2018 07:23 AM, Erik Joelsson wrote:
Adding 2d-dev in the hopes of getting some input from component 
team. Seems like they should be aware of us removing the support 
for multiple data models.


Looks like you left a debug message at line 40 in 
GensrcX11Wrappers.gmk.


/Erik

On 2018-03-02 03:00, Magnus Ihse Bursie wrote:

On 2018-03-02 00:02, Erik Joelsson wrote:

Hello,

In xlibtypes.txt comment, should it be sizes-64.txt?


Yes, good catch.



Generating both 32 and 64 seems a bit outdated at this point. 
Surely this is a remnant of bundling 64 and 32 bit together on 
Solaris in the past? Perhaps someone in 2d can answer this? Would 
be nice to be able to clean up that part as well if possible.
Yes, you are right. We should clean up that as well. I was just 
too conservative. I've now actually checked what happens when you 
generate both 32 and 64 bit versions, and the result is that 
instead of code like:

    public static int getSize() { return 96; }
we get code like this:
    public static int getSize() { return ((XlibWrapper.dataModel 
== 32)?(80):(96)); }


Since we do no longer support multiple data models for the same 
build, this is just unnecessary. In fact, that leads to an even 
better cleanup, since we will always need just a single input file.


I also wrapped the tool calls in ExecuteWithLog.

Updated webrev:
http://cr.openjdk.java.net/~ihse/JDK-8198844-clean-up-GensrcX11Wrappers/webrev.02 



/Magnus













Re: [OpenJDK 2D-Dev] RFR: JDK-8198844 Clean up GensrcX11Wrappers

2018-03-05 Thread Phil Race

It isn't "unlikely" .. it is just "relatively rare".

-phil.

On 03/05/2018 04:00 PM, Magnus Ihse Bursie wrote:



On 2018-03-06 00:53, Phil Race wrote:
+ $(ECHO) needs to be updated for both 32 and 64 bit platforms. You 
have now needs -> need.
Fixed typo. Thanks! (I really hate how I suck at those English plural 
s'es :-&)
Could we make this easier by trying to generate the 32 bit native 
binary as well if running on 64 bits ? Looks like this was not 
normally done on Linux but if you have 32 bit compiler + library 
support it should work. Or do you consider it easy enough to just 
kick off a 32 bit build to get the same ?
Yes, I consider that easy enough, given how unlikely it is that this 
needs ever be done.


/Magnus


-phil.

On 03/05/2018 03:30 PM, Magnus Ihse Bursie wrote:


On 2018-03-05 23:38, Phil Race wrote:
>I'm not sure what role the "verification" step we had before ever 
played.
>For all the years we've been "verifying" this, we've detected no 
differences.


I think this is useful in the event that you make some changes and
regenerate the 64 bit sizes but not 32 bit.
That's a good point. I should add a warning message when 
regenerating the checked-in data files that you need to regenerate 
the files on both 32 and 64 bit platforms.


For example this old bug similarly reported a breakage on Solaris ..
https://bugs.openjdk.java.net/browse/JDK-6804680

... back in the day when we only had that checked in and the ones 
used for Linux
were being generated on the fly. My reading of the history here is 
sizes.32 and sizes.64
were added to support cross-compilation, and the verification step 
meant that  all

architectures would get checked in some build or other.


I'd rather say that it was at this time that we stopped run-time 
generation of the X11 size data. The "verification" step was there 
more like a comfort thing for the build team, since we was too 
conservative.



The clean up of removing the solaris specific seems like it could 
have been
done a long time ago .. I am not sure why was ever only this one 
case there.

I'd have to dig back a very long way.

I do agree we do not need to support 32 + 64 bit concurrently, that 
went

away when 64 bit solaris overlay on 32 bit was dropped.


Thank you.

Updated webrev with warning message about updating for all platforms:

http://cr.openjdk.java.net/~ihse/JDK-8198844-clean-up-GensrcX11Wrappers/webrev.03 



(Only UpdateX11Wrappers.gmk has changed)

/Magnus



-phil.

On 03/02/2018 07:23 AM, Erik Joelsson wrote:
Adding 2d-dev in the hopes of getting some input from component 
team. Seems like they should be aware of us removing the support 
for multiple data models.


Looks like you left a debug message at line 40 in 
GensrcX11Wrappers.gmk.


/Erik

On 2018-03-02 03:00, Magnus Ihse Bursie wrote:

On 2018-03-02 00:02, Erik Joelsson wrote:

Hello,

In xlibtypes.txt comment, should it be sizes-64.txt?


Yes, good catch.



Generating both 32 and 64 seems a bit outdated at this point. 
Surely this is a remnant of bundling 64 and 32 bit together on 
Solaris in the past? Perhaps someone in 2d can answer this? 
Would be nice to be able to clean up that part as well if possible.
Yes, you are right. We should clean up that as well. I was just 
too conservative. I've now actually checked what happens when you 
generate both 32 and 64 bit versions, and the result is that 
instead of code like:

public static int getSize() { return 96; }
we get code like this:
public static int getSize() { return ((XlibWrapper.dataModel 
== 32)?(80):(96)); }


Since we do no longer support multiple data models for the same 
build, this is just unnecessary. In fact, that leads to an even 
better cleanup, since we will always need just a single input file.


I also wrapped the tool calls in ExecuteWithLog.

Updated webrev:
http://cr.openjdk.java.net/~ihse/JDK-8198844-clean-up-GensrcX11Wrappers/webrev.02 



/Magnus