Re: JDK 11 hotspot build fails with "Undefined symbol" on AIX
Hi Thomas,

Thank you. It worked! One question: won't the hotspot sources in jdk/hs and jdk/jdk be out of sync?

- Bhaktavatsal Reddy

- "Thomas Stüfe" wrote: -
To: "hotspot-runtime-...@openjdk.java.net", Bhaktavatsal R Maram
From: "Thomas Stüfe"
Date: 04/11/2018 09:21AM
Cc: build-dev, David Holmes
Subject: Re: JDK 11 hotspot build fails with "Undefined symbol" on AIX

Hi,

looks like https://bugs.openjdk.java.net/browse/JDK-8200302, which has already been fixed in jdk-hs. Please build from there, the current tip. If you build jdk/jdk you need to apply the patch for that fix manually.

Best Regards, Thomas

On Tue, Apr 10, 2018 at 11:50 PM, David Holmes wrote:
Hi,
On 11/04/2018 1:44 AM, Bhaktavatsal R Maram wrote:
Based on the attachment content (see below) this seems a hotspot issue for the AIX/PPC folk to fix. So moving over to hotspot-runtime-dev.
David
--
ld: 0711-318 ERROR: Undefined symbols were found.
The following symbols are in error:
Symbol Inpndx TY CL Source-File(Object-File) OR Import-File{Shared-object}
RLD: Address Section Rld-type Referencing Symbol
--
.__ct__5frameFPlPUc [2169] ER PR /home/bhamaram/openJDK/jdk11/src/hotspot/os_cpu/aix_ppc/thread_aix_ppc.cpp(/home/bhamaram/openJDK/jdk11/build/aix-ppc64-normal-server-release/hotspot/variant-server/libjvm/objs/thread_aix_ppc.o)
0028 .text R_RBR [1444] .pd_last_frame__10JavaThreadFv
0044 .text R_RBR [1444] .pd_last_frame__10JavaThreadFv
ER: The return code is 8.
Re: slowproduct build
On 4/10/18 2:21 PM, Magnus Ihse Bursie wrote:
On 2018-04-10 23:08, Ioi Lam wrote:
Yes, that's what I want. Yesterday I was using gdb to step into the interpreter generation code to see what's generated for a particular routine for MethodHandles. The debug build contains a LOT of runtime verification code in the generated code and I couldn't figure out what's happening. The product build just generates 10 instructions.

So you want to have a way to force -O0 for all compiled files? Something like "bash configure --with-debug-level=release --with-optimization=none", or possibly "make OPTIMIZATION=NONE"?

Hi Magnus,

I like the --with-optimization=none flag. This doesn't seem to exist yet. Any plans to add it? The way I would use it is:

bash configure --with-debug-level=product --with-optimization=none

Thanks
- Ioi

Or are you happy with the optimization level of a slowdebug build, and only want to adjust the value of PRODUCT and ASSERT for hotspot to match what's done for a release build? Something like "bash configure --enable-hotspot-product-build"?

/Magnus

Thanks
Ioi

On Apr 10, 2018, at 1:47 PM, Thomas Stüfe wrote:
If I understand Ioi correctly, he wants a build with PRODUCT and !ASSERT but with debug symbols and no optimizations? So, no assertions and all switches with product defaults? I can see that this could make sense in certain scenarios.
..Thomas

On Tue, Apr 10, 2018 at 10:27 PM, Magnus Ihse Bursie wrote:
On 2018-04-10 02:00, Ioi Lam wrote:
Sometimes I want to debug the product build (I can't bother with turning off all the trueInDebug options in the hotspot globals.hpp). The only way that I have found to do this is:

configure --with-native-debug-symbols=internal
mv spec.gmk spec.gmk.old
cat spec.gmk | sed -e 's/[-]O[0-9s]/-O0/g' > spec.gmk

Is there (or should there be) a more elegant way to do it, like "configure --with-debug-level=slowproduct" :-)

I'm not entirely sure of what you want to achieve. As I interpret your snippet above, you want no optimization and internal debug symbols..? How is that debugging a "product" build?

/Magnus

Thanks
- Ioi
Re: JDK 11 hotspot build fails with "Undefined symbol" on AIX
Hi,

looks like https://bugs.openjdk.java.net/browse/JDK-8200302, which has already been fixed in jdk-hs. Please build from there, the current tip. If you build jdk/jdk you need to apply the patch for that fix manually.

Best Regards,
Thomas

On Tue, Apr 10, 2018 at 11:50 PM, David Holmes wrote:
> Hi,
>
> On 11/04/2018 1:44 AM, Bhaktavatsal R Maram wrote:
>
> Based on the attachment content (see below) this seems a hotspot issue for
> the AIX/PPC folk to fix. So moving over to hotspot-runtime-dev.
>
> David
> --
>
> ld: 0711-318 ERROR: Undefined symbols were found.
> The following symbols are in error:
> Symbol Inpndx TY CL Source-File(Object-File) OR Import-File{Shared-object}
> RLD: Address Section Rld-type Referencing Symbol
>
> --
> .__ct__5frameFPlPUc [2169] ER PR /home/bhamaram/openJDK/jdk11/src/hotspot/os_cpu/aix_ppc/thread_aix_ppc.cpp(/home/bhamaram/openJDK/jdk11/build/aix-ppc64-normal-server-release/hotspot/variant-server/libjvm/objs/thread_aix_ppc.o)
> 0028 .text R_RBR [1444] .pd_last_frame__10JavaThreadFv
> 0044 .text R_RBR [1444] .pd_last_frame__10JavaThreadFv
> ER: The return code is 8.
Re: JDK 11 hotspot build fails with "Undefined symbol" on AIX
Hi David,

Thank you. I had given some more details in my mail, but for some reason that text went missing. Anyway, the missing symbol is frame::frame(long*, unsigned char*):

ld: 0711-317 ERROR: Undefined symbol: .frame::frame(long*,unsigned char*)
ld: 0711-344 See the loadmap file /home/bhamaram/openJDK/jdk11/build/aix-ppc64-normal-server-release/hotspot/variant-server/libjvm/objs/libjvm.loadmap for more information.

I see that the definition of that function is present in src/hotspot/cpu/ppc/frame_ppc.inline.hpp:

inline frame::frame(intptr_t* sp, address pc) : _sp(sp), _unextended_sp(sp) {
  find_codeblob_and_set_pc_and_deopt_state(pc); // also sets _fp and adjusts _unextended_sp
}

Thanks, Bhaktavatsal Reddy

- David Holmes wrote: -
To: Bhaktavatsal R Maram, "hotspot-runtime-...@openjdk.java.net"
From: David Holmes
Date: 04/11/2018 03:21AM
Cc: build-dev@openjdk.java.net
Subject: Re: JDK 11 hotspot build fails with "Undefined symbol" on AIX

Hi,

On 11/04/2018 1:44 AM, Bhaktavatsal R Maram wrote:
Based on the attachment content (see below) this seems a hotspot issue for the AIX/PPC folk to fix. So moving over to hotspot-runtime-dev.
David
--
ld: 0711-318 ERROR: Undefined symbols were found.
The following symbols are in error:
Symbol Inpndx TY CL Source-File(Object-File) OR Import-File{Shared-object}
RLD: Address Section Rld-type Referencing Symbol
--
.__ct__5frameFPlPUc [2169] ER PR /home/bhamaram/openJDK/jdk11/src/hotspot/os_cpu/aix_ppc/thread_aix_ppc.cpp(/home/bhamaram/openJDK/jdk11/build/aix-ppc64-normal-server-release/hotspot/variant-server/libjvm/objs/thread_aix_ppc.o)
0028 .text R_RBR [1444] .pd_last_frame__10JavaThreadFv
0044 .text R_RBR [1444] .pd_last_frame__10JavaThreadFv
ER: The return code is 8.
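The failure mode described above, an inline constructor whose only definition lives in an .inline.hpp that the failing translation unit never includes, can be reproduced in miniature. All file names and contents below are hypothetical stand-ins, not the HotSpot sources:

```shell
# Illustrative stand-ins for frame.hpp, frame_ppc.inline.hpp and
# thread_aix_ppc.cpp (none of this is the real HotSpot code).
cat > frame.hpp <<'EOF'
struct frame { frame(long* sp, unsigned char* pc); long* _sp; };
EOF
cat > frame_inline.hpp <<'EOF'
#include "frame.hpp"
inline frame::frame(long* sp, unsigned char*) : _sp(sp) {}
EOF
cat > use.cpp <<'EOF'
#include "frame.hpp"   /* note: frame_inline.hpp is NOT included */
int main() { long s = 0; frame f(&s, 0); return (int)*f._sp; }
EOF
# The inline definition emits no out-of-line symbol, so linking a caller
# that never saw the inline header fails with an undefined reference to
# frame::frame(long*, unsigned char*).
if c++ use.cpp -o use 2> link.log; then
  echo "linked (unexpected)"
else
  echo "undefined reference, as in the AIX build"
fi
```

On AIX the same constructor shows up under its xlC-mangled name, the `.__ct__5frameFPlPUc` seen in the ld output; per the thread, the fix (JDK-8200302) is already in jdk-hs.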
Re: RFR: 8196516: libfontmanager must be built with LDFLAGS allowing unresolved symbols
+1

On 10/04/2018 13:57, Magnus Ihse Bursie wrote:
On 2018-04-10 13:25, Severin Gehwolf wrote: Hi Erik, On Mon, 2018-04-09 at 09:20 -0700, Erik Joelsson wrote: Hello Severin, I'm ok with this solution for now. Thanks for the review! Could you please reduce the indentation on line 652. In the build system we like 4 spaces for continuation indent [1] Done. New webrev at: http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8196516/webrev.02

It's not nice, but there's no other way to do it right now. Approved. (I'm working on getting to the point where I can address this in a better way.)

/Magnus

Could someone from awt-dev have a look at this too? Thanks! Cheers, Severin /Erik [1] http://openjdk.java.net/groups/build/doc/code-conventions.html

On 2018-04-09 06:39, Severin Gehwolf wrote:
Hi, Could somebody please review this build fix for libfontmanager.so. The issue for us is that with some LDFLAGS the build breaks as described in bug JDK-8196218. However, we cannot link to a providing library at build-time since we don't know which one it should be: libawt_headless or libawt_xawt. That has to happen at runtime. The proposed fix filters out relevant linker flags when libfontmanager is being built. More details are in the bug.

Bug: https://bugs.openjdk.java.net/browse/JDK-8196516
webrev: http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8196516/webrev.01/

Testing: I've run this through submit[1] and got the following results. SwingSet2 works fine for me on F27. I'm currently running some more tests on RHEL 7.

- Mach5 mach5-one-sgehwolf-JDK-8196516-20180409-1036-17877: Builds PASSED. Testing FAILURE. 0 Failed Tests Mach5 Tasks Results Summary NA: 0 UNABLE_TO_RUN: 0 EXECUTED_WITH_FAILURE: 0 KILLED: 0 PASSED: 82 FAILED: 1

Test 1 Failed tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_2-windows-x64-debug-31 SetupFailedException in setup...profile run-test-prebuilt' , return value: 10

Not sure what this test failure means. Could somebody at Oracle shed some light on this?
Thanks, Severin -- Best regards, Sergey.
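The shape of the fix discussed in this thread, filtering the "fail on unresolved symbols" flag out of the global LDFLAGS for this one library, can be sketched in shell. The flag spelling (`-Wl,-z,defs`, the common GNU ld form) and the variable names are illustrative; the actual webrev may filter different flags:

```shell
# Global link flags, including one that turns unresolved symbols into
# hard link errors (illustrative values, not the real spec.gmk).
LDFLAGS='-Wl,--as-needed -Wl,-z,defs -Wl,-O1'

# Drop that flag for libfontmanager: its providers (libawt_headless or
# libawt_xawt) are only known at runtime, so unresolved symbols at
# build time are expected.
FONTMANAGER_LDFLAGS=$(printf '%s' "$LDFLAGS" | sed -e 's/-Wl,-z,defs *//')
echo "$FONTMANAGER_LDFLAGS"   # -> -Wl,--as-needed -Wl,-O1
```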
Re: RFR: 8196516: libfontmanager must be built with LDFLAGS allowing unresolved symbols
LIBS_aix := -lawt_headless, so I guess the AIX team should have a similar fix.

On 10/04/2018 09:34, Erik Joelsson wrote: Looks good. Thanks! /Erik

On 2018-04-10 04:25, Severin Gehwolf wrote: Hi Erik, On Mon, 2018-04-09 at 09:20 -0700, Erik Joelsson wrote: Hello Severin, I'm ok with this solution for now. Thanks for the review! Could you please reduce the indentation on line 652. In the build system we like 4 spaces for continuation indent [1] Done. New webrev at: http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8196516/webrev.02 Could someone from awt-dev have a look at this too? Thanks! Cheers, Severin /Erik [1] http://openjdk.java.net/groups/build/doc/code-conventions.html

On 2018-04-09 06:39, Severin Gehwolf wrote:
Hi, Could somebody please review this build fix for libfontmanager.so. The issue for us is that with some LDFLAGS the build breaks as described in bug JDK-8196218. However, we cannot link to a providing library at build-time since we don't know which one it should be: libawt_headless or libawt_xawt. That has to happen at runtime. The proposed fix filters out relevant linker flags when libfontmanager is being built. More details are in the bug.

Bug: https://bugs.openjdk.java.net/browse/JDK-8196516
webrev: http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8196516/webrev.01/

Testing: I've run this through submit[1] and got the following results. SwingSet2 works fine for me on F27. I'm currently running some more tests on RHEL 7.

- Mach5 mach5-one-sgehwolf-JDK-8196516-20180409-1036-17877: Builds PASSED. Testing FAILURE. 0 Failed Tests Mach5 Tasks Results Summary NA: 0 UNABLE_TO_RUN: 0 EXECUTED_WITH_FAILURE: 0 KILLED: 0 PASSED: 82 FAILED: 1

Test 1 Failed tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_2-windows-x64-debug-31 SetupFailedException in setup...profile run-test-prebuilt' , return value: 10

Not sure what this test failure means. Could somebody at Oracle shed some light on this?

Thanks, Severin
--
Best regards, Sergey.
Re: JDK 11 hotspot build fails with "Undefined symbol" on AIX
Hi,

On 11/04/2018 1:44 AM, Bhaktavatsal R Maram wrote:

Based on the attachment content (see below) this seems a hotspot issue for the AIX/PPC folk to fix. So moving over to hotspot-runtime-dev.

David
--
ld: 0711-318 ERROR: Undefined symbols were found.
The following symbols are in error:
Symbol Inpndx TY CL Source-File(Object-File) OR Import-File{Shared-object}
RLD: Address Section Rld-type Referencing Symbol
--
.__ct__5frameFPlPUc [2169] ER PR /home/bhamaram/openJDK/jdk11/src/hotspot/os_cpu/aix_ppc/thread_aix_ppc.cpp(/home/bhamaram/openJDK/jdk11/build/aix-ppc64-normal-server-release/hotspot/variant-server/libjvm/objs/thread_aix_ppc.o)
0028 .text R_RBR [1444] .pd_last_frame__10JavaThreadFv
0044 .text R_RBR [1444] .pd_last_frame__10JavaThreadFv
ER: The return code is 8.
Re: RFR: JDK-8201320 Feature request: Allow PrintFailureReports to be turned off
Ah it was run without spec before. Looks good then. /Erik On 2018-04-10 14:31, Magnus Ihse Bursie wrote: On 2018-04-10 23:24, Erik Joelsson wrote: Hello, Nice feature! Init.gmk: 229 were -> was Fixed without new webrev. Otherwise looks good. Out of curiosity, was there a reason to move the log parsing macros outside of has-spec block? It doesn't look like you changed where you call these macros from. Yes, there was, and yes, I have changed it. :) Right below my typo :-) I re-call ParseLogLevel if I get a value in DEFAULT_LOG from the spec.gmk. Unfortunately, this means that I now need to call ParseLogLevel both without a spec and with a spec, which I have hitherto treated as completely different scenarios in Init.gmk/InitSupport.gmk. I have verified that the code is suitable to run in the new situation of being with a spec.gmk as well. There's a fix for COMMA that's not needed when running with a spec, but it doesn't harm either so it's okay. /Magnus /Erik On 2018-04-10 13:54, Magnus Ihse Bursie wrote: From the bug report: "The compile errors you get from HotSpot are quite large, and usually don't get entirely printed in PrintFailureReports. This has the effect that the goto mode to find the compilation error is to scroll past PrintFailureReports to get to the complete error message. It would be nice if there was a way to turn off this feature from the command line." I've solved this by adding a new LOG option, "report", which takes an argument: "report=default", "report=none" or "report=all". As usual, this can be combined with other LOG options, e.g. "LOG=info,report=all". The "default" value is what it always been, giving you the first screenful of lines of each failure. "none" is what Stefan requested, and "all" means that there is no truncating, so in a sense, it's another way of giving Stefan what he wants. :-) To make this usable in practice, I also implemented a feature I've been thinking about a long time, but never gotten around to. 
And that is to be able to set a default value for LOG in configure, similar to how we can set default values for JOBS or the default make target. The new flag is "--with-log=", e.g. "--with-log=info,report=none". If a LOG= value is given on the command line, it overrides the default value provided to configure. Bug: https://bugs.openjdk.java.net/browse/JDK-8201320 WebRev: http://cr.openjdk.java.net/~ihse/JDK-8201320-allow-disabling-of-exit-reports/webrev.01 /Magnus
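The override behaviour described here, where a LOG value on the make command line wins over the configure-time default, can be sketched in shell. The variable handling is illustrative, not the actual Init.gmk logic:

```shell
# Simulate: no LOG given on the make command line.
unset LOG
# Default stored by configure in spec.gmk via --with-log=...
DEFAULT_LOG='info,report=none'
# Command-line LOG takes precedence when set; otherwise fall back.
EFFECTIVE_LOG=${LOG:-$DEFAULT_LOG}
echo "$EFFECTIVE_LOG"   # -> info,report=none
```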
Re: RFR: JDK-8201320 Feature request: Allow PrintFailureReports to be turned off
On 2018-04-10 23:24, Erik Joelsson wrote: Hello, Nice feature! Init.gmk: 229 were -> was Fixed without new webrev. Otherwise looks good. Out of curiosity, was there a reason to move the log parsing macros outside of has-spec block? It doesn't look like you changed where you call these macros from. Yes, there was, and yes, I have changed it. :) Right below my typo :-) I re-call ParseLogLevel if I get a value in DEFAULT_LOG from the spec.gmk. Unfortunately, this means that I now need to call ParseLogLevel both without a spec and with a spec, which I have hitherto treated as completely different scenarios in Init.gmk/InitSupport.gmk. I have verified that the code is suitable to run in the new situation of being with a spec.gmk as well. There's a fix for COMMA that's not needed when running with a spec, but it doesn't harm either so it's okay. /Magnus /Erik On 2018-04-10 13:54, Magnus Ihse Bursie wrote: From the bug report: "The compile errors you get from HotSpot are quite large, and usually don't get entirely printed in PrintFailureReports. This has the effect that the goto mode to find the compilation error is to scroll past PrintFailureReports to get to the complete error message. It would be nice if there was a way to turn off this feature from the command line." I've solved this by adding a new LOG option, "report", which takes an argument: "report=default", "report=none" or "report=all". As usual, this can be combined with other LOG options, e.g. "LOG=info,report=all". The "default" value is what it always been, giving you the first screenful of lines of each failure. "none" is what Stefan requested, and "all" means that there is no truncating, so in a sense, it's another way of giving Stefan what he wants. :-) To make this usable in practice, I also implemented a feature I've been thinking about a long time, but never gotten around to. 
And that is to be able to set a default value for LOG in configure, similar to how we can set default values for JOBS or the default make target. The new flag is "--with-log=", e.g. "--with-log=info,report=none". If a LOG= value is given on the command line, it overrides the default value provided to configure. Bug: https://bugs.openjdk.java.net/browse/JDK-8201320 WebRev: http://cr.openjdk.java.net/~ihse/JDK-8201320-allow-disabling-of-exit-reports/webrev.01 /Magnus
Re: RFR: JDK-8201320 Feature request: Allow PrintFailureReports to be turned off
Hello, Nice feature! Init.gmk: 229 were -> was Otherwise looks good. Out of curiosity, was there a reason to move the log parsing macros outside of has-spec block? It doesn't look like you changed where you call these macros from. /Erik On 2018-04-10 13:54, Magnus Ihse Bursie wrote: From the bug report: "The compile errors you get from HotSpot are quite large, and usually don't get entirely printed in PrintFailureReports. This has the effect that the goto mode to find the compilation error is to scroll past PrintFailureReports to get to the complete error message. It would be nice if there was a way to turn off this feature from the command line." I've solved this by adding a new LOG option, "report", which takes an argument: "report=default", "report=none" or "report=all". As usual, this can be combined with other LOG options, e.g. "LOG=info,report=all". The "default" value is what it always been, giving you the first screenful of lines of each failure. "none" is what Stefan requested, and "all" means that there is no truncating, so in a sense, it's another way of giving Stefan what he wants. :-) To make this usable in practice, I also implemented a feature I've been thinking about a long time, but never gotten around to. And that is to be able to set a default value for LOG in configure, similar to how we can set default values for JOBS or the default make target. The new flag is "--with-log=", e.g. "--with-log=info,report=none". If a LOG= value is given on the command line, it overrides the default value provided to configure. Bug: https://bugs.openjdk.java.net/browse/JDK-8201320 WebRev: http://cr.openjdk.java.net/~ihse/JDK-8201320-allow-disabling-of-exit-reports/webrev.01 /Magnus
Re: slowproduct build
On 2018-04-10 23:08, Ioi Lam wrote:
Yes that’s what I want. Yesterday I was using gdb to step into the interpreter generation code to see what’s generated for a particular routine for MethodHandles. The debug build contains a LOT of runtime verification code in the generated code and I couldn’t figure out what’s happening. The product build just generates 10 instructions.

So you want to have a way to force -O0 for all compiled files? Something like "bash configure --with-debug-level=release --with-optimization=none", or possibly "make OPTIMIZATION=NONE"?

Or are you happy with the optimization level of a slowdebug build, and only want to adjust the value of PRODUCT and ASSERT for hotspot to match what's done for a release build? Something like "bash configure --enable-hotspot-product-build"?

/Magnus

Thanks
Ioi

On Apr 10, 2018, at 1:47 PM, Thomas Stüfe wrote:
If I understand Ioi correctly, he wants a build with PRODUCT and !ASSERT but with debug symbols and no optimizations? So, no assertions and all switches with product defaults? I can see that this could make sense in certain scenarios.
..Thomas

On Tue, Apr 10, 2018 at 10:27 PM, Magnus Ihse Bursie wrote:
On 2018-04-10 02:00, Ioi Lam wrote:
Sometimes I want to debug the product build (I can't bother with turning off all the trueInDebug options in the hotspot globals.hpp). The only way that I have found to do this is:

configure --with-native-debug-symbols=internal
mv spec.gmk spec.gmk.old
cat spec.gmk | sed -e 's/[-]O[0-9s]/-O0/g' > spec.gmk

Is there (or should there be) a more elegant way to do it, like "configure --with-debug-level=slowproduct" :-)

I'm not entirely sure of what you want to achieve. As I interpret your snippet above, you want no optimization and internal debug symbols..? How is that debugging a "product" build?

/Magnus

Thanks
- Ioi
Re: RFR: JDK-8201320 Feature request: Allow PrintFailureReports to be turned off
Hi Magnus, Thanks a lot for implementing this! I tested this with --with-log=info,report=none, and it's exactly what I want in my config script ;) Thanks, StefanK On 2018-04-10 22:54, Magnus Ihse Bursie wrote: From the bug report: "The compile errors you get from HotSpot are quite large, and usually don't get entirely printed in PrintFailureReports. This has the effect that the goto mode to find the compilation error is to scroll past PrintFailureReports to get to the complete error message. It would be nice if there was a way to turn off this feature from the command line." I've solved this by adding a new LOG option, "report", which takes an argument: "report=default", "report=none" or "report=all". As usual, this can be combined with other LOG options, e.g. "LOG=info,report=all". The "default" value is what it always been, giving you the first screenful of lines of each failure. "none" is what Stefan requested, and "all" means that there is no truncating, so in a sense, it's another way of giving Stefan what he wants. :-) To make this usable in practice, I also implemented a feature I've been thinking about a long time, but never gotten around to. And that is to be able to set a default value for LOG in configure, similar to how we can set default values for JOBS or the default make target. The new flag is "--with-log=", e.g. "--with-log=info,report=none". If a LOG= value is given on the command line, it overrides the default value provided to configure. Bug: https://bugs.openjdk.java.net/browse/JDK-8201320 WebRev: http://cr.openjdk.java.net/~ihse/JDK-8201320-allow-disabling-of-exit-reports/webrev.01 /Magnus
Re: slowproduct build
Yes that’s what I want. Yesterday I was using gdb to step into the interpreter generation code to see what’s generated for a particular routine for MethodHandles. The debug build contains a LOT of runtime verification code in the generated code and I couldn’t figure out what’s happening. The product build just generates 10 instructions.

Thanks
Ioi

> On Apr 10, 2018, at 1:47 PM, Thomas Stüfe wrote:
>
> If I understand Ioi correctly, he wants a build with PRODUCT and !ASSERT but
> with debug symbols and no optimizations? So, no assertions and all switches
> with product defaults?
>
> I can see that this could make sense in certain scenarios.
>
> ..Thomas
>
>> On Tue, Apr 10, 2018 at 10:27 PM, Magnus Ihse Bursie wrote:
>>> On 2018-04-10 02:00, Ioi Lam wrote:
>>> Sometimes I want to debug the product build (I can't bother with turning
>>> off all the trueInDebug options in the hotspot globals.hpp). The only way
>>> that I have found to do this is:
>>>
>>> configure --with-native-debug-symbols=internal
>>> mv spec.gmk spec.gmk.old
>>> cat spec.gmk | sed -e 's/[-]O[0-9s]/-O0/g' > spec.gmk
>>>
>>> Is there (or should there be) a more elegant way to do it, like "configure
>>> --with-debug-level=slowproduct" :-)
>> I'm not entirely sure of what you want to achieve.
>>
>> As I interpret your snippet above, you want no optimization and internal
>> debug symbols..? How is that debugging a "product" build?
>>
>> /Magnus
Re: RFR: 8196516: libfontmanager must be built with LDFLAGS allowing unresolved symbols
On 2018-04-10 13:25, Severin Gehwolf wrote:
Hi Erik, On Mon, 2018-04-09 at 09:20 -0700, Erik Joelsson wrote: Hello Severin, I'm ok with this solution for now. Thanks for the review! Could you please reduce the indentation on line 652. In the build system we like 4 spaces for continuation indent [1] Done. New webrev at: http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8196516/webrev.02

It's not nice, but there's no other way to do it right now. Approved. (I'm working on getting to the point where I can address this in a better way.)

/Magnus

Could someone from awt-dev have a look at this too? Thanks! Cheers, Severin /Erik [1] http://openjdk.java.net/groups/build/doc/code-conventions.html

On 2018-04-09 06:39, Severin Gehwolf wrote:
Hi, Could somebody please review this build fix for libfontmanager.so. The issue for us is that with some LDFLAGS the build breaks as described in bug JDK-8196218. However, we cannot link to a providing library at build-time since we don't know which one it should be: libawt_headless or libawt_xawt. That has to happen at runtime. The proposed fix filters out relevant linker flags when libfontmanager is being built. More details are in the bug.

Bug: https://bugs.openjdk.java.net/browse/JDK-8196516
webrev: http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8196516/webrev.01/

Testing: I've run this through submit[1] and got the following results. SwingSet2 works fine for me on F27. I'm currently running some more tests on RHEL 7.

- Mach5 mach5-one-sgehwolf-JDK-8196516-20180409-1036-17877: Builds PASSED. Testing FAILURE. 0 Failed Tests Mach5 Tasks Results Summary NA: 0 UNABLE_TO_RUN: 0 EXECUTED_WITH_FAILURE: 0 KILLED: 0 PASSED: 82 FAILED: 1

Test 1 Failed tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_2-windows-x64-debug-31 SetupFailedException in setup...profile run-test-prebuilt' , return value: 10

Not sure what this test failure means. Could somebody at Oracle shed some light on this?

Thanks, Severin
RFR: JDK-8201320 Feature request: Allow PrintFailureReports to be turned off
From the bug report:

"The compile errors you get from HotSpot are quite large, and usually don't get entirely printed in PrintFailureReports. This has the effect that the goto mode to find the compilation error is to scroll past PrintFailureReports to get to the complete error message. It would be nice if there was a way to turn off this feature from the command line."

I've solved this by adding a new LOG option, "report", which takes an argument: "report=default", "report=none" or "report=all". As usual, this can be combined with other LOG options, e.g. "LOG=info,report=all". The "default" value is what it has always been, giving you the first screenful of lines of each failure. "none" is what Stefan requested, and "all" means that there is no truncation, so in a sense, it's another way of giving Stefan what he wants. :-)

To make this usable in practice, I also implemented a feature I've been thinking about for a long time, but never gotten around to. And that is to be able to set a default value for LOG in configure, similar to how we can set default values for JOBS or the default make target. The new flag is "--with-log=", e.g. "--with-log=info,report=none". If a LOG= value is given on the command line, it overrides the default value provided to configure.

Bug: https://bugs.openjdk.java.net/browse/JDK-8201320
WebRev: http://cr.openjdk.java.net/~ihse/JDK-8201320-allow-disabling-of-exit-reports/webrev.01

/Magnus
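Splitting a combined value like "info,report=all" into its level and report parts can be illustrated with plain shell parameter expansion. This is only a sketch of the idea, not the real ParseLogLevel macro in InitSupport.gmk:

```shell
# A LOG value as it would appear on the make command line.
LOG_VALUE='info,report=all'
# Everything before the first comma is the log level.
LOG_LEVEL=${LOG_VALUE%%,*}
# Everything after "report=" is the report mode.
LOG_REPORT=${LOG_VALUE##*report=}
echo "level=$LOG_LEVEL report=$LOG_REPORT"   # -> level=info report=all
```

Typical invocations per the RFR would then be `make LOG=info,report=none images` or `bash configure --with-log=info,report=none` for a persistent default.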
Re: slowproduct build
On 2018-04-10 02:00, Ioi Lam wrote:
Sometimes I want to debug the product build (I can't bother with turning off all the trueInDebug options in the hotspot globals.hpp). The only way that I have found to do this is:

configure --with-native-debug-symbols=internal
mv spec.gmk spec.gmk.old
cat spec.gmk | sed -e 's/[-]O[0-9s]/-O0/g' > spec.gmk

Is there (or should there be) a more elegant way to do it, like "configure --with-debug-level=slowproduct" :-)

I'm not entirely sure of what you want to achieve. As I interpret your snippet above, you want no optimization and internal debug symbols..? How is that debugging a "product" build?

/Magnus

Thanks
- Ioi
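As a side note, the quoted recipe reads spec.gmk immediately after moving it away. A version that reads the backup instead, demonstrated on a made-up sample line (a sketch, not the real spec.gmk contents):

```shell
# Create a stand-in spec.gmk with typical optimization flags.
printf 'CXXFLAGS := -g -O3 -Os\n' > spec.gmk
mv spec.gmk spec.gmk.old
# Same substitution as in the mail, but reading the backup file:
# rewrite every -O<digit> or -Os into -O0.
sed -e 's/[-]O[0-9s]/-O0/g' spec.gmk.old > spec.gmk
cat spec.gmk   # -> CXXFLAGS := -g -O0 -O0
```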
Re: RFR: 8201360: Zero fails to build on linux-sparc after 8201236
Hi Magnus! On 04/10/2018 10:01 PM, Magnus Ihse Bursie wrote: The regression you noted was not caused by JDK-8201236, but by JDK-8198862 "Stop doing funky compilation stuff for dtrace". In fact, JDK-8201236 (which is pushed to jdk/jdk but not yet integrated into jdk/hs, apparently) will once again remove EXTRA_FILES from the SetupNativeCompilation, making zero work again. Ok, so I will update the bug report title accordingly. So if you just wait until JDK-8201236 moves into jdk/hs, this will be fixed. Otherwise, you're just creating a merge conflict for the integrator. :( Ah, even better. I don't mind waiting then and in the meantime continue investigating on the other SPARC stuff. Thanks, Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaub...@debian.org `. `' Freie Universitaet Berlin - glaub...@physik.fu-berlin.de `-GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913
Re: RFR: 8201360: Zero fails to build on linux-sparc after 8201236
On 2018-04-10 20:22, John Paul Adrian Glaubitz wrote: Hi! Please review this minor change which fixes the Zero build on linux-sparc which got broken after "JDK-8201236 Straighten out dtrace build logic". While I agree that the fix solves your problems, maybe you could delay pushing this for a while? The regression you noted was not caused by JDK-8201236, but by JDK-8198862 "Stop doing funky compilation stuff for dtrace". In fact, JDK-8201236 (which is pushed to jdk/jdk but not yet integrated into jdk/hs, apparently) will once again remove EXTRA_FILES from the SetupNativeCompilation, making zero work again. So if you just wait until JDK-8201236 moves into jdk/hs, this will be fixed. Otherwise, you're just creating a merge conflict for the integrator. :( /Magnus The change affects the Zero build on linux-sparc only since SPARC has its own implementation of memset_with_concurrent_readers() in memset_with_\ concurrent_readers_sparc.cpp which needs to be added to $(EXTRA_FILES) by adding $(BUILD_LIBJVM_EXTRA_FILES) to $(EXTRA_FILES). Thanks, Adrian [1] http://cr.openjdk.java.net/~glaubitz/8201360/webrev.00/
Re: 8201226 missing JNIEXPORT / JNICALL at some places in function declarations/implementations - was : RE: missing JNIEXPORT / JNICALL at some places in function declarations/implementations
Hi Matthias,

On 10/04/2018 11:14, Baesken, Matthias wrote:
Hello, I had to do another small adjustment to make jimage.hpp/cpp match. Please review: http://cr.openjdk.java.net/~mbaesken/webrevs/8201226.2/

JIMAGE_FindResource doesn't have the JNICALL modifier now, does it?

I've successfully built 32 bit Windows with your patch. Please remove JNIEXPORT from main():
src/java.base/share/native/launcher/main.c
src/jdk.pack/share/native/unpack200/main.cpp

With the latest webrev I could finally build jdk/jdk successfully on both win 32 bit and win 64 bit. Thanks again to Alexey for providing the incorporated patch.

You can reference both yourself and me as Contributed-by: mbaesken, aivanov when pushing the changeset if you don't mind.

Regards, Alexey

Best regards, Matthias

-Original Message-
From: Alexey Ivanov [mailto:alexey.iva...@oracle.com]
Sent: Montag, 9. April 2018 17:14
To: Baesken, Matthias; Magnus Ihse Bursie
Cc: build-dev; Doerr, Martin
Subject: Re: 8201226 missing JNIEXPORT / JNICALL at some places in function declarations/implementations - was: RE: missing JNIEXPORT / JNICALL at some places in function declarations/implementations

Hi Matthias,

On 09/04/2018 15:38, Baesken, Matthias wrote:
Hi Alexey, thanks for the diff provided by you, and for the explanations. I created a second webrev: http://cr.openjdk.java.net/~mbaesken/webrevs/8201226.1/
- it adds the diff provided by you (hope that's fine with you)

Yes, that's fine with me. There could be only one author ;)

- changes 2 launchers, src/java.base/share/native/launcher/main.c and src/jdk.pack/share/native/unpack200/main.cpp, where we face similar issues after mapfile removal for exes

I'd rather remove both JNIEXPORT and JNICALL from main(). It wasn't exported, and it shouldn't be.

Regards, Alexey

Best regards, Matthias
Re: RFR: 8201360: Zero fails to build on linux-sparc after 8201236
On 04/10/2018 08:51 PM, Gary Adams wrote: Is any update needed for EXTRA_OBJECT_FILES? No. I just made the single change Erik suggested and it builds fine for me. Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaub...@debian.org `. `' Freie Universitaet Berlin - glaub...@physik.fu-berlin.de `-GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913
Re: RFR: 8201360: Zero fails to build on linux-sparc after 8201236
Is any update needed for EXTRA_OBJECT_FILES? On 4/10/18, 2:32 PM, Erik Joelsson wrote: Looks good. /Erik On 2018-04-10 11:22, John Paul Adrian Glaubitz wrote: Hi! Please review this minor change which fixes the Zero build on linux-sparc which got broken after "JDK-8201236 Straighten out dtrace build logic". The change affects the Zero build on linux-sparc only since SPARC has its own implementation of memset_with_concurrent_readers() in memset_with_concurrent_readers_sparc.cpp which needs to be added to $(EXTRA_FILES) by adding $(BUILD_LIBJVM_EXTRA_FILES) to $(EXTRA_FILES). Thanks, Adrian [1] http://cr.openjdk.java.net/~glaubitz/8201360/webrev.00/
Re: RFR: 8201360: Zero fails to build on linux-sparc after 8201236
Looks good. /Erik On 2018-04-10 11:22, John Paul Adrian Glaubitz wrote: Hi! Please review this minor change which fixes the Zero build on linux-sparc which got broken after "JDK-8201236 Straighten out dtrace build logic". The change affects the Zero build on linux-sparc only since SPARC has its own implementation of memset_with_concurrent_readers() in memset_with_concurrent_readers_sparc.cpp which needs to be added to $(EXTRA_FILES) by adding $(BUILD_LIBJVM_EXTRA_FILES) to $(EXTRA_FILES). Thanks, Adrian [1] http://cr.openjdk.java.net/~glaubitz/8201360/webrev.00/
RFR: 8201360: Zero fails to build on linux-sparc after 8201236
Hi! Please review this minor change which fixes the Zero build on linux-sparc which got broken after "JDK-8201236 Straighten out dtrace build logic". The change affects the Zero build on linux-sparc only since SPARC has its own implementation of memset_with_concurrent_readers() in memset_with_concurrent_readers_sparc.cpp which needs to be added to $(EXTRA_FILES) by adding $(BUILD_LIBJVM_EXTRA_FILES) to $(EXTRA_FILES). Thanks, Adrian [1] http://cr.openjdk.java.net/~glaubitz/8201360/webrev.00/ -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaub...@debian.org `. `' Freie Universitaet Berlin - glaub...@physik.fu-berlin.de `-GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913
Re: Zero fails to build on SPARC again, similar to JDK-8186578
On 04/10/2018 08:04 PM, Erik Joelsson wrote: "JDK-8201236 Straighten out dtrace build logic" Aye, changeset is coming up ;). Indeed, this fixes it! Thanks so much, I was already about to give up ;). We should have been explicit with that parameter in the first place, then Magnus would not have missed it. Glad I could help. After that, I'll try to tackle the server build on linux-sparc again. Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaub...@debian.org `. `' Freie Universitaet Berlin - glaub...@physik.fu-berlin.de `-GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913
Re: Zero fails to build on SPARC again, similar to JDK-8186578
On 2018-04-10 10:50, John Paul Adrian Glaubitz wrote: Hi Erik! On 04/10/2018 06:54 PM, Erik Joelsson wrote: I've found the problem. In JvmFeatures.gmk we have: ifeq ($(call check-jvm-feature, zero), true) JVM_CFLAGS_FEATURES += -DZERO -DCC_INTERP -DZERO_LIBARCH='"$(OPENJDK_TARGET_CPU_LEGACY_LIB)"' $(LIBFFI_CFLAGS) JVM_LIBS_FEATURES += $(LIBFFI_LIBS) ifeq ($(OPENJDK_TARGET_CPU), sparcv9) BUILD_LIBJVM_EXTRA_FILES := $(TOPDIR)/src/hotspot/cpu/sparc/memset_with_concurrent_readers_sparc.cpp endif endif The BUILD_LIBJVM_EXTRA_FILES is implicitly trying to set the EXTRA_FILES argument to the BUILD_LIBJVM SetupNativeCompilation call. This used to work because there was no setting of that parameter in the actual call. In a recent change, that parameter is now set to something else, overriding the assignment above. Aha! Do you happen to know which change was responsible for that? Then I can adjust the bug summary accordingly. "JDK-8201236 Straighten out dtrace build logic" To fix this, you need to add $(BUILD_LIBJVM_EXTRA_FILES) to the EXTRA_FILES line in CompileJvm.gmk. Indeed, this fixes it! Thanks so much, I was already about to give up ;). We should have been explicit with that parameter in the first place, then Magnus would not have missed it. Glad I could help. /Erik Adrian
Re: Zero fails to build on SPARC again, similar to JDK-8186578
Hi Erik! On 04/10/2018 06:54 PM, Erik Joelsson wrote: I've found the problem. In JvmFeatures.gmk we have: ifeq ($(call check-jvm-feature, zero), true) JVM_CFLAGS_FEATURES += -DZERO -DCC_INTERP -DZERO_LIBARCH='"$(OPENJDK_TARGET_CPU_LEGACY_LIB)"' $(LIBFFI_CFLAGS) JVM_LIBS_FEATURES += $(LIBFFI_LIBS) ifeq ($(OPENJDK_TARGET_CPU), sparcv9) BUILD_LIBJVM_EXTRA_FILES := $(TOPDIR)/src/hotspot/cpu/sparc/memset_with_concurrent_readers_sparc.cpp endif endif The BUILD_LIBJVM_EXTRA_FILES is implicitly trying to set the EXTRA_FILES argument to the BUILD_LIBJVM SetupNativeCompilation call. This used to work because there was no setting of that parameter in the actual call. In a recent change, that parameter is now set to something else, overriding the assignment above. Aha! Do you happen to know which change was responsible for that? Then I can adjust the bug summary accordingly. To fix this, you need to add $(BUILD_LIBJVM_EXTRA_FILES) to the EXTRA_FILES line in CompileJvm.gmk. Indeed, this fixes it! Thanks so much, I was already about to give up ;). Adrian -- .''`. John Paul Adrian Glaubitz : :' : Debian Developer - glaub...@debian.org `. `' Freie Universitaet Berlin - glaub...@physik.fu-berlin.de `-GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913
Re: Supported platforms
Hi David, Speaking about the arm/ port, BellSoft has been releasing JCK-verified binaries (as provided under the OpenJDK license) from the arm/ port for the Raspberry Pi for JDK 9 [1] for a while and recently released one for JDK 10 [2], including OpenJFX and Minimal VM support. On Raspberry Pi 2 (ARMv7) and Raspberry Pi 3 (ARMv8 chip running Raspbian) the binaries produced from this port are passing all the required testing, including the new features recently open-sourced by Oracle (such as AppCDS). As far as JDK 11 is concerned, we are keeping track of the changes, recently fixed a couple of build issues that slipped in [3, 4], are working on Minimal Value Types support and, from what I can tell, will need to look into JFR and Nest-Based Access Control. Please let us know if we missed anything and we need to prepare for some other new features for porting. The intent is to keep the arm/ port in good shape and provide well-tested binaries for the Raspberry Pi. I believed Oracle was aware about BellSoft producing binaries from this port [5], [6]. Based on twitter, it seems like at least some engineers at Redhat and SAP are aware about it. Let me know if there is anything else we need to do to spread the word about it with Oracle engineering. For now, Boris (cced) is the engineer at BellSoft working on supporting the arm/ port for the Raspberry Pi. Other than that, I really wonder what "stepping up to take ownership of a port" means when it's upstream, and there is some procedure we need to follow. 
Thanks, -Aleksei [1] https://bell-sw.com/java-for-raspberry-pi-9.0.4.html [2] https://bell-sw.com/java-for-raspberry-pi.html [3] https://bugs.openjdk.java.net/browse/JDK-8200628 [4] https://bugs.openjdk.java.net/browse/JDK-8198949 [5] https://twitter.com/java/status/981239157874941955 [6] https://twitter.com/DonaldOJDK/status/981874485979746304 We are in a situation where previously "supported" platforms (by Oracle) are no longer supported as, AFAIK, no one has stepped up to take ownership of said platforms - which is a requirement for getting a new port accepted into mainline. Without such ownership the code may not only bit-rot, it may in time be stripped out completely. Any interested parties would then need to look at (re)forming a port project for that platform to keep it going in OpenJDK (or of course they are free to take it elsewhere). Cheers, David On 09/04/2018 18:35, White, Derek wrote: Hi Magnus, -Original Message- Date: Mon, 9 Apr 2018 09:55:09 +0200 From: Magnus Ihse Bursie To: Simon Nash, b...@juanantonio.info Cc: build-dev@openjdk.java.net, hotspot-dev developers Subject: Re: Supported platforms Message-ID: <4b1f262d-b9d2-6844-e453-dd53b42b2...@oracle.com> Content-Type: text/plain; charset=utf-8; format=flowed Simon, On 2018-04-08 16:30, Simon Nash wrote: On 05/04/2018 02:26, b...@juanantonio.info wrote: Many thanks with the link about the Platforms supported: http://www.oracle.com/technetwork/java/javase/documentation/jdk10certconfig-4417031.html This appears to be a list of the platforms that are supported (certified) by Oracle. Where can I find the list of platforms that are supported by OpenJDK? For example, what about the following platforms that don't appear on the Oracle list: Windows x86 Linux x86 aarch32 (ARMv7 32-bit) aarch64 (ARMv8 64-bit) Are all these supported for OpenJDK 9, 10 and 11? There is actually no such thing as a "supported OpenJDK platform". 
While I hope things may change in the future, OpenJDK as an organization does not publicize any list of "supported" platforms. Oracle publishes a list of platforms they support, and I presume that Red Hat and SAP and others do the same, but the OpenJDK project itself does not. With that said, platforms which were previously supported by Oracle (like the ones you mentioned) tend to still work more-or-less well, but they receive no or little testing, and are prone to bit rot. /Magnus Surely you meant to say "receive no or little testing BY ORACLE, and ORACLE IS NOT RESPONSIBLE FOR ANY bit rot." I haven't found a definitive list of supported OpenJDK platforms, but have an ad-hoc list of publicly available binaries: - Major linux distros are supporting x64 and aarch64 (arm64), and probably other platforms. - AdoptOpenJDK provides tested builds for most 64-bit platforms (x64, aarch64, ppc64, s390). - https://adoptopenjdk.net/releases.html - Bellsoft provides support for 32-bit ARMv7. - https://bell-sw.com/products.html - Azul provides 32-bit x86 and ARMv7 binaries as well as 64-bit x86 and aarch64. - https://www.azul.com/downloads/zulu/ I'm sure there are several others I've missed - sorry! - Derek
Re: Supported platforms
David, see inline: On 10/04/2018 11:00, David Holmes wrote: Hi Aleksei, This is all news to me. Good news, but unexpected. As far as I was aware the 32-bit ARM port was dying a slow death and would eventually get removed. 64-bit ARM is of course very much alive and well under the Aarch64 porters - though I'm unclear if you are using that code for ARMv8 or the Oracle contributed code that used to be closed? You are welcome :) We are doing everything possible to keep it running, so I don't see any reason within OpenJDK to it being removed. Regarding ARMv8 port, we are working with Cavium, Redhat, and Linaro on supporting the AARCH64 port. I'm also unclear whether you are pushing changes back up to OpenJDK for these platforms, or maintaining them locally? I haven't noticed anything (other than build tweaks) and am curious about the release of a Minimal VM for JDK 10 given the Minimal VM effectively died along with the stand-alone Client VM. We push everything upstream. I'm not sure why you believe Minimal VM and Client VM died in OpenJDK 10. From what I remember, there was some decision related to Client VM for Oracle binaries, but support for C1 and Minimal VM was not removed from OpenJDK codebase. 
This is what I get with BellSoft Liberica binaries built from OpenJDK on Raspberry Pi: pi@rpi-3:~ $ java -version openjdk version "10-BellSoft" 2018-03-20 OpenJDK Runtime Environment (build 10-BellSoft+0) OpenJDK Server VM (build 10-BellSoft+0, mixed mode) pi@rpi-3:~ $ java -minimal -version openjdk version "10-BellSoft" 2018-03-20 OpenJDK Runtime Environment (build 10-BellSoft+0) OpenJDK Minimal VM (build 10-BellSoft+0, mixed mode) pi@rpi-3:~ $ java -client -version openjdk version "10-BellSoft" 2018-03-20 OpenJDK Runtime Environment (build 10-BellSoft+0) OpenJDK Client VM (build 10-BellSoft+0, mixed mode) pi@rpi-3:~ $ java -minimal -XX:+PrintFlagsFinal HW | grep C1 bool C1OptimizeVirtualCallProfiling = true {C1 product} {default} bool C1ProfileBranches = true {C1 product} {default} bool C1ProfileCalls = true {C1 product} {default} bool C1ProfileCheckcasts = true {C1 product} {default} bool C1ProfileInlinedCalls = true {C1 product} {default} bool C1ProfileVirtualCalls = true {C1 product} {default} bool C1UpdateMethodData = false {C1 product} {default} bool InlineSynchronizedMethods = true {C1 product} {default} bool LIRFillDelaySlots = false {C1 pd product} {default} bool TimeLinearScan = false {C1 product} {default} bool UseLoopInvariantCodeMotion = true {C1 product} {default} intx ValueMapInitialSize = 11 {C1 product} {default} intx ValueMapMaxLoopSize = 8 {C1 product} {default} Minimal VM and Client VM include C1, and Server VM includes C1 and C2. All (Client VM, Server VM, Minimal VM) were tested and work as expected. For JDK11 you will need to do some work for Condy (if not already done) as well as JFR and Nest-based Access Control (which you can see in the nestmates branch of the Valhalla repo), as you mention below. Not sure what else may be needed. There's been a lot of code refactoring and include file changes that have impact on platform specific code as well. Thanks for the heads-up! 
-Aleksei Cheers, David On 10/04/2018 5:23 PM, Aleksei Voitylov wrote: Hi David, Speaking about the arm/ port, BellSoft has been releasing JCK-verified binaries (as provided under the OpenJDK license) from the arm/ port for the Raspberry Pi for JDK 9 [1] for a while and recently released one for JDK 10 [2], including OpenJFX and Minimal VM support. On Raspberry Pi 2 (ARMv7) and Raspberry Pi 3 (ARMv8 chip running Raspbian) the binaries produced from this port are passing all the required testing, including the new features recently open-sourced by Oracle (such as AppCDS). As far as JDK 11 is concerned, we are keeping track of the changes, recently fixed a couple of build issues that slipped in [3, 4], are working on Minimal Value Types support and, from what I can tell, will need to look into JFR and Nest-Based Access Control. Please let us know if we missed anything and we need to prepare for some other new features for porting. The intent is to keep the arm/ port in good shape and provide well-tested binaries
Re: Supported platforms
David, Sorry I was mis-remembering. Of course C1 and Minimal still exist in the 32-bit code. Though I would be concerned that with a focus on 64-bit it will be easy for engineers to overlook things that should be inside one of the INCLUDE_XXX guards. (Particularly as we don't do any 32-bit builds and for the most part don't even have the capability to perform them.) It seems like BellSoft is not alone in building 32-bit code so we hope the community will notice if something gets broken. That said, it does not seem like too big of a deal to have, say, Linux x86 32 bit builds running in any build farm. This should catch most of the INCLUDE_XXX types of errors. -Aleksei On 10/04/2018 12:24, David Holmes wrote: On 10/04/2018 7:17 PM, Aleksei Voitylov wrote: David, see inline: On 10/04/2018 11:00, David Holmes wrote: Hi Aleksei, This is all news to me. Good news, but unexpected. As far as I was aware the 32-bit ARM port was dying a slow death and would eventually get removed. 64-bit ARM is of course very much alive and well under the Aarch64 porters - though I'm unclear if you are using that code for ARMv8 or the Oracle contributed code that used to be closed? You are welcome :) We are doing everything possible to keep it running, so I don't see any reason within OpenJDK to it being removed. Regarding ARMv8 port, we are working with Cavium, Redhat, and Linaro on supporting the AARCH64 port. I'm also unclear whether you are pushing changes back up to OpenJDK for these platforms, or maintaining them locally? I haven't noticed anything (other than build tweaks) and am curious about the release of a Minimal VM for JDK 10 given the Minimal VM effectively died along with the stand-alone Client VM. We push everything upstream. I don't recall seeing anything related to 32-bit ARM, but perhaps it's all in areas I don't see (like compiler and gc). I'm not sure why you believe Minimal VM and Client VM died in OpenJDK 10. 
From what I remember, there was some decision related to Client VM for Oracle binaries, but support for C1 and Minimal VM was not removed from OpenJDK codebase. This is what I get with BellSoft Liberica binaries built from OpenJDK on Raspberry Pi: Sorry I was mis-remembering. Of course C1 and Minimal still exist in the 32-bit code. Though I would be concerned that with a focus on 64-bit it will be easy for engineers to overlook things that should be inside one of the INCLUDE_XXX guards. (Particularly as we don't do any 32-bit builds and for the most part don't even have the capability to perform them.) Cheers, David pi@rpi-3:~ $ java -version openjdk version "10-BellSoft" 2018-03-20 OpenJDK Runtime Environment (build 10-BellSoft+0) OpenJDK Server VM (build 10-BellSoft+0, mixed mode) pi@rpi-3:~ $ java -minimal -version openjdk version "10-BellSoft" 2018-03-20 OpenJDK Runtime Environment (build 10-BellSoft+0) OpenJDK Minimal VM (build 10-BellSoft+0, mixed mode) pi@rpi-3:~ $ java -client -version openjdk version "10-BellSoft" 2018-03-20 OpenJDK Runtime Environment (build 10-BellSoft+0) OpenJDK Client VM (build 10-BellSoft+0, mixed mode) pi@rpi-3:~ $ java -minimal -XX:+PrintFlagsFinal HW | grep C1 bool C1OptimizeVirtualCallProfiling = true {C1 product} {default} bool C1ProfileBranches = true {C1 product} {default} bool C1ProfileCalls = true {C1 product} {default} bool C1ProfileCheckcasts = true {C1 product} {default} bool C1ProfileInlinedCalls = true {C1 product} {default} bool C1ProfileVirtualCalls = true {C1 product} {default} bool C1UpdateMethodData = false {C1 product} {default} bool InlineSynchronizedMethods = true {C1 product} {default} bool LIRFillDelaySlots = false {C1 pd product} {default} bool TimeLinearScan = false {C1 product} {default} bool UseLoopInvariantCodeMotion = true {C1 product} {default} intx ValueMapInitialSize = 11 {C1 product} {default} intx ValueMapMaxLoopSize = 8 {C1 product} {default} Minimal VM and Client VM include C1, and Server VM 
includes C1 and C2. All (Client VM, Server VM, Minimal VM) were tested and work as expected. For JDK11 you will need to do some work for Condy (if not already done) as well as JFR and Nest-based Access Control (which you can see
Re: Zero fails to build on SPARC again, similar to JDK-8186578
I've found the problem. In JvmFeatures.gmk we have: ifeq ($(call check-jvm-feature, zero), true) JVM_CFLAGS_FEATURES += -DZERO -DCC_INTERP -DZERO_LIBARCH='"$(OPENJDK_TARGET_CPU_LEGACY_LIB)"' $(LIBFFI_CFLAGS) JVM_LIBS_FEATURES += $(LIBFFI_LIBS) ifeq ($(OPENJDK_TARGET_CPU), sparcv9) BUILD_LIBJVM_EXTRA_FILES := $(TOPDIR)/src/hotspot/cpu/sparc/memset_with_concurrent_readers_sparc.cpp endif endif The BUILD_LIBJVM_EXTRA_FILES is implicitly trying to set the EXTRA_FILES argument to the BUILD_LIBJVM SetupNativeCompilation call. This used to work because there was no setting of that parameter in the actual call. In a recent change, that parameter is now set to something else, overriding the assignment above. To fix this, you need to add $(BUILD_LIBJVM_EXTRA_FILES) to the EXTRA_FILES line in CompileJvm.gmk. /Erik On 2018-04-10 04:58, John Paul Adrian Glaubitz wrote: On 04/10/2018 01:37 PM, John Paul Adrian Glaubitz wrote: @buildd-dev: I need to build memset_with_concurrent_readers_sparc.cpp for Zero on SPARC as the Zero build now bails out with linker errors: Add the source file in question to EXTRA_FILES: glaubitz@deb4g:/srv/glaubitz/hs$ hg diff diff -r b3c09ab95c1a make/hotspot/lib/CompileGtest.gmk --- a/make/hotspot/lib/CompileGtest.gmk Tue Apr 10 12:21:58 2018 +0200 +++ b/make/hotspot/lib/CompileGtest.gmk Tue Apr 10 14:57:05 2018 +0300 @@ -71,7 +71,8 @@ EXCLUDES := $(JVM_EXCLUDES), \ EXCLUDE_FILES := gtestLauncher.cpp, \ EXCLUDE_PATTERNS := $(JVM_EXCLUDE_PATTERNS), \ - EXTRA_FILES := $(GTEST_FRAMEWORK_SRC)/src/gtest-all.cc, \ + EXTRA_FILES := $(GTEST_FRAMEWORK_SRC)/src/gtest-all.cc \ + $(TOPDIR)/src/hotspot/cpu/sparc/memset_with_concurrent_readers_sparc.cpp, \ EXTRA_OBJECT_FILES := $(filter-out %/operator_new$(OBJ_SUFFIX), \ $(BUILD_LIBJVM_ALL_OBJS)), \ CFLAGS := $(JVM_CFLAGS) -I$(GTEST_FRAMEWORK_SRC) \ @@ -109,7 +110,8 @@ NAME := gtestLauncher, \ TYPE := EXECUTABLE, \ OUTPUT_DIR := $(JVM_OUTPUTDIR)/gtest, \ - EXTRA_FILES := $(GTEST_LAUNCHER_SRC), \ + EXTRA_FILES := 
$(GTEST_LAUNCHER_SRC) \ + $(TOPDIR)/src/hotspot/cpu/sparc/memset_with_concurrent_readers_sparc.cpp, \ OBJECT_DIR := $(JVM_OUTPUTDIR)/gtest/launcher-objs, \ CFLAGS := $(JVM_CFLAGS) -I$(GTEST_FRAMEWORK_SRC) \ -I$(GTEST_FRAMEWORK_SRC)/include, \ glaubitz@deb4g:/srv/glaubitz/hs$ Causes the object files to be built. But for some reason, the linker is not picking up those object files even though they are located in the object directories of gtest: glaubitz@deb4g:/srv/glaubitz/hs$ find . -name "memset_with_concurrent_readers_sparc.o" ./build/linux-sparcv9-normal-zero-release/hotspot/variant-zero/libjvm/gtest/objs/memset_with_concurrent_readers_sparc.o ./build/linux-sparcv9-normal-zero-release/hotspot/variant-zero/libjvm/gtest/launcher-objs/memset_with_concurrent_readers_sparc.o glaubitz@deb4g:/srv/glaubitz/hs$ Adrian
Re: Supported platforms
Hi all, Apologies for being late to the thread. We're looking to provide a central place for OpenJDK binaries to be produced and have some level of community support for the more esoteric platforms. Perhaps this is a place where you can get some extra assistance in keeping a particular port vibrant (having regularly tested builds helps). See the following for details: * https://www.github.com/adoptopejdk/TSC - Build Farm overview * https://www.adoptopenjdk.net (end user site) * https://www.adoptopenjdk.net/slack (join ~200 volunteers!) * https://ci.adoptopenjdk.net (Jenkins cluster) It's still a work in progress but many of the OpenJDK vendors you know of today are involved (Oracle, IBM, Red Hat, SAP et al) and so we're hopeful that the collaboration there will help the more esoteric platforms that do gain genuine popularity and support can flourish without adding further burden on a single vendor. We'll be following a very strong policy of reporting and submitting patches upstream to OpenJDK / OpenJ9 / , exactly how it works today with various porting and downstream implementations today. This isn't strictly an OpenJDK effort, so please do head over to https://www.adoptopenjdk.net/slack to find out more or ping the adoption-discuss mailing list here at OpenJDK. Cheers, Martijn On 10 April 2018 at 10:24, David Holmes wrote: > On 10/04/2018 7:17 PM, Aleksei Voitylov wrote: > >> David, >> >> see inline: >> >> >> On 10/04/2018 11:00, David Holmes wrote: >> >>> Hi Aleksei, >>> >>> This is all news to me. Good news, but unexpected. As far as I was aware >>> the 32-bit ARM port was dying a slow death and would eventually get >>> removed. 64-bit ARM is of course very much alive and well under the Aarch64 >>> porters - though I'm unclear if you are using that code for ARMv8 or the >>> Oracle contributed code that used to be closed? >>> >> You are welcome :) We are doing everything possible to keep it running, >> so I don't see any reason within OpenJDK to it being removed. 
Regarding >> ARMv8 port, we are working with Cavium, Redhat, and Linaro on supporting >> the AARCH64 port. >> >>> >>> I'm also unclear whether you are pushing changes back up to OpenJDK for >>> these platforms, or maintaining them locally? I haven't noticed anything >>> (other than build tweaks) and am curious about the release of a Minimal VM >>> for JDK 10 given the Minimal VM effectively died along with the stand-alone >>> Client VM. >>> >> We push everything upstream. >> > > I don't recall seeing anything related to 32-bit ARM, but perhaps it's all > in areas I don't see (like compiler and gc). > > I'm not sure why you believe Minimal VM and Client VM died in OpenJDK 10. >> From what I remember, there was some decision related to Client VM for >> Oracle binaries, but support for C1 and Minimal VM was not removed from >> OpenJDK codebase. This is what I get with BellSoft Liberica binaries built >> from OpenJDK on Raspberry Pi: >> > > Sorry I was mis-remembering. Of course C1 and Minimal still exist in the > 32-bit code. Though I would be concerned that with a focus on 64-bit it > will be easy for engineers to overlook things that should be inside one of > the INCLUDE_XXX guards. (Particularly as we don't do any 32-bit builds and > for the most part don't even have the capability to perform them.) 
> > Cheers, > David > > pi@rpi-3:~ $ java -version >>> openjdk version "10-BellSoft" 2018-03-20 >>> OpenJDK Runtime Environment (build 10-BellSoft+0) >>> OpenJDK Server VM (build 10-BellSoft+0, mixed mode) >>> pi@rpi-3:~ $ java -minimal -version >>> openjdk version "10-BellSoft" 2018-03-20 >>> OpenJDK Runtime Environment (build 10-BellSoft+0) >>> OpenJDK Minimal VM (build 10-BellSoft+0, mixed mode) >>> pi@rpi-3:~ $ java -client -version >>> openjdk version "10-BellSoft" 2018-03-20 >>> OpenJDK Runtime Environment (build 10-BellSoft+0) >>> OpenJDK Client VM (build 10-BellSoft+0, mixed mode) >>> >> >> pi@rpi-3:~ $ java -minimal -XX:+PrintFlagsFinal HW | grep C1 >>> bool C1OptimizeVirtualCallProfiling = >>> true {C1 product} {default} >>> bool C1ProfileBranches= >>> true {C1 product} {default} >>> bool C1ProfileCalls = >>> true {C1 product} {default} >>> bool C1ProfileCheckcasts = >>> true {C1 product} {default} >>> bool C1ProfileInlinedCalls= >>> true {C1 product} {default} >>> bool C1ProfileVirtualCalls= >>> true {C1 product} {default} >>> bool C1UpdateMethodData = >>> false {C1 product} {default} >>> bool InlineSynchronizedMethods= >>> true {C1 product}
Re: RFR: 8196516: libfontmanager must be built with LDFLAGS allowing unresolved symbols
On 2018-04-10 04:30, Severin Gehwolf wrote: On Mon, 2018-04-09 at 15:39 +0200, Severin Gehwolf wrote: Hi, Could somebody please review this build fix for libfontmanager.so. The issue for us is that with some LDFLAGS the build breaks as described in bug JDK-8196218. However, we cannot link to a providing library at build-time since we don't know which one it should be: libawt_headless or libawt_xawt. That has to happen at runtime. The proposed fix filters out relevant linker flags when libfontmanager is being built. More details are in the bug. Bug: https://bugs.openjdk.java.net/browse/JDK-8196516 Latest webrev: http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8196516/webrev.02/ Testing: I've run this through submit[1] and got the following results. SwingSet2 works fine for me on F27. I'm currently running some more tests on RHEL 7. I've finished testing on RHEL 7 by manually verifying that not both libawt_xawt and libawt_headless get loaded when running SwingSet. Could somebody tell me what this failure below is about? This was an internal infrastructure failure. I would say it's safe to ignore. /Erik Thanks, Severin - Mach5 mach5-one-sgehwolf-JDK-8196516-20180409-1036-17877: Builds PASSED. Testing FAILURE. 0 Failed Tests Mach5 Tasks Results Summary NA: 0 UNABLE_TO_RUN: 0 EXECUTED_WITH_FAILURE: 0 KILLED: 0 PASSED: 82 FAILED: 1 Test 1 Failed tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_2-windows-x64- debug-31 SetupFailedException in setup...profile run-test-prebuilt' , return value: 10 Not sure what this test failure means. Could somebody at Oracle shed some light on this? Thanks, Severin
Re: RFR: 8196516: libfontmanager must be built with LDFLAGS allowing unresolved symbols
Looks good. Thanks! /Erik On 2018-04-10 04:25, Severin Gehwolf wrote: Hi Erik, On Mon, 2018-04-09 at 09:20 -0700, Erik Joelsson wrote: Hello Severin, I'm ok with this solution for now. Thanks for the review! Could you please reduce the indentation on line 652. In the build system we like 4 spaces for continuation indent [1] Done. New webrev at: http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8196516/webrev.02 Could someone from awt-dev have a look at this too? Thanks! Cheers, Severin /Erik [1] http://openjdk.java.net/groups/build/doc/code-conventions.html On 2018-04-09 06:39, Severin Gehwolf wrote: Hi, Could somebody please review this build fix for libfontmanager.so. The issue for us is that with some LDFLAGS the build breaks as described in bug JDK-8196218. However, we cannot link to a providing library at build-time since we don't know which one it should be: libawt_headless or libawt_xawt. That has to happen at runtime. The proposed fix filters out relevant linker flags when libfontmanager is being built. More details are in the bug. Bug: https://bugs.openjdk.java.net/browse/JDK-8196516 webrev: http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8196516/webrev.01/ Testing: I've run this through submit[1] and got the following results. SwingSet2 works fine for me on F27. I'm currently running some more tests on RHEL 7. - Mach5 mach5-one-sgehwolf-JDK-8196516-20180409-1036-17877: Builds PASSED. Testing FAILURE. 0 Failed Tests Mach5 Tasks Results Summary NA: 0 UNABLE_TO_RUN: 0 EXECUTED_WITH_FAILURE: 0 KILLED: 0 PASSED: 82 FAILED: 1 Test 1 Failed tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_2-windows-x64- debug-31 SetupFailedException in setup...profile run-test-prebuilt' , return value: 10 Not sure what this test failure means. Could somebody at Oracle shed some light on this? Thanks, Severin
Re: [11] RFR: 8189784: Parsing with Java 9 AKST timezone returns the SystemV variant of the timezone
This looks very good. Thanks! (reviewed build part) /Erik On 2018-04-09 18:00, naoto.s...@oracle.com wrote: Thanks, Erik. Modified GensrcCLDR.gmk as suggested: http://cr.openjdk.java.net/~naoto/8189784/webrev.03/ Naoto On 4/9/18 4:15 PM, Erik Joelsson wrote: Hello Naoto, Looks good, just a style issue. When breaking a recipe line, please add 4 additional spaces (after the tab) for continuation indent [1]. In this case I would also advocate breaking the line sooner to make side by side comparisons easier in the future. /Erik [1] http://openjdk.java.net/groups/build/doc/code-conventions.html On 2018-04-09 13:20, Naoto Sato wrote: Hello, Please review the changes to the following issue: https://bugs.openjdk.java.net/browse/JDK-8189784 The proposed changeset is located at: http://cr.openjdk.java.net/~naoto/8189784/webrev.02/ There were two causes of the issue. One was that j.t.f.ZoneName contained hard coded mappings based on the old CLDR data and never been updated. The other reason was that CLDR's deprecated zones ("SystemV" ones, in this case) were not properly replaced. I am including build-dev for the review, as the change includes generating ZoneName mapping at the build time from the template. Naoto
Re: 8201226 missing JNIEXPORT / JNICALL at some places in function declarations/implementations - was : RE: missing JNIEXPORT / JNICALL at some places in function declarations/implementations
> 10 apr. 2018 kl. 12:14 skrev Baesken, Matthias: > > Hello, I had to do another small adjustment to make jimage.hpp/cpp match. > Please review: > > http://cr.openjdk.java.net/~mbaesken/webrevs/8201226.2/ Looks good to me. /Magnus > > With the latest webrev I could finally build jdk/jdk successfully on > both win 32 bit and win 64 bit. > > > > Thanks again to Alexey to provide the incorporated patch. > > > Best regards, Matthias > > > >> -Original Message- >> From: Alexey Ivanov [mailto:alexey.iva...@oracle.com] >> Sent: Montag, 9. April 2018 17:14 >> To: Baesken, Matthias ; Magnus Ihse Bursie >> >> Cc: build-dev ; Doerr, Martin >> >> Subject: Re: 8201226 missing JNIEXPORT / JNICALL at some places in function >> declarations/implementations - was : RE: missing JNIEXPORT / JNICALL at >> some places in function declarations/implementations >> >> Hi Matthias, >> >>> On 09/04/2018 15:38, Baesken, Matthias wrote: >>> Hi Alexey, thanks for the diff provided by you, and for the >>> explanations. >>> >>> I created a second webrev: >>> >>> http://cr.openjdk.java.net/~mbaesken/webrevs/8201226.1/ >>> >>> - it adds the diff provided by you (hope that’s fine with you) >> >> Yes, that's fine with me. >> There could be only one author ;) >> >>> - changes 2 launchers src/java.base/share/native/launcher/main.c >>> and >> src/jdk.pack/share/native/unpack200/main.cpp where we face similar >> issues after mapfile removal for exes >> >> I'd rather remove both JNIEXPORT and JNICALL from main(). >> It wasn't exported, and it shouldn't be. >> >> Regards, >> Alexey >> >>> >>> >>> >>> Best regards, Matthias >>> >>> -Original Message- From: Alexey Ivanov [mailto:alexey.iva...@oracle.com] Sent: Montag, 9. 
April 2018 15:52 To: Magnus Ihse Bursie ; Baesken, Matthias Cc: build-dev ; Doerr, Martin Subject: Re: 8201226 missing JNIEXPORT / JNICALL at some places in >> function declarations/implementations - was : RE: missing JNIEXPORT / JNICALL at some places in function declarations/implementations Hi Matthias, Magnus, The problem is with JNICALL added to the functions. JNICALL expands to __stdcall [1] on Windows. On 64 bit, the modifier has no effect and is ignored. On 32 bit, the modifier changes the way parameters are passed, and the function name is decorated with an underscore and with @ + the size of arguments. If I remove the JNICALL modifier from the exported functions, they're exported with the plain name as in the source code (plus, the __cdecl [2] calling convention is used.) zip.dll and jimage.dll are affected by this. It's because the exported functions are looked up by name rather than using a header file with JNIIMPORT. See >> http://hg.openjdk.java.net/jdk/client/file/tip/src/hotspot/share/classfile/classLoader.cpp#l1155 >> http://hg.openjdk.java.net/jdk/client/file/tip/src/hotspot/share/classfile/classLoader.cpp#l1194 The JNICALL modifier must also be removed from type declarations for functions from zip.dll: >> http://hg.openjdk.java.net/jdk/client/file/tip/src/hotspot/share/classfile/classLoader.cpp#l81 I'm attaching the patch to zip and jimage as well as classLoader.cpp which takes these changes into account. It does not replace Matthias' patch but complements it. Alternatively, if keeping JNICALL / __stdcall, it's possible to use pragma comments [3][4] to export undecorated names. But this is >> compiler specific and may require #ifs for other platforms. 
Regards, Alexey [1] https://msdn.microsoft.com/en-us/library/zxk0tw93.aspx [2] https://msdn.microsoft.com/en-us/library/zkwh89ks.aspx [3] https://docs.microsoft.com/en-ie/cpp/build/reference/exports [4] https://docs.microsoft.com/en-ie/cpp/preprocessor/comment-c-cpp > On 09/04/2018 12:42, Magnus Ihse Bursie wrote: > Those were added by my patch that removed the map files. > > /Magnus > >> 9 apr. 2018 kl. 13:38 skrev Baesken, Matthias : >> I did not add JNICALL decorations to any libzip functions, please >> see my webrev : >> http://cr.openjdk.java.net/~mbaesken/webrevs/8201226/ >> >>> , the problem here >>> is the added JNICALL decoration, which affects only win32 (and incorrectly, >>> that is). >> so I wonder which added JNICALL decoration you are referring to. >> >> Best regards, Matthias >> >> >>> -Original
JDK 11 hotspot build fails with "Undefined symbol" on AIX
libjvm.loadmap Description: Binary data
Re: Zero fails to build on SPARC again, similar to JDK-8186578
On 04/10/2018 01:34 PM, David Holmes wrote: Adding in build-dev. I think the build guys need to help you figure this out. Good idea. @build-dev: I need to build memset_with_concurrent_readers_sparc.cpp for Zero on SPARC as the Zero build now bails out with linker errors: === Output from failing command(s) repeated here === /usr/bin/printf "* For target hotspot_variant-zero_libjvm_gtest_objs_BUILD_GTEST_LIBJVM_link:\n" * For target hotspot_variant-zero_libjvm_gtest_objs_BUILD_GTEST_LIBJVM_link: (/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/hs/build/linux-sparcv9-normal-zero-release/make-support/failure-logs/hotspot_variant-zero_libjvm_gtest_objs_BUILD_GTEST_LIBJVM_link.log || true) | /usr/bin/head -n 12 /srv/glaubitz/hs/build/linux-sparcv9-normal-zero-release/hotspot/variant-zero/libjvm/gtest/objs/test_memset_with_concurrent_readers.o: In function `gc_memset_with_concurrent_readers_test_Test::TestBody()': /srv/glaubitz/hs/test/hotspot/gtest/gc/shared/test_memset_with_concurrent_readers.cpp:66: undefined reference to `memset_with_concurrent_readers(void*, int, unsigned long)' /srv/glaubitz/hs/build/linux-sparcv9-normal-zero-release/hotspot/variant-zero/libjvm/objs/blockOffsetTable.o: In function `BlockOffsetArray::single_block(HeapWord*, HeapWord*)': /srv/glaubitz/hs/src/hotspot/share/gc/shared/blockOffsetTable.hpp:160: undefined reference to `memset_with_concurrent_readers(void*, int, unsigned long)' /srv/glaubitz/hs/src/hotspot/share/gc/shared/blockOffsetTable.hpp:160: undefined reference to `memset_with_concurrent_readers(void*, int, unsigned long)' /srv/glaubitz/hs/build/linux-sparcv9-normal-zero-release/hotspot/variant-zero/libjvm/objs/blockOffsetTable.o: In function `BlockOffsetArrayNonContigSpace::alloc_block(HeapWord*, HeapWord*)': /srv/glaubitz/hs/src/hotspot/share/gc/shared/blockOffsetTable.hpp:160: undefined reference to `memset_with_concurrent_readers(void*, int, unsigned long)' 
/srv/glaubitz/hs/src/hotspot/share/gc/shared/blockOffsetTable.hpp:160: undefined reference to `memset_with_concurrent_readers(void*, int, unsigned long)' /srv/glaubitz/hs/build/linux-sparcv9-normal-zero-release/hotspot/variant-zero/libjvm/objs/blockOffsetTable.o:/srv/glaubitz/hs/src/hotspot/share/gc/shared/blockOffsetTable.hpp:160: more undefined references to `memset_with_concurrent_readers(void*, int, unsigned long)' follow collect2: error: ld returned 1 exit status if test `/usr/bin/wc -l < /srv/glaubitz/hs/build/linux-sparcv9-normal-zero-release/make-support/failure-logs/hotspot_variant-zero_libjvm_gtest_objs_BUILD_GTEST_LIBJVM_link.log` -gt 12; then /bin/echo " ... (rest of output omitted)" ; fi /usr/bin/printf "* For target hotspot_variant-zero_libjvm_objs_BUILD_LIBJVM_link:\n" * For target hotspot_variant-zero_libjvm_objs_BUILD_LIBJVM_link: (/bin/grep -v -e "^Note: including file:" < /srv/glaubitz/hs/build/linux-sparcv9-normal-zero-release/make-support/failure-logs/hotspot_variant-zero_libjvm_objs_BUILD_LIBJVM_link.log || true) | /usr/bin/head -n 12 /srv/glaubitz/hs/build/linux-sparcv9-normal-zero-release/hotspot/variant-zero/libjvm/objs/blockOffsetTable.o: In function `BlockOffsetArray::single_block(HeapWord*, HeapWord*)': /srv/glaubitz/hs/src/hotspot/share/gc/shared/blockOffsetTable.hpp:160: undefined reference to `memset_with_concurrent_readers(void*, int, unsigned long)' /srv/glaubitz/hs/src/hotspot/share/gc/shared/blockOffsetTable.hpp:160: undefined reference to `memset_with_concurrent_readers(void*, int, unsigned long)' /srv/glaubitz/hs/build/linux-sparcv9-normal-zero-release/hotspot/variant-zero/libjvm/objs/blockOffsetTable.o: In function `BlockOffsetArrayNonContigSpace::alloc_block(HeapWord*, HeapWord*)': /srv/glaubitz/hs/src/hotspot/share/gc/shared/blockOffsetTable.hpp:160: undefined reference to `memset_with_concurrent_readers(void*, int, unsigned long)' /srv/glaubitz/hs/src/hotspot/share/gc/shared/blockOffsetTable.hpp:160: undefined reference to 
`memset_with_concurrent_readers(void*, int, unsigned long)' /srv/glaubitz/hs/build/linux-sparcv9-normal-zero-release/hotspot/variant-zero/libjvm/objs/blockOffsetTable.o: In function `BlockOffsetArray::BlockOffsetArray(BlockOffsetSharedArray*, MemRegion, bool)': /srv/glaubitz/hs/src/hotspot/share/gc/shared/blockOffsetTable.hpp:160: undefined reference to `memset_with_concurrent_readers(void*, int, unsigned long)' /srv/glaubitz/hs/build/linux-sparcv9-normal-zero-release/hotspot/variant-zero/libjvm/objs/blockOffsetTable.o:/srv/glaubitz/hs/src/hotspot/share/gc/shared/blockOffsetTable.hpp:160: more undefined references to `memset_with_concurrent_readers(void*, int, unsigned long)' follow collect2: error: ld returned 1 exit status if test `/usr/bin/wc -l < /srv/glaubitz/hs/build/linux-sparcv9-normal-zero-release/make-support/failure-logs/hotspot_variant-zero_libjvm_objs_BUILD_LIBJVM_link.log` -gt 12; then /bin/echo " ... (rest of output omitted)" ; fi /usr/bin/printf "* For target
Re: Zero fails to build on SPARC again, similar to JDK-8186578
Adding in build-dev. I think the build guys need to help you figure this out. Cheers, David On 10/04/2018 9:21 PM, John Paul Adrian Glaubitz wrote: On 04/10/2018 01:00 PM, John Paul Adrian Glaubitz wrote: Trying to add the necessary source there now. Hmm, I tried various ways of adding it now: diff -r a47d1e21b3f1 make/hotspot/lib/CompileGtest.gmk --- a/make/hotspot/lib/CompileGtest.gmk Thu Apr 05 10:54:53 2018 +0200 +++ b/make/hotspot/lib/CompileGtest.gmk Tue Apr 10 14:15:36 2018 +0300 @@ -52,6 +52,14 @@ $(call create-mapfile) endif +ifeq ($(call check-jvm-feature, zero), true) + JVM_CFLAGS_FEATURES += -DZERO -DCC_INTERP -DZERO_LIBARCH='"$(OPENJDK_TARGET_CPU_LEGACY_LIB)"' $(LIBFFI_CFLAGS) + JVM_LIBS_FEATURES += $(LIBFFI_LIBS) + ifeq ($(OPENJDK_TARGET_CPU), sparcv9) + BUILD_LIBJVM_EXTRA_FILES := $(TOPDIR)/src/hotspot/cpu/sparc/memset_with_concurrent_readers_sparc.cpp + endif +endif + # Disabling undef, switch, format-nonliteral and tautological-undefined-compare # warnings for clang because of test source. 
@@ -71,7 +79,8 @@ EXCLUDES := $(JVM_EXCLUDES), \ EXCLUDE_FILES := gtestLauncher.cpp, \ EXCLUDE_PATTERNS := $(JVM_EXCLUDE_PATTERNS), \ - EXTRA_FILES := $(GTEST_FRAMEWORK_SRC)/src/gtest-all.cc, \ + EXTRA_FILES := $(GTEST_FRAMEWORK_SRC)/src/gtest-all.cc \ + $(TOPDIR)/src/hotspot/cpu/sparc/memset_with_concurrent_readers_sparc.cpp, \ EXTRA_OBJECT_FILES := $(filter-out %/operator_new$(OBJ_SUFFIX), \ $(BUILD_LIBJVM_ALL_OBJS)), \ CFLAGS := $(JVM_CFLAGS) -I$(GTEST_FRAMEWORK_SRC) \ @@ -109,7 +118,8 @@ NAME := gtestLauncher, \ TYPE := EXECUTABLE, \ OUTPUT_DIR := $(JVM_OUTPUTDIR)/gtest, \ - EXTRA_FILES := $(GTEST_LAUNCHER_SRC), \ + EXTRA_FILES := $(GTEST_LAUNCHER_SRC) \ + $(TOPDIR)/src/hotspot/cpu/sparc/memset_with_concurrent_readers_sparc.cpp, \ OBJECT_DIR := $(JVM_OUTPUTDIR)/gtest/launcher-objs, \ CFLAGS := $(JVM_CFLAGS) -I$(GTEST_FRAMEWORK_SRC) \ -I$(GTEST_FRAMEWORK_SRC)/include, \ diff -r a47d1e21b3f1 make/hotspot/lib/CompileJvm.gmk --- a/make/hotspot/lib/CompileJvm.gmk Thu Apr 05 10:54:53 2018 +0200 +++ b/make/hotspot/lib/CompileJvm.gmk Tue Apr 10 14:15:36 2018 +0300 @@ -197,6 +197,14 @@ endif endif +ifeq ($(call check-jvm-feature, zero), true) + JVM_CFLAGS_FEATURES += -DZERO -DCC_INTERP -DZERO_LIBARCH='"$(OPENJDK_TARGET_CPU_LEGACY_LIB)"' $(LIBFFI_CFLAGS) + JVM_LIBS_FEATURES += $(LIBFFI_LIBS) + ifeq ($(OPENJDK_TARGET_CPU), sparcv9) + BUILD_LIBJVM_EXTRA_FILES := $(TOPDIR)/src/hotspot/cpu/sparc/memset_with_concurrent_readers_sparc.cpp + endif +endif + ifeq ($(OPENJDK_TARGET_OS), windows) ifeq ($(OPENJDK_TARGET_CPU_BITS), 64) RC_DESC := 64-Bit$(SPACE) It does build it, but the build system is unable to find the object files: glaubitz@deb4g:/srv/glaubitz/hs$ find . 
-name "memset_with_concurrent_readers_sparc.o" ./build/linux-sparcv9-normal-zero-release/hotspot/variant-zero/libjvm/gtest/objs/memset_with_concurrent_readers_sparc.o ./build/linux-sparcv9-normal-zero-release/hotspot/variant-zero/libjvm/gtest/launcher-objs/memset_with_concurrent_readers_sparc.o glaubitz@deb4g:/srv/glaubitz/hs$ Adrian
Re: RFR: 8196516: libfontmanager must be built with LDFLAGS allowing unresolved symbols
On Mon, 2018-04-09 at 15:39 +0200, Severin Gehwolf wrote: > Hi, > > Could somebody please review this build fix for libfontmanager.so. The > issue for us is that with some LDFLAGS the build breaks as described in > bug JDK-8196218. However, we cannot link to a providing library at > build-time since we don't know which one it should be: libawt_headless > or libawt_xawt. That has to happen at runtime. The proposed fix filters > out relevant linker flags when libfontmanager is being built. More > details are in the bug. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8196516 Latest webrev: http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8196516/webrev.02/ > Testing: I've run this through submit[1] and got the following results. > SwingSet2 works fine for me on F27. I'm currently running some more > tests on RHEL 7. I've finished testing on RHEL 7 by manually verifying that not both libawt_xawt and libawt_headless get loaded when running SwingSet. Could somebody tell me what this failure below is about? Thanks, Severin > - > Mach5 mach5-one-sgehwolf-JDK-8196516-20180409-1036-17877: Builds PASSED. > Testing FAILURE. > > 0 Failed Tests > > Mach5 Tasks Results Summary > > NA: 0 > UNABLE_TO_RUN: 0 > EXECUTED_WITH_FAILURE: 0 > KILLED: 0 > PASSED: 82 > FAILED: 1 > Test > > 1 Failed > > tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_2-windows-x64- > debug-31 SetupFailedException in setup...profile run-test-prebuilt' , > return value: 10 > > > Not sure what this test failure means. Could somebody at Oracle shed > some light on this? > > Thanks, > Severin
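For readers unfamiliar with the failure mode, the effect of hardening flags such as -Wl,-z,defs can be reproduced stand-alone. The following is an illustrative sketch only, with made-up file and symbol names; it is not the actual JDK build command:

```shell
# A shared library that references a symbol some other library will provide
# at runtime -- the way libfontmanager relies on libawt_xawt or
# libawt_headless. All names here are hypothetical.
cat > uses_awt.c <<'EOF'
extern void provided_at_runtime(void);  /* resolved only at load time */
void font_init(void) { provided_at_runtime(); }
EOF

# By default, unresolved symbols are allowed in a shared library:
cc -shared -fPIC uses_awt.c -o libdemo.so && echo "default link: ok"

# With -z defs (which some distro LDFLAGS add), the same link fails with
# "undefined reference" -- the symptom described in JDK-8196218:
cc -shared -fPIC -Wl,-z,defs uses_awt.c -o libdemo-strict.so 2>/dev/null \
  || echo "-z defs link: fails as expected"
```

Filtering such flags out of libfontmanager's LDFLAGS, as the webrev does, restores the default behavior of resolving those symbols at runtime.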
Re: RFR: 8196516: libfontmanager must be built with LDFLAGS allowing unresolved symbols
Hi Erik, On Mon, 2018-04-09 at 09:20 -0700, Erik Joelsson wrote: > Hello Severin, > > I'm ok with this solution for now. Thanks for the review! > Could you please reduce the indentation on line 652. In the build system > we like 4 spaces for continuation indent [1] Done. New webrev at: http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8196516/webrev.02 Could someone from awt-dev have a look at this too? Thanks! Cheers, Severin > /Erik > > [1] http://openjdk.java.net/groups/build/doc/code-conventions.html > > On 2018-04-09 06:39, Severin Gehwolf wrote: > > Hi, > > > > Could somebody please review this build fix for libfontmanager.so. The > > issue for us is that with some LDFLAGS the build breaks as described in > > bug JDK-8196218. However, we cannot link to a providing library at > > build-time since we don't know which one it should be: libawt_headless > > or libawt_xawt. That has to happen at runtime. The proposed fix filters > > out relevant linker flags when libfontmanager is being built. More > > details are in the bug. > > > > Bug: https://bugs.openjdk.java.net/browse/JDK-8196516 > > webrev: http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8196516/webrev.01/ > > > > Testing: I've run this through submit[1] and got the following results. > > SwingSet2 works fine for me on F27. I'm currently running some more > > tests on RHEL 7. > > > > - > > Mach5 mach5-one-sgehwolf-JDK-8196516-20180409-1036-17877: Builds PASSED. > > Testing FAILURE. > > > > 0 Failed Tests > > > > Mach5 Tasks Results Summary > > > > NA: 0 > > UNABLE_TO_RUN: 0 > > EXECUTED_WITH_FAILURE: 0 > > KILLED: 0 > > PASSED: 82 > > FAILED: 1 > > Test > > > > 1 Failed > > > > tier1-debug-jdk_open_test_hotspot_jtreg_tier1_compiler_2-windows-x64- > > debug-31 SetupFailedException in setup...profile run-test-prebuilt' , > > return value: 10 > > > > > > Not sure what this test failure means. Could somebody at Oracle shed > > some light on this? > > > > Thanks, > > Severin > >
RE: 8201226 missing JNIEXPORT / JNICALL at some places in function declarations/implementations - was : RE: missing JNIEXPORT / JNICALL at some places in function declarations/implementations
Hi Matthias, thank you for providing the fix. I think these issues should really get fixed regardless of whether 32-bit is supported or not. It's not good to have inconsistent JNI declarations. The fix looks good to me. I guess it should get pushed to the submission forest because it contains hotspot changes. Or did Oracle run the necessary tests? Best regards, Martin -Original Message- From: Baesken, Matthias Sent: Dienstag, 10. April 2018 12:15 To: Alexey Ivanov; Magnus Ihse Bursie Cc: build-dev ; Doerr, Martin Subject: RE: 8201226 missing JNIEXPORT / JNICALL at some places in function declarations/implementations - was : RE: missing JNIEXPORT / JNICALL at some places in function declarations/implementations Hello, I had to do another small adjustment to make jimage.hpp/cpp match. Please review: http://cr.openjdk.java.net/~mbaesken/webrevs/8201226.2/ With the latest webrev I could finally build jdk/jdk successfully on both win 32-bit and win 64-bit. Thanks again to Alexey for providing the incorporated patch. Best regards, Matthias > -Original Message- > From: Alexey Ivanov [mailto:alexey.iva...@oracle.com] > Sent: Montag, 9. April 2018 17:14 > To: Baesken, Matthias ; Magnus Ihse Bursie > > Cc: build-dev ; Doerr, Martin > > Subject: Re: 8201226 missing JNIEXPORT / JNICALL at some places in function > declarations/implementations - was : RE: missing JNIEXPORT / JNICALL at > some places in function declarations/implementations > > Hi Matthias, > > On 09/04/2018 15:38, Baesken, Matthias wrote: > > Hi Alexey, thanks for the diff provided by you, and for the > > explanations. > > > > I created a second webrev : > > > > http://cr.openjdk.java.net/~mbaesken/webrevs/8201226.1/ > > > > - it adds the diff provided by you (hope that’s fine with you) > > Yes, that's fine with me. 
> There could be only one author ;) > > > - changes 2 launchers, src/java.base/share/native/launcher/main.c > > and > src/jdk.pack/share/native/unpack200/main.cpp, where we face similar > issues after mapfile removal for exes > > I'd rather remove both JNIEXPORT and JNICALL from main(). > It wasn't exported, and it shouldn't be. > > Regards, > Alexey > > > > > > > > > Best regards, Matthias > > > > > >> -Original Message- > >> From: Alexey Ivanov [mailto:alexey.iva...@oracle.com] > >> Sent: Montag, 9. April 2018 15:52 > >> To: Magnus Ihse Bursie ; Baesken, > >> Matthias > >> Cc: build-dev ; Doerr, Martin > >> > >> Subject: Re: 8201226 missing JNIEXPORT / JNICALL at some places in > function > >> declarations/implementations - was : RE: missing JNIEXPORT / JNICALL at > >> some places in function declarations/implementations > >> > >> Hi Matthias, Magnus, > >> > >> The problem is with JNICALL added to the functions. JNICALL expands to > >> __stdcall [1] on Windows. On 64 bit, the modifier has no effect and is > >> ignored. On 32 bit, the modifier changes the way parameters are passed, and > >> the function name is decorated with an underscore and with @ + the size > >> of arguments. > >> > >> If I remove the JNICALL modifier from the exported functions, they're > >> exported with the plain name as in the source code (plus, the __cdecl [2] calling > >> convention is used.) > >> > >> zip.dll and jimage.dll are affected by this. It's because the exported > >> functions are looked up by name rather than using a header file with > >> JNIIMPORT.
See > >> > http://hg.openjdk.java.net/jdk/client/file/tip/src/hotspot/share/classfile/cla > >> ssLoader.cpp#l1155 > >> > http://hg.openjdk.java.net/jdk/client/file/tip/src/hotspot/share/classfile/cla > >> ssLoader.cpp#l1194 > >> > >> JNICALL modifier must also be removed from type declarations for > >> functions from zip.dll: > >> > http://hg.openjdk.java.net/jdk/client/file/tip/src/hotspot/share/classfile/cla > >> ssLoader.cpp#l81 > >> > >> I'm attaching the patch to zip and jimage as well as classLoader.cpp > >> which takes these changes into account. It does not replace Matthias' > >> patch but complements it. > >> > >> > >> Alternatively, if keeping JNICALL / __stdcall, it's possible to use > >> pragma comments [3][4] to export undecorated names. But this is > compiler > >> specific and may require if's for other platforms. > >> > >> > >> Regards, > >> Alexey > >> > >> [1] https://msdn.microsoft.com/en-us/library/zxk0tw93.aspx > >> [2] https://msdn.microsoft.com/en-us/library/zkwh89ks.aspx > >> [3] https://docs.microsoft.com/en-ie/cpp/build/reference/exports > >> [4] https://docs.microsoft.com/en-ie/cpp/preprocessor/comment-c-cpp > >> > >> On 09/04/2018 12:42, Magnus Ihse Bursie wrote: > >>>
RE: 8201226 missing JNIEXPORT / JNICALL at some places in function declarations/implementations - was : RE: missing JNIEXPORT / JNICALL at some places in function declarations/implementations
Hello, I had to do another small adjustment to make jimage.hpp/cpp match. Please review: http://cr.openjdk.java.net/~mbaesken/webrevs/8201226.2/ With the latest webrev I could finally build jdk/jdk successfully on both win 32-bit and win 64-bit. Thanks again to Alexey for providing the incorporated patch. Best regards, Matthias > -Original Message- > From: Alexey Ivanov [mailto:alexey.iva...@oracle.com] > Sent: Montag, 9. April 2018 17:14 > To: Baesken, Matthias; Magnus Ihse Bursie > > Cc: build-dev ; Doerr, Martin > > Subject: Re: 8201226 missing JNIEXPORT / JNICALL at some places in function > declarations/implementations - was : RE: missing JNIEXPORT / JNICALL at > some places in function declarations/implementations > > Hi Matthias, > > On 09/04/2018 15:38, Baesken, Matthias wrote: > > Hi Alexey, thanks for the diff provided by you, and for the > > explanations. > > > > I created a second webrev : > > > > http://cr.openjdk.java.net/~mbaesken/webrevs/8201226.1/ > > > > - it adds the diff provided by you (hope that’s fine with you) > > Yes, that's fine with me. > There could be only one author ;) > > > - changes 2 launchers, src/java.base/share/native/launcher/main.c > > and > src/jdk.pack/share/native/unpack200/main.cpp, where we face similar > issues after mapfile removal for exes > > I'd rather remove both JNIEXPORT and JNICALL from main(). > It wasn't exported, and it shouldn't be. > > Regards, > Alexey > > > > > > > > > Best regards, Matthias > > > > > >> -Original Message- > >> From: Alexey Ivanov [mailto:alexey.iva...@oracle.com] > >> Sent: Montag, 9. April 2018 15:52 > >> To: Magnus Ihse Bursie ; Baesken, > >> Matthias > >> Cc: build-dev ; Doerr, Martin > >> > >> Subject: Re: 8201226 missing JNIEXPORT / JNICALL at some places in > function > >> declarations/implementations - was : RE: missing JNIEXPORT / JNICALL at > >> some places in function declarations/implementations > >> > >> Hi Matthias, Magnus, > >> > >> The problem is with JNICALL added to the functions. 
JNICALL expands to > >> __stdcall [1] on Windows. On 64 bit, the modifier has no effect and is > >> ignored. On 32 bit, the modifier changes the way parameters are passed, and > >> the function name is decorated with an underscore and with @ + the size > >> of arguments. > >> > >> If I remove the JNICALL modifier from the exported functions, they're > >> exported with the plain name as in the source code (plus, the __cdecl [2] calling > >> convention is used.) > >> > >> zip.dll and jimage.dll are affected by this. It's because the exported > >> functions are looked up by name rather than using a header file with > >> JNIIMPORT. See > >> > http://hg.openjdk.java.net/jdk/client/file/tip/src/hotspot/share/classfile/classLoader.cpp#l1155 > >> > http://hg.openjdk.java.net/jdk/client/file/tip/src/hotspot/share/classfile/classLoader.cpp#l1194 > >> > >> The JNICALL modifier must also be removed from type declarations for > >> functions from zip.dll: > >> > http://hg.openjdk.java.net/jdk/client/file/tip/src/hotspot/share/classfile/classLoader.cpp#l81 > >> > >> I'm attaching the patch to zip and jimage as well as classLoader.cpp > >> which takes these changes into account. It does not replace Matthias' > >> patch but complements it. > >> > >> > >> Alternatively, if keeping JNICALL / __stdcall, it's possible to use > >> pragma comments [3][4] to export undecorated names. But this is > compiler > >> specific and may require #ifs for other platforms. > >> > >> > >> Regards, > >> Alexey > >> > >> [1] https://msdn.microsoft.com/en-us/library/zxk0tw93.aspx > >> [2] https://msdn.microsoft.com/en-us/library/zkwh89ks.aspx > >> [3] https://docs.microsoft.com/en-ie/cpp/build/reference/exports > >> [4] https://docs.microsoft.com/en-ie/cpp/preprocessor/comment-c-cpp > >> > >> On 09/04/2018 12:42, Magnus Ihse Bursie wrote: > >>> Those were added by my patch that removed the map files. > >>> > >>> /Magnus > >>> > 9 apr. 2018 kl. 
13:38 skrev Baesken, Matthias > >> : > I did not add JNICALL decorations to any libzip functions, please > see > >> my webrev : > http://cr.openjdk.java.net/~mbaesken/webrevs/8201226/ > > > , the problem here > > is the added JNICALL decoration, which affects only win32 (and > >> incorrectly, > > that is). > so I wonder which added JNICALL decoration you are referring to. > > Best regards, Matthias > > > > -Original Message- > > From: Magnus Ihse Bursie [mailto:magnus.ihse.bur...@oracle.com] > > Sent: Montag, 9. April 2018 13:29 > > To: Baesken, Matthias > > Cc: Alexey Ivanov
Re: Supported platforms
On 10/04/2018 7:17 PM, Aleksei Voitylov wrote: David, see inline: On 10/04/2018 11:00, David Holmes wrote: Hi Aleksei, This is all news to me. Good news, but unexpected. As far as I was aware the 32-bit ARM port was dying a slow death and would eventually get removed. 64-bit ARM is of course very much alive and well under the Aarch64 porters - though I'm unclear if you are using that code for ARMv8 or the Oracle contributed code that used to be closed? You are welcome :) We are doing everything possible to keep it running, so I don't see any reason within OpenJDK to it being removed. Regarding ARMv8 port, we are working with Cavium, Redhat, and Linaro on supporting the AARCH64 port. I'm also unclear whether you are pushing changes back up to OpenJDK for these platforms, or maintaining them locally? I haven't noticed anything (other than build tweaks) and am curious about the release of a Minimal VM for JDK 10 given the Minimal VM effectively died along with the stand-alone Client VM. We push everything upstream. I don't recall seeing anything related to 32-bit ARM, but perhaps it's all in areas I don't see (like compiler and gc). I'm not sure why you believe Minimal VM and Client VM died in OpenJDK 10. From what I remember, there was some decision related to Client VM for Oracle binaries, but support for C1 and Minimal VM was not removed from OpenJDK codebase. This is what I get with BellSoft Liberica binaries built from OpenJDK on Raspberry Pi: Sorry I was mis-remembering. Of course C1 and Minimal still exist in the 32-bit code. Though I would be concerned that with a focus on 64-bit it will be easy for engineers to overlook things that should be inside one of the INCLUDE_XXX guards. (Particularly as we don't do any 32-bit builds and for the most part don't even have the capability to perform them.) 
Cheers, David pi@rpi-3:~ $ java -version openjdk version "10-BellSoft" 2018-03-20 OpenJDK Runtime Environment (build 10-BellSoft+0) OpenJDK Server VM (build 10-BellSoft+0, mixed mode) pi@rpi-3:~ $ java -minimal -version openjdk version "10-BellSoft" 2018-03-20 OpenJDK Runtime Environment (build 10-BellSoft+0) OpenJDK Minimal VM (build 10-BellSoft+0, mixed mode) pi@rpi-3:~ $ java -client -version openjdk version "10-BellSoft" 2018-03-20 OpenJDK Runtime Environment (build 10-BellSoft+0) OpenJDK Client VM (build 10-BellSoft+0, mixed mode) pi@rpi-3:~ $ java -minimal -XX:+PrintFlagsFinal HW | grep C1 bool C1OptimizeVirtualCallProfiling = true {C1 product} {default} bool C1ProfileBranches = true {C1 product} {default} bool C1ProfileCalls = true {C1 product} {default} bool C1ProfileCheckcasts = true {C1 product} {default} bool C1ProfileInlinedCalls = true {C1 product} {default} bool C1ProfileVirtualCalls = true {C1 product} {default} bool C1UpdateMethodData = false {C1 product} {default} bool InlineSynchronizedMethods = true {C1 product} {default} bool LIRFillDelaySlots = false {C1 pd product} {default} bool TimeLinearScan = false {C1 product} {default} bool UseLoopInvariantCodeMotion = true {C1 product} {default} intx ValueMapInitialSize = 11 {C1 product} {default} intx ValueMapMaxLoopSize = 8 {C1 product} {default} Minimal VM and Client VM include C1, and Server VM includes C1 and C2. All (Client VM, Server VM, Minimal VM) were tested and work as expected. For JDK11 you will need to do some work for Condy (if not already done) as well as JFR and Nest-based Access Control (which you can see in the nestmates branch of the Valhalla repo), as you mention below. Not sure what else may be needed. There's been a lot of code refactoring and include file changes that have impact on platform specific code as well. Thanks for the heads-up! 
-Aleksei Cheers, David On 10/04/2018 5:23 PM, Aleksei Voitylov wrote: Hi David, Speaking about the arm/ port, BellSoft has been releasing JCK-verified binaries (as provided under the OpenJDK license) from the arm/ port for the Raspberry Pi for JDK 9 [1] for a while and recently released one for JDK 10 [2], including OpenJFX and Minimal VM support. On Raspberry Pi 2 (ARMv7) and Raspberry Pi 3 (ARMv8 chip running Raspbian) the binaries produced from this port are passing
Re: RFR: 8201357: ALSA_CFLAGS is needed; was dropped in JDK-8071469
> 10 apr. 2018 kl. 07:46 skrev Martin Buchholz: > > Hi Magnus, please review: > > 8201357: ALSA_CFLAGS is needed; was dropped in JDK-8071469 Oops. Sorry. :( > http://cr.openjdk.java.net/~martin/webrevs/jdk/ALSA_CFLAGS/ Looks good to me. Thank you for fixing this! /Magnus > https://bugs.openjdk.java.net/browse/JDK-8201357 >
Re: Supported platforms
Hi Aleksei, This is all news to me. Good news, but unexpected. As far as I was aware the 32-bit ARM port was dying a slow death and would eventually get removed. 64-bit ARM is of course very much alive and well under the Aarch64 porters - though I'm unclear if you are using that code for ARMv8 or the Oracle contributed code that used to be closed? I'm also unclear whether you are pushing changes back up to OpenJDK for these platforms, or maintaining them locally? I haven't noticed anything (other than build tweaks) and am curious about the release of a Minimal VM for JDK 10 given the Minimal VM effectively died along with the stand-alone Client VM. For JDK11 you will need to do some work for Condy (if not already done) as well as JFR and Nest-based Access Control (which you can see in the nestmates branch of the Valhalla repo), as you mention below. Not sure what else may be needed. There's been a lot of code refactoring and include file changes that have impact on platform specific code as well. Cheers, David On 10/04/2018 5:23 PM, Aleksei Voitylov wrote: Hi David, Speaking about the arm/ port, BellSoft has been releasing JCK-verified binaries (as provided under the OpenJDK license) from the arm/ port for the Raspberry Pi for JDK 9 [1] for a while and recently released one for JDK 10 [2], including OpenJFX and Minimal VM support. On Raspberry Pi 2 (ARMv7) and Raspberry Pi 3 (ARMv8 chip running Raspbian) the binaries produced from this port are passing all the required testing, including the new features recently open-sourced by Oracle (such as AppCDS). As far as JDK 11 is concerned, we are keeping track of the changes, recently fixed a couple of build issues that slipped in [3, 4], are working on Minimal Value Types support and, from what I can tell, will need to look into JFR and Nest-Based Access Control. Please let us know if we missed anything and we need to prepare for some other new features for porting. 
The intent is to keep the arm/ port in good shape and provide well-tested binaries for the Raspberry Pi. I believed Oracle was aware about BellSoft producing binaries from this port [5], [6]. Based on twitter, it seems like at least some engineers at Redhat and SAP are aware about it. Let me know if there is anything else we need to do to spread the word about it with Oracle engineering. For now, Boris (cced) is the engineer at BellSoft working on supporting the arm/ port for the Raspberry Pi. Other than that, I really wonder what "stepping up to take ownership of a port" means when it's upstream, and there is some procedure we need to follow. Thanks, -Aleksei [1] https://bell-sw.com/java-for-raspberry-pi-9.0.4.html [2] https://bell-sw.com/java-for-raspberry-pi.html [3] https://bugs.openjdk.java.net/browse/JDK-8200628 [4] https://bugs.openjdk.java.net/browse/JDK-8198949 [5] https://twitter.com/java/status/981239157874941955 [6] https://twitter.com/DonaldOJDK/status/981874485979746304 We are in a situation where previously "supported" platforms (by Oracle) are no longer supported as, AFAIK, no one has stepped up to take ownership of said platforms - which is a requirement for getting a new port accepted into mainline. Without such ownership the code may not only bit-rot, it may in time be stripped out completely. Any interested parties would then need to look at (re)forming a port project for that platform to keep it going in OpenJDK (or of course they are free to take it elsewhere). 
Cheers, David On 09/04/2018 18:35, White, Derek wrote: Hi Magnus, -Original Message- Date: Mon, 9 Apr 2018 09:55:09 +0200 From: Magnus Ihse Bursie To: Simon Nash, b...@juanantonio.info Cc: build-dev@openjdk.java.net, hotspot-dev developers Subject: Re: Supported platforms Message-ID: <4b1f262d-b9d2-6844-e453-dd53b42b2...@oracle.com> Content-Type: text/plain; charset=utf-8; format=flowed Simon, On 2018-04-08 16:30, Simon Nash wrote: On 05/04/2018 02:26, b...@juanantonio.info wrote: Many thanks for the link about the supported platforms: http://www.oracle.com/technetwork/java/javase/documentation/jdk10certconfig-4417031.html This appears to be a list of the platforms that are supported (certified) by Oracle. Where can I find the list of platforms that are supported by OpenJDK? For example, what about the following platforms that don't appear on the Oracle list: Windows x86, Linux x86, aarch32 (ARMv7 32-bit), aarch64 (ARMv8 64-bit). Are all these supported for OpenJDK 9, 10 and 11? There is actually no such thing as a "supported OpenJDK platform". While I hope things may change in the future, OpenJDK as an organization does not publicize any list of "supported" platforms. Oracle publishes a list of platforms they support, and I presume that Red Hat and SAP and others do the same, but the OpenJDK project itself does not. With that said, platforms which were