Re: RFR: JDK-8247589: Implementation of Alpine Linux/x64 Port [v6]
On Fri, 9 Oct 2020 06:02:04 GMT, David Holmes wrote: >> Aleksei Voitylov has updated the pull request with a new target base due to >> a merge or a rebase. The pull request now >> contains three commits: >> - Merge branch 'master' into JDK-8247589 >> - JDK-8247589: Implementation of Alpine Linux/x64 Port >> - JDK-8247589: Implementation of Alpine Linux/x64 Port > > Marked as reviewed by dholmes (Reviewer). Thanks everyone! - PR: https://git.openjdk.java.net/jdk/pull/49
Integrated: 8247591: Document Alpine Linux build steps in OpenJDK build guide
On Mon, 5 Oct 2020 18:13:49 GMT, Aleksei Voitylov wrote: > Please review the build guide update for Alpine Linux. > > building.html was generated by "make update-build-docs". However, this > command produced a slightly different > building.html, with some other sections modified as well. This is probably > due to another version of the tooling > installed. I have manually removed the modifications to other, not relevant > sections of the building.html change > introduced by the tool, and kept only the changes relevant to the md file > change. This pull request has now been integrated. Changeset: 508c8a95 Author: Aleksei Voitylov Committer: Alexander Scherbatiy URL: https://git.openjdk.java.net/jdk/commit/508c8a95 Stats: 51 lines in 2 files changed: 51 ins; 0 del; 0 mod 8247591: Document Alpine Linux build steps in OpenJDK build guide Co-authored-by: Aleksei Voitylov Reviewed-by: erikj - PR: https://git.openjdk.java.net/jdk/pull/512
Integrated: JDK-8247589: Implementation of Alpine Linux/x64 Port
On Mon, 7 Sep 2020 11:23:28 GMT, Aleksei Voitylov wrote: > continuing the review thread from here > https://mail.openjdk.java.net/pipermail/core-libs-dev/2020-September/068546.html > >> The downside of using JNI in these tests is that it complicates the >> setup a bit for those that run jtreg directly and/or just build the JDK >> and not the test libraries. You could reduce this burden a bit by >> limiting the load library/isMusl check to Linux only, meaning isMusl >> would not be called on other platforms. >> >> The alternative you suggest above might indeed be better. I assume you >> don't mean splitting the tests but rather just adding a second @test >> description so that the vm.musl case runs the test with a system >> property that allows the test to know the expected load library path behavior. > > I have updated the PR to split the two tests into multiple @test s. > >> The updated comment in java_md.c in this looks good. A minor comment on >> Platform.isBusybox is Files.isSymbolicLink returning true implies that >> the link exists so no need to check for exists too. Also the >> if-then-else style for the new class in ProcessBuilder/Basic.java is >> inconsistent with the rest of the test so it stands out. > > Thank you, these changes are done in the updated PR. > >> Given the repo transition this weekend then I assume you'll create a PR >> for the final review at least. Also I see JEP 386 hasn't been targeted >> yet but I assume Boris, as owner, will propose-to-target and wait for it >> to be targeted before it is integrated. > > Yes. How can this be best accomplished with the new git workflow? > - we can continue the review process till the end and I will request the > integration to happen only after the JEP is > targeted. I guess this step is now done by typing "slash integrate" in a > comment. > - we can pause the review process now until the JEP is targeted. 
> > In the first case I'm kindly asking the Reviewers who already chimed in on > that to re-confirm the review here. This pull request has now been integrated. Changeset: 63009f90 Author: Aleksei Voitylov Committer: Alexander Scherbatiy URL: https://git.openjdk.java.net/jdk/commit/63009f90 Stats: 403 lines in 30 files changed: 348 ins; 17 del; 38 mod 8247589: Implementation of Alpine Linux/x64 Port Co-authored-by: Mikael Vidstedt Co-authored-by: Alexander Scherbatiy Co-authored-by: Axel Siebenborn Co-authored-by: Aleksei Voitylov Reviewed-by: alanb, erikj, dholmes - PR: https://git.openjdk.java.net/jdk/pull/49
Re: RFR: JDK-8247589: Implementation of Alpine Linux/x64 Port [v2]
On Tue, 6 Oct 2020 02:00:06 GMT, David Holmes wrote: >> I added the contributors that could be found in the portola project commits. >> If anyone knows some other contributors I >> missed, I'll be happy to stand corrected. > > @voitylov For future reference please don't force-push commits on open PRs as > it breaks the commit history. I can no > longer just look at the two most recent commits and see what they added > relative to what I had previously reviewed. > Thanks. @dholmes-ora yes, sorry about that. I updated the branch to pull the recent changes to enable pre-submit tests and ensure everything was fine before integration. Though the branch looked good, it confused the pull request, so I had to force-push it back to the original state. - PR: https://git.openjdk.java.net/jdk/pull/49
Re: RFR: JDK-8247589: Implementation of Alpine Linux/x64 Port [v2]
On Thu, 8 Oct 2020 10:58:56 GMT, Aleksei Voitylov wrote: >> @voitylov For future reference please don't force-push commits on open PRs >> as it breaks the commit history. I can no >> longer just look at the two most recent commits and see what they added >> relative to what I had previously reviewed. >> Thanks. > > @dholmes-ora yes, sorry about that. I updated the branch to pull the recent > changes to enable pre-submit tests and > ensure everything was fine before integration. Though the branch looked > good, it confused the pull request, so I had > to force-push it back to the original state. @iignatev I resolved the conflict in whitebox.cpp and fixed a minor style nit on the way. Could you take a look? - PR: https://git.openjdk.java.net/jdk/pull/49
Re: RFR: JDK-8247589: Implementation of Alpine Linux/x64 Port [v6]
Aleksei Voitylov has updated the pull request with a new target base due to a merge or a rebase. The pull request now contains three commits: - Merge branch 'master' into JDK-8247589 - JDK-8247589: Implementation of Alpine Linux/x64 Port - JDK-8247589: Implementation of Alpine Linux/x64 Port - Changes: https://git.openjdk.java.net/jdk/pull/49/files Webrev: https://webrevs.openjdk.java.net/?repo=jdk&pr=49&range=05 Stats: 403 lines in 30 files changed: 348 ins; 17 del; 38 mod Patch: https://git.openjdk.java.net/jdk/pull/49.diff Fetch: git fetch https://git.openjdk.java.net/jdk pull/49/head:pull/49 PR: https://git.openjdk.java.net/jdk/pull/49
RFR: 8247591: Document Alpine Linux build steps in OpenJDK build guide
Please review the build guide update for Alpine Linux. building.html was generated by "make update-build-docs". However, this command produced a slightly different building.html, with some other sections modified as well. This is probably due to another version of the tooling installed. I have manually removed the modifications to other, unrelated sections of the building.html change introduced by the tool, and kept only the changes relevant to the md file change. - Commit messages: - JDK-8247591: Document Alpine Linux build steps in OpenJDK build guide Changes: https://git.openjdk.java.net/jdk/pull/512/files Webrev: https://webrevs.openjdk.java.net/?repo=jdk&pr=512&range=00 Issue: https://bugs.openjdk.java.net/browse/JDK-8247591 Stats: 51 lines in 2 files changed: 51 ins; 0 del; 0 mod Patch: https://git.openjdk.java.net/jdk/pull/512.diff Fetch: git fetch https://git.openjdk.java.net/jdk pull/512/head:pull/512 PR: https://git.openjdk.java.net/jdk/pull/512
Re: RFR: JDK-8247589: Implementation of Alpine Linux/x64 Port [v5]
Aleksei Voitylov has refreshed the contents of this pull request, and previous commits have been removed. The incremental views will show differences compared to the previous content of the PR. - Changes: - all: https://git.openjdk.java.net/jdk/pull/49/files - new: https://git.openjdk.java.net/jdk/pull/49/files/5feda5ff..b7ffed87 Webrevs: - full: https://webrevs.openjdk.java.net/?repo=jdk&pr=49&range=04 - incr: https://webrevs.openjdk.java.net/?repo=jdk&pr=49&range=03-04 Stats: 1 line in 1 file changed: 0 ins; 0 del; 1 mod Patch: https://git.openjdk.java.net/jdk/pull/49.diff Fetch: git fetch https://git.openjdk.java.net/jdk pull/49/head:pull/49 PR: https://git.openjdk.java.net/jdk/pull/49
Re: RFR: JDK-8247589: Implementation of Alpine Linux/x64 Port [v4]
Aleksei Voitylov has updated the pull request incrementally with one additional commit since the last revision: test2 - Changes: - all: https://git.openjdk.java.net/jdk/pull/49/files - new: https://git.openjdk.java.net/jdk/pull/49/files/705b8555..5feda5ff Webrevs: - full: https://webrevs.openjdk.java.net/?repo=jdk&pr=49&range=03 - incr: https://webrevs.openjdk.java.net/?repo=jdk&pr=49&range=02-03 Stats: 1 line in 1 file changed: 0 ins; 0 del; 1 mod Patch: https://git.openjdk.java.net/jdk/pull/49.diff Fetch: git fetch https://git.openjdk.java.net/jdk pull/49/head:pull/49 PR: https://git.openjdk.java.net/jdk/pull/49
Re: RFR: JDK-8247589: Implementation of Alpine Linux/x64 Port [v3]
Aleksei Voitylov has updated the pull request with a new target base due to a merge or a rebase. The incremental webrev excludes the unrelated changes brought in by the merge/rebase. The pull request contains five additional commits since the last revision: - Merge branch 'JDK-8247589' of https://github.com/voitylov/jdk into JDK-8247589 - JDK-8247589: Implementation of Alpine Linux/x64 Port - JDK-8247589: Implementation of Alpine Linux/x64 Port - JDK-8247589: Implementation of Alpine Linux/x64 Port - JDK-8247589: Implementation of Alpine Linux/x64 Port - Changes: - all: https://git.openjdk.java.net/jdk/pull/49/files - new: https://git.openjdk.java.net/jdk/pull/49/files/d5994cb5..705b8555 Webrevs: - full: https://webrevs.openjdk.java.net/?repo=jdk&pr=49&range=02 - incr: https://webrevs.openjdk.java.net/?repo=jdk&pr=49&range=01-02 Stats: 73505 lines in 3006 files changed: 26172 ins; 37386 del; 9947 mod Patch: https://git.openjdk.java.net/jdk/pull/49.diff Fetch: git fetch https://git.openjdk.java.net/jdk pull/49/head:pull/49 PR: https://git.openjdk.java.net/jdk/pull/49
Re: RFR: JDK-8247589: Implementation of Alpine Linux/x64 Port [v2]
On Mon, 14 Sep 2020 06:30:50 GMT, Aleksei Voitylov wrote: >> Marked as reviewed by dholmes (Reviewer). > > thank you Alan, Erik, and David! When the JEP becomes Targeted, I'll use this > PR to integrate the changes. I added the contributors that could be found in the portola project commits. If anyone knows some other contributors I missed, I'll be happy to stand corrected. - PR: https://git.openjdk.java.net/jdk/pull/49
Re: RFR: JDK-8247589: Implementation of Alpine Linux/x64 Port [v2]
On Mon, 14 Sep 2020 04:18:39 GMT, David Holmes wrote: >> Aleksei Voitylov has updated the pull request incrementally with one >> additional commit since the last revision: >> >> JDK-8247589: Implementation of Alpine Linux/x64 Port > > Marked as reviewed by dholmes (Reviewer). thank you Alan, Erik, and David! When the JEP becomes Targeted, I'll use this PR to integrate the changes. - PR: https://git.openjdk.java.net/jdk/pull/49
Re: RFR: JDK-8247589: Implementation of Alpine Linux/x64 Port [v2]
On Tue, 8 Sep 2020 23:44:58 GMT, David Holmes wrote: >> Aleksei Voitylov has updated the pull request incrementally with one >> additional commit since the last revision: >> >> JDK-8247589: Implementation of Alpine Linux/x64 Port > > make/autoconf/platform.m4 line 536: > >> 534: AC_SUBST(HOTSPOT_$1_CPU_DEFINE) >> 535: >> 536: if test "x$OPENJDK_$1_LIBC" = "xmusl"; then > > I'm not clear why we only check for musl when setting the HOTSPOT_$1_LIBC > variable This check is removed in the updated version. As a consequence, the LIBC variable is added to the release file for all platforms, which is probably a good thing. > src/hotspot/os/linux/os_linux.cpp line 624: > >> 622: // confstr() from musl libc returns EINVAL for >> 623: // _CS_GNU_LIBC_VERSION and _CS_GNU_LIBPTHREAD_VERSION >> 624: os::Linux::set_libc_version("unknown"); > > This should be "musl - unknown" as we don't know an exact version but we do > know that it is musl. Right, this should be more consistent with glibc, which here returns name and version. Updated as suggested. > src/hotspot/os/linux/os_linux.cpp line 625: > >> 623: // _CS_GNU_LIBC_VERSION and _CS_GNU_LIBPTHREAD_VERSION >> 624: os::Linux::set_libc_version("unknown"); >> 625: os::Linux::set_libpthread_version("unknown"); > > This should be "musl - unknown" as we don't know an exact version but we do > know that it is musl. The pthread version is also updated to "musl - unknown". Reason being, pthread functionality for musl is built into the library. > src/hotspot/share/runtime/abstract_vm_version.cpp line 263: > >> 261: #define LIBC_STR "-" XSTR(LIBC) >> 262: #else >> 263: #define LIBC_STR "" > > Again I'm not clear why we do nothing in the non-musl case? Shouldn't we be > reporting glibc or musl? Unlike the case above, I think it's best to keep it as is. I'd expect there to be a bunch of scripts in the wild which parse it and may break when facing a triplet for existing platforms. 
> src/jdk.hotspot.agent/linux/native/libsaproc/ps_proc.c line 284: > >> 282: // To improve portability across platforms and avoid conflicts >> 283: // between GNU and XSI versions of strerror_r, plain strerror is >> used. >> 284: // It's safe because this code is not used in any multithreaded >> environment. > > I still question this assertion. The issue is not that the current code path > that leads to strerror use may be executed > concurrently but that any other strerror use could be concurrent with this > one. I would consider this a "must fix" if > not for the fact we already use strerror in the code and so this doesn't > really change the exposure to the problem. You are right! The updated version #ifdefs the XSI or GNU versions of strerror_r in this place. Note to self: file a bug to address the usage of strerror in other places, at least for HotSpot. > test/hotspot/jtreg/runtime/StackGuardPages/exeinvoke.c line 282: > >> 280: >> 281: pthread_attr_init(_attr); >> 282: pthread_attr_setstacksize(_attr, stack_size); > > Just a comment in response to the explanation as to why this change is > needed. If the default thread stacksize under > musl is insufficient to successfully attach such a thread to the VM then this > will cause problems for applications that > embed the VM directly (or which otherwise directly attach existing threads). This fix https://git.musl-libc.org/cgit/musl/commit/src/aio/aio.c?id=1a6d6f131bd60ec2a858b34100049f0c042089f2 addresses the problem for recent versions of musl. The test passes on a recent Alpine Linux 3.11.6 (musl 1.1.24) and fails on Alpine Linux 3.8.2 (musl 1.1.19) without this test fix. There are still older versions of the library in the wild, hence the test fix. The mitigation for such users would be a distro upgrade. > test/hotspot/jtreg/runtime/TLS/exestack-tls.c line 60: > >> 58: } >> 59: >> 60: #if defined(__GLIBC) > > Why do we use this form here but at line 30 we have: > #ifdef __GLIBC__ > ? Fixed to be consistent. 
- PR: https://git.openjdk.java.net/jdk/pull/49
Re: RFR: JDK-8247589: Implementation of Alpine Linux/x64 Port [v2]
Aleksei Voitylov has updated the pull request incrementally with one additional commit since the last revision: JDK-8247589: Implementation of Alpine Linux/x64 Port - Changes: - all: https://git.openjdk.java.net/jdk/pull/49/files - new: https://git.openjdk.java.net/jdk/pull/49/files/f61f546a..d5994cb5 Webrevs: - full: https://webrevs.openjdk.java.net/?repo=jdk&pr=49&range=01 - incr: https://webrevs.openjdk.java.net/?repo=jdk&pr=49&range=00-01 Stats: 19 lines in 4 files changed: 7 ins; 4 del; 8 mod Patch: https://git.openjdk.java.net/jdk/pull/49.diff Fetch: git fetch https://git.openjdk.java.net/jdk pull/49/head:pull/49 PR: https://git.openjdk.java.net/jdk/pull/49
RFR: JDK-8247589: Implementation of Alpine Linux/x64 Port
continuing the review thread from here https://mail.openjdk.java.net/pipermail/core-libs-dev/2020-September/068546.html > The downside of using JNI in these tests is that it complicates the > setup a bit for those that run jtreg directly and/or just build the JDK > and not the test libraries. You could reduce this burden a bit by > limiting the load library/isMusl check to Linux only, meaning isMusl > would not be called on other platforms. > > The alternative you suggest above might indeed be better. I assume you > don't mean splitting the tests but rather just adding a second @test > description so that the vm.musl case runs the test with a system > property that allows the test to know the expected load library path behavior. I have updated the PR to split the two tests into multiple @test s. > The updated comment in java_md.c in this looks good. A minor comment on > Platform.isBusybox is Files.isSymbolicLink returning true implies that > the link exists so no need to check for exists too. Also the > if-then-else style for the new class in ProcessBuilder/Basic.java is > inconsistent with the rest of the test so it stands out. Thank you, these changes are done in the updated PR. > Given the repo transition this weekend then I assume you'll create a PR > for the final review at least. Also I see JEP 386 hasn't been targeted > yet but I assume Boris, as owner, will propose-to-target and wait for it > to be targeted before it is integrated. Yes. How can this be best accomplished with the new git workflow? - we can continue the review process till the end and I will request the integration to happen only after the JEP is targeted. I guess this step is now done by typing "slash integrate" in a comment. - we can pause the review process now until the JEP is targeted. In the first case I'm kindly asking the Reviewers who already chimed in on that to re-confirm the review here. 
- Commit messages: - JDK-8247589: Implementation of Alpine Linux/x64 Port Changes: https://git.openjdk.java.net/jdk/pull/49/files Webrev: https://webrevs.openjdk.java.net/?repo=jdk&pr=49&range=00 Issue: https://bugs.openjdk.java.net/browse/JDK-8247589 Stats: 403 lines in 30 files changed: 346 ins; 17 del; 40 mod Patch: https://git.openjdk.java.net/jdk/pull/49.diff Fetch: git fetch https://git.openjdk.java.net/jdk pull/49/head:pull/49 PR: https://git.openjdk.java.net/jdk/pull/49
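The Platform.isBusybox check discussed in the thread can be sketched roughly as follows (a hypothetical standalone version; the real helper lives in jdk.test.lib.Platform and may differ). It applies the review note that Files.isSymbolicLink(p) returning true already implies the link exists, so no separate exists() check is needed.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class BusyboxCheck {

    // A BusyBox install typically makes each tool (grep, sh, ...) a
    // symlink to the single busybox binary, so checking the symlink
    // target is enough.
    static boolean isBusybox(Path tool) throws IOException {
        return Files.isSymbolicLink(tool)
            && Files.readSymbolicLink(tool).toString().contains("busybox");
    }

    public static void main(String[] args) throws IOException {
        // Demonstrate with a temporary fake BusyBox layout.
        Path dir = Files.createTempDirectory("demo");
        Path busybox = Files.createFile(dir.resolve("busybox"));
        Path grep = Files.createSymbolicLink(dir.resolve("grep"), busybox);
        Path plain = Files.createFile(dir.resolve("plain"));

        System.out.println("grep -> " + isBusybox(grep));   // true
        System.out.println("plain -> " + isBusybox(plain)); // false
    }
}
```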
Re: RFC: 8229469 JEP 386: Alpine Linux/x64 Port
Alan, here is a simpler version which, for the two tests in question, refers to a local helper class with a native method: http://cr.openjdk.java.net/~avoitylov/webrev.8247589.07/ If there is a preference to avoid JNI, there is yet another alternative: split the two launcher tests in question into tests with @requires vm.musl | os.family == "aix" and with @requires !vm.musl & os.family != "aix". -Aleksei On 04/09/2020 15:51, Aleksei Voitylov wrote: > Alan, > > in this case I'm leaning towards a new class jdk.test.lib.LibcHelper > with a native implementation which calls (*env)->NewStringUTF(env, > libc), which will be used by the tests which require it. Otherwise we'd > have to specify nativepath for all tests which use > jdk.test.lib.Platform. What do you think? > > -Aleksei > > On 04/09/2020 12:08, Alan Bateman wrote: >> On 04/09/2020 10:00, Alan Bateman wrote: >>> On 04/09/2020 08:55, Aleksei Voitylov wrote: >>>> : >>>> Tests tools/launcher/Test7029048.java and >>>> tools/launcher/ExecutionEnvironment.java need to change behavior on >>>> systems that alter LD_LIBRARY_PATH (AIX and musl). The alternative we >>>> first considered was to detect the libc of the OS with ldd in the test >>>> and avoid WhiteBox dependency. This approach has a significant >>>> drawback: >>>> some distributions bundle glibc and OpenJDK and launch it on a >>>> musl-based Linux OS, so we really need to detect the libc the JDK was >>>> compiled for, not the default libc present in the OS. To avoid such >>>> problems altogether it was decided to detect the libc flavor the JDK >>>> was compiled for through WhiteBox. >>>> >>> I really dislike the changes to the launcher tests and I don't think >>> the WB API is the right place for this. I think we need to find >>> something cleaner and maybe expose it as a method on >>> jdk.test.lib.Platform. 
>>> >> or alternatively, a new class in jdk.test.lib that provides >> methods to introspect the runtime, whatever is simplest and doesn't >> depend on the WB API as it's independent of HotSpot. >> >> -Alan
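The @requires-based split suggested above could look roughly like this as jtreg test descriptions (illustrative tags only; the actual descriptions in the patch may differ, and the test class name is taken from the launcher test mentioned in the thread):

```java
/*
 * @test
 * @requires vm.musl | os.family == "aix"
 * @summary variant for platforms that alter LD_LIBRARY_PATH (musl, AIX)
 * @run main ExecutionEnvironment
 */

/*
 * @test
 * @requires !vm.musl & os.family != "aix"
 * @summary variant for all other platforms
 * @run main ExecutionEnvironment
 */
```

With this split, jtreg selects exactly one of the two descriptions per platform, so the test body does not need to probe the libc at runtime.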
Re: RFC: 8229469 JEP 386: Alpine Linux/x64 Port
Alan, in this case I'm leaning towards a new class jdk.test.lib.LibcHelper with a native implementation which calls (*env)->NewStringUTF(env, libc), which will be used by the tests which require it. Otherwise we'd have to specify nativepath for all tests which use jdk.test.lib.Platform. What do you think? -Aleksei On 04/09/2020 12:08, Alan Bateman wrote: > On 04/09/2020 10:00, Alan Bateman wrote: >> On 04/09/2020 08:55, Aleksei Voitylov wrote: >>> : >>> Tests tools/launcher/Test7029048.java and >>> tools/launcher/ExecutionEnvironment.java need to change behavior on >>> systems that alter LD_LIBRARY_PATH (AIX and musl). The alternative we >>> first considered was to detect the libc of the OS with ldd in the test >>> and avoid WhiteBox dependency. This approach has a significant >>> drawback: >>> some distributions bundle glibc and OpenJDK and launch it on a >>> musl-based Linux OS, so we really need to detect the libc the JDK was >>> compiled for, not the default libc present in the OS. To avoid such >>> problems altogether it was decided to detect the libc flavor the JDK >>> was compiled for through WhiteBox. >>> >> I really dislike the changes to the launcher tests and I don't think >> the WB API is the right place for this. I think we need to find >> something cleaner and maybe expose it as a method on >> jdk.test.lib.Platform. >> > or alternatively, a new class in jdk.test.lib that provides > methods to introspect the runtime, whatever is simplest and doesn't > depend on the WB API as it's independent of HotSpot. > > -Alan
Re: RFC: 8229469 JEP 386: Alpine Linux/x64 Port
Hi Erik, Magnus, Mikael, Nick, David, and Alan, thank you for looking into it. I grouped together all the comments in one response to avoid polluting the mailing lists. Here is an updated version of the patch which addresses the comments: http://cr.openjdk.java.net/~avoitylov/webrev.8247589.06/ Please also find inline answers to the comments by Mikael, Nick, Alan and David, in order. Testing is in progress. [Mikael] > WB_GetLibcName now returns “glibc” by default (unless MUSL_LIBC is > defined) which means it may return “glibc” on platforms where the > default library really isn’t GNU libc. It will work just fine for what > it’s currently being used for (isMusl), but is potentially a bit > misleading. I agree. In the updated version WB_GetLibcName returns the LIBC the JDK is built for. [Nick] > I see the JEP only mentions support for x86_64, so maybe this is out of > scope, but with this trivial change to os_linux_aarch64.cpp your patch > works on Alpine/AArch64 too: I planned to add additional platforms in a follow-up enhancement but updated the webrev to include AArch64 now. Testing is in progress, and, if the same level of quality is reached as on x64, I will slightly update the JEP to make it clear AArch64 is also known to work. I hope this will be fine. [Alan] > .02, .03, .04 seem to be older and seem to include the changes to > JDK-8252248 that Alexander Scherbatiy had out for review separately so > I'll ignore those and just look at .01, I hope that is right. This is correct. > I agree with David that the comment in java_md.c is excessive and > unnecessary. Fixed in the updated version. > > The WB API is mostly for the hotspot tests and looks a bit strange to > be using it in the launcher tests to check for musl. Have you looked at > alternatives? The changes to the Process test to check for busybox are > okay. 
The WhiteBox API is used in JDK tests for JFR [1], CollectionUsageThreshold.java [2], TimSortStackSize2.java [3], so we are not adding an extra dependency. Tests tools/launcher/Test7029048.java and tools/launcher/ExecutionEnvironment.java need to change behavior on systems that alter LD_LIBRARY_PATH (AIX and musl). The alternative we first considered was to detect the libc of the OS with ldd in the test and avoid a WhiteBox dependency. This approach has a significant drawback: some distributions bundle glibc and OpenJDK and launch it on a musl-based Linux OS, so we really need to detect the libc the JDK was compiled for, not the default libc present in the OS. To avoid such problems altogether it was decided to detect the libc flavor the JDK was compiled for through WhiteBox.

[David]
> src/hotspot/os/linux/os_linux.cpp
>
> 624 os::Linux::set_libc_version("unknown");
> 625 os::Linux::set_libpthread_version("unknown");
>
> Surely we can do better than "unknown"? Surely different releases of
> Alpine linux must identify different versions of the libraries?

The author of musl indicated it currently does not provide a way to obtain the library version programmatically [4].

> The PaX related error message belongs in release notes not the source
> code - the error message should be much more succinct:
>
> "Failed to mark memory page as executable - check if grsecurity/PaX is
> enabled"

Fixed.

> src/hotspot/os/linux/os_linux.hpp
>
> numa_node_to_cpus is too big to be in the header file now.

Fixed.

> src/hotspot/share/prims/whitebox.cpp
>
> I would expect this to use the version string set in os_linux.cpp.

In the updated version of the patch the libc variant now comes from the build system. The libc version would probably be unused at this point in WhiteBox, also see the answer to your first comment.

> src/hotspot/share/runtime/abstract_vm_version.cpp
>
> Ideally LIBC_STR would come from the build system - as we do for
> FLOAT_ARCH. See flags.m4.
Yes, thank you for the suggestion. It now comes from the build system, as above.

> src/java.base/unix/native/libjli/java_md.c
>
> The comment is way too long - see the AIX case below it. :)

OK, trimmed it down :)

> src/jdk.hotspot.agent/linux/native/libsaproc/ps_proc.c
>
> 282 // To improve portability across platforms and avoid conflicts
> 283 // between GNU and XSI versions of strerror_r, plain strerror
> is used.
> 284 // It's safe because this code is not used in any
> multithreaded environment.
>
> It is not at all clear to me that this code is not used in a
> multi-threaded environment. The VM is always multi-threaded. This
> usage was added here:
>
> http://mail.openjdk.java.net/pipermail/serviceability-dev/2015-December/018419.html
>
> But this code also uses strerror in another place (as does other code)
> so ...

I checked LinuxDebuggerLocal.java and all attach0 calls using this functionality are guarded by corresponding synchronized constructs. That's because nobody claims that ptrace_attach is thread-safe or re-entrant. It depends on the kernel implementation, and an attempt to use it from multiple
RFC: 8229469 JEP 386: Alpine Linux/x64 Port
Hi, JEP 386 is now Candidate [1] and as we resolved all outstanding issues within the Portola project I'd like to ask for comments from HotSpot, Core Libs (changes in libjli/java_md.c), and Build groups before moving the JEP to Proposed to Target: http://cr.openjdk.java.net/~avoitylov/webrev.8247589.01/ Testing: JTReg, VM TestBase on all platforms (Linux x86_64, Linux x86, Linux Arm, Linux AArch64, Linux PPC, Windows x86_64, Windows x86, Mac) with no regressions. A downport of these changes to 14 passes JCK 14 on these platforms. The port was tested on Alpine Linux 3.8 and 3.11 with JTReg, VM TestBase. There are no differences with Linux x86_64 with the exception of SA which is not supported as per the JEP. A downport of these changes to 14 passes JCK 14 on Alpine Linux. Thanks, -Aleksei [1] https://bugs.openjdk.java.net/browse/JDK-8229469
Re: serviceability agent : problems when using gcc LTO (link time optimization)
Volker, not a full answer, but here are some static size stats:

            x86_64   AArch64
  Server
    regular  23M      20M
    lto      17M      14M
  Minimal
    regular  4.9M     3.9M
    lto      4.7M     3.6M

-Aleksei

On 15/01/2020 16:40, Volker Simonis wrote:
> While we are speaking about all the drawbacks of LTO, it's still not
> clear what the benefits are? In the very first mail Matthias mentioned
> that there might be performance improvements but that performance is
> not the main driving factor behind this initiative. So is it the
> reduced code size (Matthias mentioned something around ~10%)?
>
> It would be nice to see some real numbers on various platforms for
> both the performance improvements for native parts like JIT/GC as
> well as for the size reduction.
>
> Aleksei Voitylov <mailto:aleksei.voity...@bell-sw.com> wrote on Tue., 14 Jan. 2020, 09:54:
>
> > On 14/01/2020 19:57, Baesken, Matthias wrote:
> > Hello Magnus and Aleksei, thanks for the input.
> >
> > The times you provided really look like they make a big difference
> > at least for people often building the minimal VM.
> > Guess I have to measure myself a bit (maybe the difference is not
> > that big on our linux s390x / ppc64(le)).
> >
> >> If the change to enable lto by default is proposed, what would be the
> >> recommended strategy for development?
> >>
> > Probably we should a) not enable it by default but just make sure
> > it can be enabled easily and works for the minimal VM
> That would be welcome. I have high hopes to LTO the VM some time by
> default, and the tendency observed is that the compiler time overhead
> for GCC becomes smaller. At the same time there is no reason why vendors
> that invested in testing and can absorb the build time hit could provide
> binaries with LTO-built VMs by passing an additional option flag.
> > or b) take it easy to disable it for local development.
> >
> > Best regards, Matthias
> >
> >> Magnus, Matthias,
> >>
> >> for me, lto is a little heavyweight for development.
x86_64 build time
> >> with gcc 7:
> >>
> >> Server 1m32.484s
> >> Server+Minimal 1m42.166s
> >> Server+Minimal (--with-jvm-features="link-time-opt") 5m29.422s
> >>
> >> If the change to enable lto by default is proposed, what would be the
> >> recommended strategy for development?
> >>
> >> For ARM32 Minimal, please keep in mind that it's not uncommon to disable
> >> the LTO plugin in commodity ARM32 gcc compiler distributions, so for some it
> >> does not matter what settings we have in OpenJDK. I believe there could
> >> be other reasons for that on top of build time (bugs?).
> >>
>
Re: serviceability agent : problems when using gcc LTO (link time optimization)
On 14/01/2020 19:57, Baesken, Matthias wrote: > Hello Magnus and Aleksei, thanks for the input . > > The times you provided really look like they make a big difference at least > for people often building minimal-vm . > Guess I have to measure myself a bit (maybe the difference is not that big > on our linux s390x / ppc64(le) ) . > >> If the change to enable lto by default is proposed, what would be the >> recommended strategy for development? >> > Probably we should a) do not enable it by default but just make sure it > can be enabled easily and works for the minimal-vm That would be welcome. I have high hopes to LTO the VM some time by default, and the tendency observed is that the compiler time overhead for GCC becomes smaller. At the same time there is no reason why vendors that invested in testing and can absorb the build time hit could provide binaries with LTO built VMs by passing an additional option flag. > or b) take it easy to disable it for local development. > > Best regards, Matthias > > > >> Magnus, Matthias, >> >> for me, lto is a little heavyweight for development. x86_64 build time >> with gcc 7: >> >> Server 1m32.484s >> Server+Minimal 1m42.166s >> Server+Minimal (--with-jvm-features="link-time-opt") 5m29.422s >> >> If the change to enable lto by default is proposed, what would be the >> recommended strategy for development? >> >> For ARM32 Minimal, please keep in mind that it's not uncommon to disable >> LTO plugin in commodity ARM32 gcc compiler distributions, so for some it >> does not matter what settings we have in OpenJDK. I believe there could >> be other reasons for that on top of build time (bugs?). >>
Re: serviceability agent : problems when using gcc LTO (link time optimization)
Magnus, Matthias,

for me, lto is a little heavyweight for development. x86_64 build time with gcc 7:

  Server                                                1m32.484s
  Server+Minimal                                        1m42.166s
  Server+Minimal (--with-jvm-features="link-time-opt")  5m29.422s

If the change to enable lto by default is proposed, what would be the recommended strategy for development?

For ARM32 Minimal, please keep in mind that it's not uncommon to disable the LTO plugin in commodity ARM32 gcc compiler distributions, so for some it does not matter what settings we have in OpenJDK. I believe there could be other reasons for that on top of build time (bugs?).

-Aleksei

On 14/01/2020 17:04, Magnus Ihse Bursie wrote:
> On 2020-01-14 13:49, Baesken, Matthias wrote:
>>
>> Hi Magnus, thanks for the info, I already noticed yesterday the
>> setting for arm-32 in the minimal build.
>>
>> Do you think we could set it too for the other Linux platforms in
>> the minimal build (the serviceability agent is not supported there as
>> well so the observed issue wouldn’t be a problem).
>>
> You mean if you could enable it on your builds without any issues? I'd
> guess so, but I don't know. Just try it:
> --with-jvm-features="link-time-opt".
>
> If you mean that it should be turned on by default on minimal builds
> for all platforms? No, I don't think that's a good idea. The link time
> is really a killer. I remember arm-32 going from like a couple of
> minutes to half an hour for linking libjvm.so.
>
> Things might be different with gold, though. I know they have done
> work with at least some kind of "lightweight" LTO, that might be worth
> at least looking into.
>
> /Magnus
>
>> Best regards, Matthias
>>
>> On 2020-01-10 11:01, Baesken, Matthias wrote:
>>
>> Hello, I recently looked into the gcc lto optimization mode
>> (see for some details https://gcc.gnu.org/onlinedocs/gccint/LTO-Overview.html
>> and http://hubicka.blogspot.com/2019/05/gcc-9-link-time-and-inter-procedural.html ).
>> >> This mode can lead to more compact binaries (~10% smaller) , it >> also might bring small performance improvements but that wasn't my >> (main) goal . >> >> The changes for this are rather small , one needs to use a recent >> gcc , add -flto to the compile flags , for example >> >> --- a/make/autoconf/flags-cflags.m4 Wed Jan 01 03:08:45 2020 >> +0100 >> >> +++ b/make/autoconf/flags-cflags.m4 Wed Jan 08 17:39:10 2020 +0100 >> >> @@ -530,8 +530,13 @@ >> >> fi >> >> if test "x$TOOLCHAIN_TYPE" = xgcc; then >> >> - TOOLCHAIN_CFLAGS_JVM="$TOOLCHAIN_CFLAGS_JVM -fcheck-new >> -fstack-protector" >> >> - TOOLCHAIN_CFLAGS_JDK="-pipe -fstack-protector" >> >> + TOOLCHAIN_CFLAGS_JVM="$TOOLCHAIN_CFLAGS_JVM -fcheck-new >> -fstack-protector -flto" >> >> + TOOLCHAIN_CFLAGS_JDK="-pipe -fstack-protector -flto" >> >> and you have to make sure to use gcc-ar and gcc-nm >> instead of ar / nm . >> >> Build and test(s) work, however with one exception. >> >> The serviceability tests like serviceability/sa seems to >> rely heavily on the "normal" structure of libjvm.so (from >> what I understand e.g. in LinuxVtblAccess it is attempted to >> access internal symbols like _ZTV ). 
>> >> Errors in the sa tests look like : >> >> java.lang.InternalError: Metadata does not appear to be polymorphic >> >> at >> jdk.hotspot.agent/sun.jvm.hotspot.types.basic.BasicTypeDataBase.findDynamicTypeForAddress(BasicTypeDataBase.java:279) >> >> at >> jdk.hotspot.agent/sun.jvm.hotspot.runtime.VirtualBaseConstructor.instantiateWrapperFor(VirtualBaseConstructor.java:102) >> >> at >> jdk.hotspot.agent/sun.jvm.hotspot.oops.Metadata.instantiateWrapperFor(Metadata.java:74) >> >> at >> jdk.hotspot.agent/sun.jvm.hotspot.memory.SystemDictionary.getClassLoaderKlass(SystemDictionary.java:96) >> >> at >> jdk.hotspot.agent/sun.jvm.hotspot.tools.ClassLoaderStats.printClassLoaderStatistics(ClassLoaderStats.java:93) >> >> at >> jdk.hotspot.agent/sun.jvm.hotspot.tools.ClassLoaderStats.run(ClassLoaderStats.java:78) >> >> at >> jdk.hotspot.agent/sun.jvm.hotspot.tools.JMap.run(JMap.java:115) >> >> at >> jdk.hotspot.agent/sun.jvm.hotspot.tools.Tool.startInternal(Tool.java:262) >> >> at >> jdk.hotspot.agent/sun.jvm.hotspot.tools.Tool.start(Tool.java:225) >> >> at >> jdk.hotspot.agent/sun.jvm.hotspot.tools.Tool.execute(Tool.java:118) >> >> at >> jdk.hotspot.agent/sun.jvm.hotspot.tools.JMap.main(JMap.java:176) >> >> at >> jdk.hotspot.agent/sun.jvm.hotspot.SALauncher.runJMAP(SALauncher.java:321) >> >> at >> jdk.hotspot.agent/sun.jvm.hotspot.SALauncher.main(SALauncher.java:406) >> >> Has anyone experimented with LTO optimization ? >> >> >> Hi Matthias, >> >> We used to have LTO enabled on the old, closed-source Oracle arm-32 >> builds.
Re: RFR(S): 8231612: 100% cpu on arm32 in Service Thread
Thank you Kim and Dmitry for your reviews! On 11/12/2019 16:42, Kim Barrett wrote: >> On Dec 11, 2019, at 7:53 AM, Aleksei Voitylov >> wrote: >> >> Kim, >> >> You are right. Here is an updated webrev: >> >> http://cr.openjdk.java.net/~avoitylov/webrev.8231612.05/ > Looks good. >
Re: RFR(S): 8231612: 100% cpu on arm32 in Service Thread
Kim, You are right. Here is an updated webrev: http://cr.openjdk.java.net/~avoitylov/webrev.8231612.05/ -Aleksei On 11/12/2019 03:03, Kim Barrett wrote: >> On Dec 10, 2019, at 9:05 AM, Aleksei Voitylov >> wrote: >> >> Kim, >> >> thank you for the suggestions. Here is an updated webrev: >> >> http://cr.openjdk.java.net/~avoitylov/webrev.8231612.04/ >> >> The assembly generated with inline functions is equivalent to that with >> the macros, and the testing passed as well. > This is mostly good. > > However, the new function names don't follow usual HotSpot style, > which is lower case with underscore word separators. (Yes, there > are *many* counterexamples.) > > Also, the minimal parenthesizing in SetByteInInt requires one to think > about operator precedence order more than I would like. What's there > is correct, but would be easier to read as > > 855 return (n & ~(0xff << bitsIdx)) | (b << bitsIdx); > > (We have some functions in globalDefinitions.hpp to supposedly help > with these kinds of calculations, but the supported types are often > wrong (as here), and there are missing operations (again, as here).) >
Re: RFR(S): 8231612: 100% cpu on arm32 in Service Thread
Kim,

thank you for the suggestions. Here is an updated webrev:

http://cr.openjdk.java.net/~avoitylov/webrev.8231612.04/

The assembly generated with inline functions is equivalent to that with the macros, and the testing passed as well.

-Aleksei

On 07/12/2019 02:54, Kim Barrett wrote:
>> On Dec 6, 2019, at 8:32 AM, Aleksei Voitylov wrote:
>>
>> Hi Kim,
>>
>> (cced hotspot-runtime as the updated webrev has code changes).
>>
>> Thank you for the suggestion to address this at the code level. The
>> following change makes GCC generate correct code:
>>
>> webrev: http://cr.openjdk.java.net/~avoitylov/webrev.8231612.03/
>> issue: https://bugs.openjdk.java.net/browse/JDK-8231612
>>
>> GCC alias analysis at the RTL level has some limitations [1]. In particular,
>> it sometimes cannot differentiate between pointer and integer and can
>> make incorrect assumptions about whether two pointers actually alias
>> each other. We are hitting this in
>> Atomic::CmpxchgByteUsingInt::operator(), which is used on some platforms.
>>
>> The solution is to reference the same memory region using only one
>> pointer. Anything with a second pointer here continues to confuse GCC,
>> and would anyway be fragile in view of this GCC bug, which surfaces
>> easily on ARM32.
>>
>> I considered making the get/set byte operations inline functions but
>> preferred the locality that defines bring.
>>
>> Tested with JTReg, JCK and jcstress on ARM32 and Solaris SPARC with no
>> regressions. The original problem also disappeared.
>> No regressions in JCK VM, Lang, JTReg HotSpot on Linux x86, x86_64,
>> AArch64, PPC, Mac.
>>
>> Thanks,
>>
>> -Aleksei
>>
>> [1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=49330
>>
>> On 19/11/2019 19:01, Kim Barrett wrote:
>>>> On Nov 19, 2019, at 10:18 AM, Aleksei Voitylov wrote:
>>>>
>>>> Kim,
>>>>
>>>> you are right, to play it safe we need to -fno-tree-pre all relevant
>>>> bool occurrences. I'll get back with an updated webrev when testing is
>>>> finished.
>>> I think just sprinkling the build system with -fno-tree-pre to address
>>> this doesn’t scale in the face of people writing new code.
>>>
>>> I think we need to find a way to write CmpxchgByteUsingInt in such a way
>>> that it doesn’t tickle this bug, or forbid byte-sized cmpxchg (which
>>> would be painful), or provide some entirely other implementation approach.
> This looks like a good solution to me.
>
> (I like this approach even without the gcc bug (which was an
> entertaining read itself); this change lets me delete an item from my
> followup list from templatizing Atomic.)
>
> A couple of really minor nits:
>
> (1) In the macros there are various expressions of the form
>
>     x << BitsPerByte * idx
>
> Please put parens around the shift calculation, e.g.
>
>     x << (BitsPerByte * idx)
>
> so readers don't need to remember the operator precedence order.
>
> (2) In this line:
>
> 871 uint32_t cur = SET_BYTE_IN_INT(*aligned_dest, canon_compare_value, idx);
>
> I'd like "*aligned_dest" replaced with "Atomic::load(aligned_dest)".
>
> In addition, I think functions would be preferable to macros for the
> helpers. Put them in the Atomic::CmpxchgByteUsingInt class, with
> inline definitions where you put the macro definitions.
Re: RFR(S): 8231612: 100% cpu on arm32 in Service Thread
Hi Kim,

(cced hotspot-runtime as the updated webrev has code changes).

Thank you for the suggestion to address this at the code level. The following change makes GCC generate correct code:

webrev: http://cr.openjdk.java.net/~avoitylov/webrev.8231612.03/
issue: https://bugs.openjdk.java.net/browse/JDK-8231612

GCC alias analysis at the RTL level has some limitations [1]. In particular, it sometimes cannot differentiate between pointer and integer and can make incorrect assumptions about whether two pointers actually alias each other. We are hitting this in Atomic::CmpxchgByteUsingInt::operator(), which is used on some platforms.

The solution is to reference the same memory region using only one pointer. Anything with a second pointer here continues to confuse GCC, and would anyway be fragile in view of this GCC bug, which surfaces easily on ARM32.

I considered making the get/set byte operations inline functions but preferred the locality that defines bring.

Tested with JTReg, JCK and jcstress on ARM32 and Solaris SPARC with no regressions. The original problem also disappeared. No regressions in JCK VM, Lang, JTReg HotSpot on Linux x86, x86_64, AArch64, PPC, Mac.

Thanks,

-Aleksei

[1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=49330

On 19/11/2019 19:01, Kim Barrett wrote:
>> On Nov 19, 2019, at 10:18 AM, Aleksei Voitylov wrote:
>>
>> Kim,
>>
>> you are right, to play it safe we need to -fno-tree-pre all relevant
>> bool occurrences. I'll get back with an updated webrev when testing is
>> finished.
> I think just sprinkling the build system with -fno-tree-pre to address
> this doesn’t scale in the face of people writing new code.
>
> I think we need to find a way to write CmpxchgByteUsingInt in such a way
> that it doesn’t tickle this bug, or forbid byte-sized cmpxchg (which
> would be painful), or provide some entirely other implementation
> approach.
Re: RFR(S): 8231612: 100% cpu on arm32 in Service Thread
Kim,

you are right, to play it safe we need to -fno-tree-pre all relevant bool occurrences. I'll get back with an updated webrev when testing is finished.

-Aleksei

On 19/11/2019 01:51, Kim Barrett wrote:
>> On Nov 18, 2019, at 8:10 AM, Aleksei Voitylov wrote:
>>
>> Hi,
>>
>> please review this build-only workaround for a GCC bug [1]. The problem
>> surfaced on ARM32 when new code [2] got inlined with old code. In GCC at
>> the RTL level there is no good distinction between pointer and int, which
>> makes it hard for the aliasing code to correctly recognize base
>> addresses for objects, which leads to incorrect aliasing. As a
>> consequence one of the loads in this code was completely eliminated on
>> ARM32 by DSE. All GCC versions I could reach are affected.
>>
>> Thanks to the GCC community, the fix will appear in GCC 10 but before it
>> becomes mainstream we need a workaround. The workaround is to disable
>> the -ftree-pre optimization on ARM32 which triggers the bug. The problem
>> does not surface on x86 in this code.
>>
>> issue: https://bugs.openjdk.java.net/browse/JDK-8231612
>> webrev: http://cr.openjdk.java.net/~avoitylov/webrev.8231612.02/
>>
>> Testing on ARM32: the problem described in 8231612 (100% CPU utilization
>> in the Service Thread) no longer appears, and the load is not eliminated,
>> as observed by disassembly.
>> Testing on x86_64: linux-x86_64-server-release builds OK after this change.
>>
>> Thanks,
>>
>> -Aleksei
>>
>> [1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=92462
>> [2] https://bugs.openjdk.java.net/browse/JDK-8226366
> This change to limit the optimization of OopStorage.cpp doesn't seem
> sufficient to me, based on my reading of the gcc bug thread. From that
> thread it sounds like any use of our CmpxchgByteUsingInt helper may be
> miscompiled. (And who knows what else...)
>
> There are certainly other uses of that helper.
On that platform I
> think it gets used for any Atomic::cmpxchg of a bool value; at least,
> that's where I *think* OopStorage is hitting it. (I'm guessing the
> problem is the cmpxchg in has_cleanup_work_and_reset.)
>
> Just egrepping for "cmpxchg\((true|false)" finds
>
> ./share/gc/g1/g1RemSet.cpp: bool marked_as_dirty = Atomic::cmpxchg(true, &_contains[region], false) == false;
> ./share/gc/g1/g1RemSet.cpp: return !Atomic::cmpxchg(true, &_collection_set_iter_state[region], false);
> ./share/gc/g1/g1RemSet.cpp: !Atomic::cmpxchg(true, &_fast_reclaim_handled, false)) {
> ./share/gc/z/zRootsIterator.cpp: if (!_claimed && Atomic::cmpxchg(true, &_claimed, false) == false) {
> ./share/gc/z/zRootsIterator.cpp: if (!_claimed && Atomic::cmpxchg(true, &_claimed, false) == false) {
> ./share/gc/shared/oopStorage.cpp: return Atomic::cmpxchg(false, _cleanup_requested, true);
> ./share/gc/shared/ptrQueue.cpp: Atomic::cmpxchg(true, &_transfer_lock, false)) {
> ./share/services/memTracker.cpp: if (Atomic::cmpxchg(true, _final_report_did_run, false) == false) {
>
> There may be others with non-literal new-value arguments. And there
> may be non-bool byte-sized cmpxchg's, though I don't offhand know of any.
>
> The two in zRootsIterator don't matter for arm32 (ZGC isn't supported
> on 32-bit platforms), but is this problem limited to arm32? The gcc
> bug thread suggests it isn't affecting x86_32/64, and might be limited
> to arm(32?).
>
> Also, the latest comment in the gcc thread says the proposed fix
> doesn't work, so maybe not (yet) fixed in gcc 10.
RFR(S): 8231612: 100% cpu on arm32 in Service Thread
Hi, please review this build-only workaround for a GCC bug [1]. The problem surfaced on ARM32 when new code [2] got inlined with old code. In GCC at RTL level there is no good distinction between pointer and int, which makes it hard for the aliasing code to correctly recognize base addresses for objects, which leads to incorrect aliasing. As a consequence one of the loads in this code was completely eliminated on ARM32 by DSE. All GCC versions I could reach are affected. Thanks to the GCC community, the fix will appear in GCC 10 but before it becomes mainstream we need a workaround. The workaround is to disable -ftree-pre optimization on ARM32 which triggers the bug. The problem does not surface on x86 in this code. issue: https://bugs.openjdk.java.net/browse/JDK-8231612 webrev: http://cr.openjdk.java.net/~avoitylov/webrev.8231612.02/ Testing on ARM32: the problem described in 8231612 (100% CPU utilization in Service Thread) no longer appears, the load is not eliminated as observed by disassembly. Testing on x86_64: linux-x86_64-server-release builds OK after this change. Thanks, -Aleksei [1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=92462 [2] https://bugs.openjdk.java.net/browse/JDK-8226366
RFR: JDK-8221988: add possibility to build with Visual Studio 2019
Hi,

Though the fix itself is trivial, testing took some time. Sorry about that. I'd suggest including Yasumasa Suenaga as one of the authors of this patch, as we crossed paths working in parallel on this and he provided valuable input on build testing on WSL. I'm equally fine if the patch from Yasumasa in this list above is taken, in which case I can submit another patch for the devkit.

Please review this fix to enable building with Visual Studio 2019. For Windows x86_64 this is a seamless migration, but I suggest keeping VS 2017 as the default and only supported toolchain for now. For those interested in VS 2019 runtime binary compatibility details, please refer to the section "VC Runtime in the latest MSVC v142 toolset is binary compatible with v140 and v141" in [1].

webrev: http://cr.openjdk.java.net/~avoitylov/webrev.8221988.01/
issue: https://bugs.openjdk.java.net/browse/JDK-8221988

Build testing: Windows x86_64 product and fastdebug build on Windows 2012R2. Yasumasa Suenaga also reported successfully building on WSL.

Testing: Windows 7, 10, 2012R2, 2016, 2019 64-bit jtreg with no regressions. The generated devkit was used to build JDK 11 and test JCK and jtreg, with no regressions.

I'm working on fixing Windows 32-bit with VS 2019 in a separate issue [2].

Thanks,

-Aleksei

[1] https://devblogs.microsoft.com/cppblog/cpp-binary-compatibility-and-pain-free-upgrades-to-visual-studio-2019/
[2] https://bugs.openjdk.java.net/browse/JDK-8222726
Re: RFC: JEP JDK-8208089: Implement C++14 Language Features
On 16/10/2018 02:25, Kim Barrett wrote:
>> On Oct 15, 2018, at 7:51 AM, Aleksei Voitylov wrote:
>>
>> Kim,
>>
>> If you were suggesting to just proceed with the change without giving
>> a sufficient lead time for the ports that were willing to upgrade to
>> do so, that sounds very alarming. What is "sufficient lead time"?
> I'm not proposing an answer, just suggesting that 3 months (approx
> time between JEP publication and JDK 12 fork) might be sufficient, and
> a worthy goal. That's a decision that ought to be made as part of the
> Targeting discussion for this JEP. Right now, it's not even a
> Candidate, since there's still work to be done.

I'm fine with that, and you probably can assume to document this goal in the JEP if no one speaks up.

>> Meanwhile, our testing has finished and I'm now confident we will be
>> able to switch the ARM port over to 7.x in the 12 time frame. There
>> are several issues that we still need to diagnose, but this does not
>> look like a blocker.
> That's good news, though pretty much what I expected. I'm more worried
> about Solaris. Having a goal of JDK 12 may help move things along.
Re: RFC: JEP JDK-8208089: Implement C++14 Language Features
Kim,

If you were suggesting to just proceed with the change without giving a sufficient lead time for the ports that were willing to upgrade to do so, that sounds very alarming.

Meanwhile, our testing has finished and I'm now confident we will be able to switch the ARM port over to 7.x in the 12 time frame. There are several issues that we still need to diagnose, but this does not look like a blocker.

-Aleksei

On 11/10/2018 22:23, Kim Barrett wrote:
>> On Oct 8, 2018, at 4:45 PM, Aleksei Voitylov wrote:
>>
>> Kim,
>>
>> Let's not underestimate the importance of continuous testing throughout
>> the release cycle. Over the last year alternative platforms and
>> configurations were broken accidentally not once (and I think the
>> number is reaching 50 or something). Suggesting a platform to be "not
>> supported for a release or two" may mean that by the time the compiler
>> is good, the amount of issues to fix for a platform to regain quality
>> may become a blocker for the next LTS. I really hope this is not the
>> option Oracle is proposing.
> My impression is that most of these were not toolchain issues per se.
> Rather, they were broken or incomplete changes in platform-dependent
> code that weren't tested on these "alternative" platforms before being
> pushed. Oracle has provided dev-submit so that non-Oracle folks can do
> some minimal testing on all the platforms that Oracle supports. I know
> that I would sometimes like to have similarly "easy" testing for
> various platforms not supported by Oracle.
>
> I didn't suggest "no testing" if there is a compiler version that
> supports the new language standards but has not yet been fully
> product-qualified by the people who are using it. I think gcc on arm
> may be in that category. But I think it would be very disappointing if
> the complete lack of a usable version of some compiler were to
> completely block us in this area. (Unfortunately, such a lack appears
> to be the situation for the XLC compiler used for the AIX/ppc port.)
> The proposal is not very aggressive.
>
>> We both know what and how long it takes to do a thorough OpenJDK
>> compiler upgrade. If the community were made aware of this earlier, I
>> would have definitely started planning for a compiler upgrade for the
>> ARM port in JDK 11.
> I understand that it takes time and effort to do a toolchain upgrade.
> And turning on support for a new language version without changing the
> toolchain version isn't really all that different.
>
> This proposal didn't suddenly appear out of nowhere. There has been
> wistful discussion about using new language features for a long time,
> with an understanding that going anywhere with that was difficult
> because of some popular toolchain deficiencies (notably MSVC++) and the
> need to upgrade others. There have been ongoing efforts in this
> direction, e.g. various changes to support building with C++11/14
> support enabled. Oracle made toolchain upgrades in JDK 11, in part to
> support the language standard change. (Unfortunately, the relevant
> toolchain upgrade JEP was labelled "Confidential", even though a lot of
> the work was in the open, and some of it was explicitly about dealing
> with differences arising from turning on C++11/14.)
>
> I think JDK 12 for this JEP is a reasonable goal at this stage. Of
> course, that goal might not be achieved, for a variety of reasons.
> That's why it is not yet in the Targeted state and why there is a
> formal process for moving to that state; there's still work to be done,
> and we'll have to see how that work proceeds.
>
>> With that, I conditionally agree with the proposal provided cooperating
>> ports are given sufficient lead time to upgrade. We started testing ARM
>> with 7.3 and will report on success.
Re: RFC: JEP JDK-8208089: Implement C++14 Language Features
Kim,

Let's not underestimate the importance of continuous testing throughout the release cycle. Over the last year alternative platforms and configurations were broken accidentally not once (and I think the number is reaching 50 or something). Suggesting a platform to be "not supported for a release or two" may mean that by the time the compiler is good, the amount of issues to fix for a platform to regain quality may become a blocker for the next LTS. I really hope this is not the option Oracle is proposing.

We both know what and how long it takes to do a thorough OpenJDK compiler upgrade. If the community were made aware of this earlier, I would have definitely started planning for a compiler upgrade for the ARM port in JDK 11.

With that, I conditionally agree with the proposal provided cooperating ports are given sufficient lead time to upgrade. We started testing ARM with 7.3 and will report on success.

-Aleksei

On 08/10/2018 22:34, Kim Barrett wrote:
>> On Oct 6, 2018, at 11:07 AM, Aleksei Voitylov wrote:
>>
>> Kim, from an ARM port perspective, the stable GCC version is 4.9. The
>> port compiles with stock GCC 7.3 but a lot of testing is required
>> before moving to GCC 7.3. I agree on the overall direction and we'll
>> commit resources to testing it further, but from an ARM port
>> perspective it may happen that JDK 12 is a little too optimistic. GCC
>> on x86 is a lot more stable than on ARM32 and AARCH64.
> It seems to me that JDK 11 being an LTS might provide an answer to
> this. JDK 12 support for C++11/14 on arm32/aarch64 might be somewhat
> "experimental" in that it might require a more recent compiler version
> that hasn't received as much testing as was done for JDK 11. Similarly,
> the AIX/ppc platform might not be supported after JDK 11 until an
> improved version of the relevant compiler becomes available. I don't
> know if such an approach is acceptable to the community, nor do I know
> how such a decision might be made. But I think it would be unfortunate
> if such issues blocked progress in this area.
Re: RFC: JEP JDK-8208089: Implement C++14 Language Features
Kim, from an ARM port perspective, the stable GCC version is 4.9. The port compiles with stock GCC 7.3, but a lot of testing is required before moving to GCC 7.3. I agree on the overall direction and we'll commit resources to testing it further, but from an ARM port perspective JDK 12 may turn out to be a little too optimistic. GCC on x86 is a lot more stable than on ARM32 and AArch64.

-Aleksei

On 05/10/2018 00:09, Kim Barrett wrote:

On Oct 4, 2018, at 9:17 AM, Aleksei Voitylov wrote:

Kim, Thank you for posting this. It's an important step to make. I wanted to clarify a couple of points: 1. Since this is an infrastructure JEP, is the version of the JDK which will undergo such changes going to be some future version, or does it apply to past versions as well? I'm concerned with how we can simplify security backports to 8u, which (I currently assume) is not subject to this change.

Future version (perhaps as soon as JDK 12). I don't think there is any desire to backport this change. And yes, introducing the use of new language features will make backports even more interesting than they already are. That seems unavoidable if we're going to pursue this direction.

2. Which versions of GCC do you tentatively consider at this point? Non-x86 ports may have a problem upgrading to a specific version of GCC which the shared code will use and may need additional lead time to adjust.

I think our (ad hoc) testing has been limited to gcc 7.x. But looking at the gcc language conformance tables and release notes, gcc 5.x might be good enough, and gcc 6.x seems fairly likely sufficient. Of course, older versions are more likely to have bugs in some of these new features. Updating the minimum supported versions for gcc and clang is noted as TBD in the JEP.
Re: RFC: JEP JDK-8208089: Implement C++14 Language Features
Kim,

Thank you for posting this. It's an important step to make. I wanted to clarify a couple of points:

1. Since this is an infrastructure JEP, is the version of the JDK which will undergo such changes going to be some future version, or does it apply to past versions as well? I'm concerned with how we can simplify security backports to 8u, which (I currently assume) is not subject to this change.

2. Which versions of GCC do you tentatively consider at this point? Non-x86 ports may have a problem upgrading to a specific version of GCC which the shared code will use and may need additional lead time to adjust.

Thanks,
-Aleksei

On 03/10/2018 22:13, Kim Barrett wrote:

I've submitted a JEP for (1) enabling the use of C++14 language features when building the JDK, (2) defining a process for deciding and documenting which new features can be used or are forbidden in HotSpot code, and (3) providing an initial list of permitted and forbidden new features. https://bugs.openjdk.java.net/browse/JDK-8208089
Re: RFR: 8209093 - JEP 340: One AArch64 Port, Not Two
Bob,

Thank you for doing this. In the meanwhile, some of my fixes were pushed that invalidated your diff, for which I apologize. Here is an updated version of your patch which applies cleanly: http://cr.openjdk.java.net/~avoitylov/webrev.8209093.02/

-Aleksei

On 23/09/2018 18:45, Boris Ulasevich wrote:

Hi Bob, I checked your changes with jtreg and jck. I see no new fails introduced by this change. regards, Boris

On 19.09.2018 13:30, Boris Ulasevich wrote:

Hi Bob, Thank you for this job! I have started testing. Will come back with results next week :) The changes are OK for me. Please see some minor comments below. regards, Boris

On 17.09.2018 20:49, Aleksey Shipilev wrote:

On 09/17/2018 07:00 PM, Vladimir Kozlov wrote:

On 9/17/18 9:20 AM, Bob Vandette wrote:

Please review the changes related to JEP 340 which removes the second 64-bit ARM port from the JDK. http://cr.openjdk.java.net/~bobv/8209093/webrev.01

I read through the webrev, and it seems to be a clean removal. It also feels very satisfying to drop 15 KLOC of ifdef-ed code. Minor nits:

*) In src/hotspot/cpu/arm/c1_LIRAssembler_arm.cpp and src/hotspot/cpu/arm/interp_masm_arm.cpp we have "#if defined(ASSERT)", which can be just "#ifdef ASSERT" now

*) Ditto in src/hotspot/cpu/arm/jniFastGetField_arm.cpp we have "#if defined(__ABI_HARD__)"

*) In src/hotspot/cpu/arm/macroAssembler_arm.hpp, the comment below seems to apply to AArch64 only.

Yes, the note looks like an AArch64-specific comment, but I think it is valid for arm32 too. So the change is OK for me.

Yet, only the first line of the comment is removed. I think we should instead convert that comment into a "TODO:" mentioning this might be revised after AArch64 removal?
992 void branch_if_negative_32(Register r, Label& L) {
993   // Note about branch_if_negative_32() / branch_if_any_negative_32() implementation for AArch64:
994   // tbnz is not used instead of tst & b.mi because destination may be out of tbnz range (+-32KB)
995   // since these methods are used in LIR_Assembler::emit_arraycopy() to jump to stub entry.
996   tst_32(r, r);
997   b(L, mi);
998 }

*) In src/hotspot/cpu/arm/stubGenerator_arm.cpp, lines 1204..1205 and 1435..1436 can be merged together:

1203 // Generate the inner loop for shifted forward array copy (unaligned copy).
1204 // It can be used when bytes_per_count < wordSize, i.e.
1205 // byte/short copy

1434 // Generate the inner loop for shifted backward array copy (unaligned copy).
1435 // It can be used when bytes_per_count < wordSize, i.e.
1436 // byte/short copy

*) In src/hotspot/cpu/arm/stubGenerator_arm.cpp, the comment changed incorrectly around line 3363? I believe both comments (lines 3188 and 3363) should mention SP[0] (not R4) as an input parameter now.

- // R4 (AArch64) / SP[0] (32-bit ARM) - element count (32-bit int)
+ // R4 - element count (32-bit int)

*) In src/hotspot/cpu/arm/templateInterpreterGenerator_arm.cpp, the "R4"/"R5" comments are missing:

- const Register Rsig_handler = AARCH64_ONLY(R21) NOT_AARCH64(Rtmp_save0 /* R4 */);
- const Register Rnative_code = AARCH64_ONLY(R22) NOT_AARCH64(Rtmp_save1 /* R5 */);
+ const Register Rsig_handler = Rtmp_save0;
+ const Register Rnative_code = Rtmp_save1;

Thanks,
-Aleksey
Re: [11] RFR (XXS): 8206903: Unable to build Client VM with JVMCI
Erik, Vladimir, thank you for the review. Could either of you sponsor the change?

-Aleksei

On 11/07/2018 19:30, Vladimir Kozlov wrote:

+1 Vladimir

On 7/11/18 9:28 AM, Erik Joelsson wrote:

Looks good. /Erik
[11] RFR (XXS): 8206903: Unable to build Client VM with JVMCI
Hi,

I suggest disabling JVMCI in the default Client VM build and leaving it in the Server VM. It's hard to justify its presence in the default Client VM: there is no Tiered Compilation support in the Client VM, and it also crashes the Client VM during build.

webrev: http://cr.openjdk.java.net/~dchuyko/aleksei.voitylov/8206903/webrev.01/
bug: https://bugs.openjdk.java.net/browse/JDK-8206903

OK for 11?

Thanks,
-Aleksei
Re: Supported platforms
Hi David,

Speaking about the arm/ port, BellSoft has been releasing JCK-verified binaries (as provided under the OpenJDK license) from the arm/ port for the Raspberry Pi for JDK 9 [1] for a while, and recently released one for JDK 10 [2], including OpenJFX and Minimal VM support. On Raspberry Pi 2 (ARMv7) and Raspberry Pi 3 (an ARMv8 chip running Raspbian), the binaries produced from this port pass all the required testing, including the new features recently open-sourced by Oracle (such as AppCDS).

As far as JDK 11 is concerned, we are keeping track of the changes, recently fixed a couple of build issues that slipped in [3, 4], are working on Minimal Value Types support and, from what I can tell, will need to look into JFR and Nest-Based Access Control. Please let us know if we missed anything and need to prepare for some other new features for porting. The intent is to keep the arm/ port in good shape and provide well-tested binaries for the Raspberry Pi.

I believed Oracle was aware of BellSoft producing binaries from this port [5], [6]. Based on Twitter, it seems like at least some engineers at Red Hat and SAP are aware of it. Let me know if there is anything else we need to do to spread the word about it with Oracle engineering. For now, Boris (cc'ed) is the engineer at BellSoft working on supporting the arm/ port for the Raspberry Pi.

Other than that, I really wonder what "stepping up to take ownership of a port" means when the port is upstream, and whether there is some procedure we need to follow.
Thanks,
-Aleksei

[1] https://bell-sw.com/java-for-raspberry-pi-9.0.4.html
[2] https://bell-sw.com/java-for-raspberry-pi.html
[3] https://bugs.openjdk.java.net/browse/JDK-8200628
[4] https://bugs.openjdk.java.net/browse/JDK-8198949
[5] https://twitter.com/java/status/981239157874941955
[6] https://twitter.com/DonaldOJDK/status/981874485979746304

We are in a situation where previously "supported" platforms (by Oracle) are no longer supported as, AFAIK, no one has stepped up to take ownership of said platforms - which is a requirement for getting a new port accepted into mainline. Without such ownership the code may not only bit-rot, it may in time be stripped out completely. Any interested parties would then need to look at (re)forming a port project for that platform to keep it going in OpenJDK (or of course they are free to take it elsewhere).

Cheers,
David

On 09/04/2018 18:35, White, Derek wrote:

Hi Magnus,

-----Original Message-----
Date: Mon, 9 Apr 2018 09:55:09 +0200
From: Magnus Ihse Bursie
To: Simon Nash, b...@juanantonio.info
Cc: build-dev@openjdk.java.net, hotspot-dev developers
Subject: Re: Supported platforms

Simon,

On 2018-04-08 16:30, Simon Nash wrote:

On 05/04/2018 02:26, b...@juanantonio.info wrote:

Many thanks for the link about the platforms supported: http://www.oracle.com/technetwork/java/javase/documentation/jdk10certconfig-4417031.html

This appears to be a list of the platforms that are supported (certified) by Oracle. Where can I find the list of platforms that are supported by OpenJDK? For example, what about the following platforms that don't appear on the Oracle list: Windows x86, Linux x86, aarch32 (ARMv7 32-bit), aarch64 (ARMv8 64-bit). Are all these supported for OpenJDK 9, 10 and 11?

There is actually no such thing as a "supported OpenJDK platform".
While I hope things may change in the future, OpenJDK as an organization does not publicize any list of "supported" platforms. Oracle publishes a list of platforms they support, and I presume that Red Hat and SAP and others do the same, but the OpenJDK project itself does not. With that said, platforms which were previously supported by Oracle (like the ones you mentioned) tend to still work more or less well, but they receive little or no testing and are prone to bit rot.

/Magnus

Surely you meant to say "receive no or little testing BY ORACLE, and ORACLE IS NOT RESPONSIBLE FOR ANY bit rot."

I haven't found a definitive list of supported OpenJDK platforms, but have an ad-hoc list of publicly available binaries:

- Major Linux distros are supporting x64 and aarch64 (arm64), and probably other platforms.
- AdoptOpenJDK provides tested builds for most 64-bit platforms (x64, aarch64, ppc64, s390): https://adoptopenjdk.net/releases.html
- BellSoft provides support for 32-bit ARMv7: https://bell-sw.com/products.html
- Azul provides 32-bit x86 and ARMv7 binaries as well as 64-bit x86 and aarch64: https://www.azul.com/downloads/zulu/

I'm sure there are several others I've missed - sorry!

- Derek
Re: Supported platforms
David, see inline:

On 10/04/2018 11:00, David Holmes wrote:

Hi Aleksei, This is all news to me. Good news, but unexpected. As far as I was aware, the 32-bit ARM port was dying a slow death and would eventually get removed. 64-bit ARM is of course very much alive and well under the AArch64 porters - though I'm unclear if you are using that code for ARMv8 or the Oracle-contributed code that used to be closed?

You are welcome :) We are doing everything possible to keep it running, so I don't see any reason within OpenJDK for it to be removed. Regarding the ARMv8 port, we are working with Cavium, Red Hat, and Linaro on supporting the AArch64 port.

I'm also unclear whether you are pushing changes back up to OpenJDK for these platforms, or maintaining them locally? I haven't noticed anything (other than build tweaks) and am curious about the release of a Minimal VM for JDK 10, given the Minimal VM effectively died along with the stand-alone Client VM.

We push everything upstream. I'm not sure why you believe the Minimal VM and Client VM died in OpenJDK 10. From what I remember, there was some decision related to the Client VM for Oracle binaries, but support for C1 and the Minimal VM was not removed from the OpenJDK codebase.
This is what I get with BellSoft Liberica binaries built from OpenJDK on Raspberry Pi:

pi@rpi-3:~ $ java -version
openjdk version "10-BellSoft" 2018-03-20
OpenJDK Runtime Environment (build 10-BellSoft+0)
OpenJDK Server VM (build 10-BellSoft+0, mixed mode)

pi@rpi-3:~ $ java -minimal -version
openjdk version "10-BellSoft" 2018-03-20
OpenJDK Runtime Environment (build 10-BellSoft+0)
OpenJDK Minimal VM (build 10-BellSoft+0, mixed mode)

pi@rpi-3:~ $ java -client -version
openjdk version "10-BellSoft" 2018-03-20
OpenJDK Runtime Environment (build 10-BellSoft+0)
OpenJDK Client VM (build 10-BellSoft+0, mixed mode)

pi@rpi-3:~ $ java -minimal -XX:+PrintFlagsFinal HW | grep C1
bool C1OptimizeVirtualCallProfiling = true {C1 product} {default}
bool C1ProfileBranches = true {C1 product} {default}
bool C1ProfileCalls = true {C1 product} {default}
bool C1ProfileCheckcasts = true {C1 product} {default}
bool C1ProfileInlinedCalls = true {C1 product} {default}
bool C1ProfileVirtualCalls = true {C1 product} {default}
bool C1UpdateMethodData = false {C1 product} {default}
bool InlineSynchronizedMethods = true {C1 product} {default}
bool LIRFillDelaySlots = false {C1 pd product} {default}
bool TimeLinearScan = false {C1 product} {default}
bool UseLoopInvariantCodeMotion = true {C1 product} {default}
intx ValueMapInitialSize = 11 {C1 product} {default}
intx ValueMapMaxLoopSize = 8 {C1 product} {default}

Minimal VM and Client VM include C1, and Server VM includes C1 and C2. All (Client VM, Server VM, Minimal VM) were tested and work as expected.

For JDK 11 you will need to do some work for Condy (if not already done) as well as JFR and Nest-Based Access Control (which you can see in the nestmates branch of the Valhalla repo), as you mention below. Not sure what else may be needed. There's been a lot of code refactoring and include file changes that have impact on platform-specific code as well.

Thanks for the heads-up!
-Aleksei

Cheers,
David
Re: Supported platforms
David,

> Sorry I was mis-remembering. Of course C1 and Minimal still exist in the 32-bit code. Though I would be concerned that with a focus on 64-bit it will be easy for engineers to overlook things that should be inside one of the INCLUDE_XXX guards. (Particularly as we don't do any 32-bit builds and for the most part don't even have the capability to perform them.)

It seems like BellSoft is not alone in building 32-bit code, so we hope the community will notice if something gets broken. That said, it does not seem like too big of a deal to have, say, Linux x86 32-bit builds running in any build farm. This should catch most of the INCLUDE_XXX types of errors.

-Aleksei

On 10/04/2018 12:24, David Holmes wrote:

> On 10/04/2018 7:17 PM, Aleksei Voitylov wrote:
>> We push everything upstream.
> I don't recall seeing anything related to 32-bit ARM, but perhaps it's all in areas I don't see (like compiler and gc).
> Cheers, David