Re: RFR: 7133124 Remove redundant packages from JAR command line
John,

Actually, the goal of my letter is not to promote a new integration scheme, just to remind us that we need to put some effort into internal process review and optimization. But see answers below (inline):

The integration method I mentioned is often used in open source projects because it doesn't require any special infrastructure for external committers. The only thing necessary for a safe commit is write access to the integration (-gate) workspace.

On 2012-01-30 06:35, John Coomes wrote:

We have chosen a model: build-test-integrate, but we may consider a different approach: integrate-build-test-[backout if necessary]

In that model, you can never rely on the repository having any degree of stability. It may not even build at a given moment.

What happens today if Developer A and Developer B change the same line of a source file? What happens today if Developer A changes some_func() but Developer B relies on some_func()? We would get a fault *after* all integration tests, and SQE would file one more nightly bug. By the time someone investigates it and provides the fix, the bad code will have been distributed to all dev workspaces.

Developer (A) integrates his changeset into the integration workspace. The bot takes a snapshot and starts building/testing. Developer (B) integrates his changeset into the integration workspace. The bot takes a snapshot and starts building/testing. If job A failed, the bot locks the integration ws, restores it to the pre-A state, applies the B patch, and unlocks the ws.

Don't forget the trusting souls that pulled from the integration repo after A inflicted the breakage: they each waste time cleaning up a copy of A's mess.

Nobody pulls from the -gate repository today and nobody is expected to do it. The -gate to ws merge continues as usual. To remove a faulty changeset we need about fifteen minutes for the whole jdk at worst.

-Dmitry

-John

On 2012-01-29 23:52, Kelly O'Hair wrote: On Jan 29, 2012, at 10:23 AM, Georges Saab wrote:

I'm missing something. How can everybody using the exact same system scale to 100's of developers?

System = distributed build and test of OpenJDK

Ah ha... I'm down in the trenches dealing with dozens of different OS's, arch's, variation machines. You are speaking at a higher level; I need to crawl out of the basement.

Developers send in jobs. Jobs are distributed across a pool of (HW/OS) resources. The resources may be divided into pools dedicated to different tasks (RE/checkin/perf/stress). The pools are populated initially according to predictions of load and then increased/rebalanced according to data on actual usage. No assumptions are made about what exists on the machine other than HW/OS. The build and test tasks are self-sufficient, i.e. they bootstrap themselves. The bootstrapping is done in the same way for different build and test tasks.

Understood. We have talked about this before. I have also been on the search for the Holy Grail. ;^) This is why I keep working on JPRT.

The only scaling aspect that seems at all challenging is that the current checkin system is designed to serialize checkins in a way that apparently does not scale -- here there are some decisions to be made and tradeoffs, but this is nothing new in the world of open community development (or any large team development, for that matter).

The serialized-checkins issue can be minimized somewhat by using distributed SCMs (Mercurial, Git, etc.), using separate forests (fewer developers per source repository means fewer merge/sync issues), and having an integrator merge into a master. This has proven to work in many situations, but it also creates delivery-to-master delays, especially if the integration process is too heavyweight. The JDK projects have been doing this for a long time; I'm sure many people have opinions as to how successful it is or isn't. It is my opinion that merges/syncs are some of the most dangerous things you can do to a source base, and anything we can do to avoid them is usually goodness. I don't think you should scale this without some very great care.

And that one system will naturally change over time too, so unless you are able to prevent all change to a system (impossible with security updates etc.), every use of that 'same system' will be different.

Yes, but it is possible to control this update and have a staging environment, so you know that a HW/OS update will not break the existing successful build when rolled out to the build/test farm.

Possible, but not always easy. The auto-updating of everything has increased significantly over the years, making it harder to control completely. I've been doing this build & test stuff long enough to never expect anything to be 100% reliable. Hardware fails, software updates regress functionality, networks become unreliable, humans trip over power cords, virus scanners break things, etc. It just happens, and often it's not very predictable or reproducible. You can do lots of things to minimize issues, but at some point you just have to accept a few risks because
Re: RFR: 7133124 Remove redundant packages from JAR command line
Dmitry,

I think this discussion diverged somewhat from the original topic, but I do agree with you that we must also attack the problem on a process level.

With the model you propose (and also the existing model), I would also like to stress the need for continuous and automatic builds triggered by incoming new changes, compared against the last working change. Having that, it is possible to update labels (tags) for last_clean_build, last_nightly_build, etc. That way any build breakage would only be visible at the tip. However, when submitting a new change, care should be taken to do it against a working tip, so that it builds and tests correctly (personal check-in testing). Actually, this is close to the model we had for the JRockit source base. WLS also uses the model of sliding labels for last_clean_build, where developers most often only do partial builds themselves.

Regarding external committers, I think they need both the option of building and testing locally, as well as access to an OpenJDK-specific queue for build and test submissions. By making use of cross compilation and open tools it is possible to at least verify that the product builds locally. Even better is to also supply preconfigured VMs with the necessary standard build and test environment (e.g. the obsolete Solaris or Linux distros that we require). In general, having a build and test setup that is automatically configurable (including Windows) will help both internal and external developers (see also the build-infra project).

/Robert

On 01/30/2012 10:09 AM, Dmitry Samersoff wrote: John, Actually the goal of my letter is not to promote new integration scheme. Just to remind that we need to put some efforts to internal process review and optimization. [...]
Request for Review: 7132199: sun/management/jmxremote/bootstrap/JvmstatCountersTest.java failing on all platforms
Please review the following fix.

Webrev: http://cr.openjdk.java.net/~sla/7132199/webrev.00/
Bug: http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7132199

The problem is that HotSpot will always create the .java_pid1234 socket/door file in /tmp (see CR 7009828). The JDK will currently look first in the current directory and then in java.io.tmpdir. If java.io.tmpdir has the default value of /tmp this works, but if the user has set it to something else it doesn't. My fix hardcodes /tmp in LinuxVirtualMachine.java and SolarisVirtualMachine.java. The same fix will be needed in BsdVirtualMachine.java eventually.

Thanks,
/Staffan
Re: Request for Review: 7132199: sun/management/jmxremote/bootstrap/JvmstatCountersTest.java failing on all platforms
Staffan,

1. Why can't we use System.getProperty("java.io.tmpdir")?

2. If you decide to hardcode /tmp, please create a global constant for it.

-Dmitry

On 2012-01-30 14:05, Staffan Larsen wrote: Please review the following fix. Webrev: http://cr.openjdk.java.net/~sla/7132199/webrev.00/ Bug: http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7132199 [...]

--
Dmitry Samersoff
Java Hotspot development team, SPB04
* There will come soft rains ...
Re: Request for Review: 7132199: sun/management/jmxremote/bootstrap/JvmstatCountersTest.java failing on all platforms
Hi Staffan,

I'm somewhat confused by this problem. 6938627 is the CR that changed from /tmp to using the java.io.tmpdir property, but that CR did not modify the files that you have modified - so what broke these files?

Looking at your fix, wouldn't it be simpler to just set the static tmpdir to /tmp and leave the rest of the code as-is? Why do you stop looking in the cwd in addition to changing the tmp location? Is it because hotspot will never write it there?

Thanks,
David

On 30/01/2012 8:05 PM, Staffan Larsen wrote: Please review the following fix. Webrev: http://cr.openjdk.java.net/~sla/7132199/webrev.00/ Bug: http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7132199 [...]
Re: Request for Review: 7132199: sun/management/jmxremote/bootstrap/JvmstatCountersTest.java failing on all platforms
On 30 jan 2012, at 12:23, Dmitry Samersoff wrote:

1. Why can't we use System.getProperty(java.io.tmpdir)?

Since HotSpot and the tools run in separate processes, it is important that the file is in a well-known global location. It cannot be in different locations for different processes. Since java.io.tmpdir can be set on the command line (which JPRT does), it can be different in the tools and in HotSpot. HotSpot will always write the file to /tmp/.java_pidXXX regardless of the value of java.io.tmpdir (see 7009828). Thus, the tools need to always look there.

2. If you decide to hardcode /tmp, please create a global constant for it.

I don't agree that this would make the code easier to read or maintain. I should, however, include a comment saying that the file is always in /tmp regardless of the value of java.io.tmpdir.

On 30 jan 2012, at 12:49, David Holmes wrote:

I'm somewhat confused by this problem. 6938627 is the CR that changed from /tmp to using the java.io.tmpdir property, but that CR did not modify the files that you have modified - so what broke these files?

7009828 then changed HotSpot back to using /tmp always. It seems like this has been going back and forth for a while… I don't really know the whole history.

Looking at your fix, wouldn't it be simpler to just set the static tmpdir to /tmp and leave the rest of the code as-is? Why do you stop looking in the cwd in addition to changing the tmp location? Is it because hotspot will never write it there?

Changing the tmpdir static would be a smaller fix, but all the cwd code would then remain. Yes, HotSpot never writes to cwd.

Thanks,
/Staffan
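Staffan's point can be illustrated with a minimal sketch. This is not the actual sun.tools.attach code; the class and method names here are invented for illustration. It just shows the lookup the tools need: probe a hardcoded /tmp for HotSpot's attach file, rather than trusting java.io.tmpdir, which can differ between the tool process and the target VM.

```java
import java.io.File;

// Illustrative sketch only -- not the real LinuxVirtualMachine code.
// HotSpot creates the attach socket/door file in /tmp unconditionally
// (see 7009828), so the client side must probe /tmp, not java.io.tmpdir.
public class AttachFileProbe {
    // Hardcoded on purpose: java.io.tmpdir may be set differently in the
    // tool process and in the target VM, but /tmp is the same for both.
    private static final String TMPDIR = "/tmp";

    static File socketFile(int pid) {
        return new File(TMPDIR, ".java_pid" + pid);
    }

    public static void main(String[] args) {
        // Prints /tmp/.java_pid1234 on Linux/Solaris.
        System.out.println(socketFile(1234).getPath());
    }
}
```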
hg: hsx/hotspot-rt/hotspot: 7022100: Method annotations are incorrectly set when redefining classes
Changeset: 26a08cbbf042
Author:    stefank
Date:      2012-01-27 13:46 +0100
URL:       http://hg.openjdk.java.net/hsx/hotspot-rt/hotspot/rev/26a08cbbf042

7022100: Method annotations are incorrectly set when redefining classes
Summary: Changed to the correct annotation arrays
Reviewed-by: kamg, dholmes, sla

! src/share/vm/oops/instanceKlass.hpp
hg: jdk8/tl/jdk: 7132378: Race in FutureTask if used with explicit set ( not Runnable )
Changeset: f9fb8c4b4550
Author:    dl
Date:      2012-01-30 11:44 +
URL:       http://hg.openjdk.java.net/jdk8/tl/jdk/rev/f9fb8c4b4550

7132378: Race in FutureTask if used with explicit set ( not Runnable )
Reviewed-by: chegar, dholmes

! src/share/classes/java/util/concurrent/FutureTask.java
+ test/java/util/concurrent/FutureTask/DoneTimedGetLoops.java
+ test/java/util/concurrent/FutureTask/ExplicitSet.java
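For context, the "explicit set (not Runnable)" usage the bug title refers to is a FutureTask subclass that delivers its result directly via the protected set() method instead of computing it in run(). A minimal sketch of that pattern (class names invented here; the actual regression test is the ExplicitSet.java added by the changeset):

```java
import java.util.concurrent.FutureTask;

public class ExplicitSetDemo {
    // A FutureTask whose result is handed in directly via set(),
    // not computed by run() -- the pattern named in 7132378.
    static class SettableFuture<V> extends FutureTask<V> {
        SettableFuture() {
            super(() -> null); // placeholder Callable, never actually run
        }
        @Override
        public void set(V v) { // widen protected FutureTask.set to public
            super.set(v);
        }
    }

    public static void main(String[] args) throws Exception {
        SettableFuture<String> f = new SettableFuture<>();
        new Thread(() -> f.set("done")).start();
        // The race fixed by 7132378 was in this handoff: a timed get()
        // could return before a concurrent set() had fully completed.
        System.out.println(f.get()); // prints "done"
    }
}
```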
hg: hsx/hotspot-rt/hotspot: 2 new changesets
Changeset: f457154eee8b
Author:    brutisso
Date:      2012-01-30 12:36 +0100
URL:       http://hg.openjdk.java.net/hsx/hotspot-rt/hotspot/rev/f457154eee8b

7140882: Don't return booleans from methods returning pointers
Summary: Changed return false to return NULL
Reviewed-by: dholmes, rottenha
Contributed-by: dbh...@redhat.com

! src/share/vm/oops/constantPoolOop.cpp
! src/share/vm/opto/loopnode.cpp

Changeset: d96c130c9399
Author:    brutisso
Date:      2012-01-30 05:08 -0800
URL:       http://hg.openjdk.java.net/hsx/hotspot-rt/hotspot/rev/d96c130c9399

Merge
Re: Request for Review: 7132199: sun/management/jmxremote/bootstrap/JvmstatCountersTest.java failing on all platforms
On 1/30/12 3:05 AM, Staffan Larsen wrote:

Please review the following fix.

Webrev: http://cr.openjdk.java.net/~sla/7132199/webrev.00/
Bug: http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7132199

The problem is that HotSpot will always create the .java_pid1234 socket/door file in /tmp (see CR 7009828). The JDK will currently look first in the current directory and then in java.io.tmpdir. If java.io.tmpdir has the default value of /tmp this works, but if the user has set it to something else it doesn't. My fix hardcodes /tmp in LinuxVirtualMachine.java and SolarisVirtualMachine.java. The same fix will be needed in BsdVirtualMachine.java eventually.

Please fix BsdVirtualMachine.java at the same time.

Dan

Thanks,
/Staffan
Re: RFR: 7133124 Remove redundant packages from JAR command line
Diverged and in transit to another planet. :^(

A push to a shared repo without verifying that it builds on all supported platforms is risky behavior, one that can consume needless resources finding out it doesn't build and, more importantly, waste your co-workers' time undoing it. We have the ability to prevent that, and we should.

And the JDK is not just a VM, and cross compilation does not address Windows or Solaris. This is a much more difficult situation than most people think it is. Yes, we can change things to simplify it and make it better, but don't kid yourself: this is more complex than most people think. Anyone that hasn't built the entire JDK on Windows, successfully, should try it.

-kto

On Jan 30, 2012, at 2:00 AM, Robert Ottenhag wrote: Dmitry, I think this discussion diverged somewhat from the original topic, but I do agree with you that we must also attack the problem on a process level. [...]
Re: Request for Review: 7132199: sun/management/jmxremote/bootstrap/JvmstatCountersTest.java failing on all platforms
On 2012-01-30 16:28, Staffan Larsen wrote:

2. If you decide to hardcode /tmp, please create a global constant for it.

I don't agree that this would make the code easier to read or maintain. I should, however, include a comment saying that the file is always in /tmp regardless of the value of java.io.tmpdir.

/tmp is common but not mandatory, especially if we speak about embedded systems. Native code should use the P_tmpdir constant from stdio.h rather than hardcoding /tmp. As we can't access it from Java, I recommend creating a global constant somewhere to reduce possible future porting effort.

Changing the tmpdir static would be a smaller fix, but all the cwd code would then remain. Yes, HotSpot never writes to cwd.

I agree with Staffan: the lookup of the socket/door file in cwd should be removed.

-Dmitry

--
Dmitry Samersoff
Java Hotspot development team, SPB04
* There will come soft rains ...
RE: RFR: 7133124 Remove redundant packages from JAR command line
Inline,

-----Original Message-----
From: Kelly O'Hair
Sent: Monday, January 30, 2012 6:27 PM
To: Robert Ottenhag
Cc: Dmitry Samersoff; serviceability-dev@openjdk.java.net; John Coomes; build-...@openjdk.java.net
Subject: Re: RFR: 7133124 Remove redundant packages from JAR command line

Diverged and in transit to another planet. :^( A push to a shared repo without verifying it builds on all supported platforms is risky behavior, and one that can consume needless resources finding out it doesn't build, and more importantly waste your co-worker's time undoing it. We have the ability to prevent that, and we should.

I totally agree. Passing suitable build and test requirements (check-in testing) is crucial (having 100+ developers waiting for a build fix is a _bad_ thing).

And the JDK is not just a VM, and cross compilation does not address Windows or Solaris.

It does not, but it can limit the number of build platforms that need to be maintained. See also my suggestion of an open JPRT queue and preconfigured virtual machines for build and test. Windows and Linux should cross-build 32-to-64-bit and 64-to-32-bit, and Solaris can be helped by a virtual box installation of Solaris/{i586,x64}.

This is a much more difficult situation than most people think it is. Yes we can change things to simplify it, make it better, but don't kid yourself, this is more complex than most people think.

True, but aiming for simpler and better is always beneficial.

Anyone that hasn't built the entire JDK on Windows, successfully, should try it.

I have ;-)

/Robert

-kto

On Jan 30, 2012, at 2:00 AM, Robert Ottenhag wrote: Dmitry, I think this discussion diverged somewhat from the original topic, but I do agree with you that we must also attack the problem on a process level. [...]
Re: RFR: 7133124 Remove redundant packages from JAR command line
On 01/30/2012 09:41 AM, Robert Ottenhag wrote:

A push to a shared repo without verifying it builds on all supported platforms is risky behavior, and one that can consume needless resources finding out it doesn't build, and more importantly waste your co-worker's time undoing it. We have the ability to prevent that, and we should.

I totally agree. Passing suitable build and test requirements (check-in testing) is crucial (having 100+ developers waiting for a build fix is a _bad_ thing).

We need to be careful here. We have the ability to totally overload any reasonable build system, and going through a full build and test for what are sometimes small changes may be impractical (having 100+ developers waiting for the build queue is also a _bad_ thing.) As Joe Darcy often points out, we need to be careful that we are not so afraid of doing anything bad that we prevent anything good from happening.

Note, I am /not/ saying we don't need a good build and test system. We do. We just need to be careful how we mandate its use.

-- Jon
Re: Request for Review: 7132199: sun/management/jmxremote/bootstrap/JvmstatCountersTest.java failing on all platforms
On 30 jan 2012, at 18:05, Daniel D. Daugherty wrote:

The same fix will be needed in BsdVirtualMachine.java eventually.

Please fix BsdVirtualMachine.java at the same time.

Yeah, I'd love to, but it's still in the osx-port repo and this fix is for jdk8 (and will then be backported to jdk7). I'll file a separate bug for BsdVirtualMachine.java and fix that, too.

/Staffan
Re: Request for Review: 7132199: sun/management/jmxremote/bootstrap/JvmstatCountersTest.java failing on all platforms
On 1/30/12 11:40 AM, Staffan Larsen wrote: On 30 jan 2012, at 18:05, Daniel D. Daugherty wrote:

The same fix will be needed in BsdVirtualMachine.java eventually.

Please fix BsdVirtualMachine.java at the same time.

Yeah, I'd love to, but it's still in the osx-port repo

I keep forgetting that the JDK side is still separate and that HSX is merged... one day we'll all be in the same place...

and this fix is for jdk8 (and will then be backported to jdk7). I'll file a separate bug for BsdVirtualMachine.java and fix that, too.

Sounds like a good plan.

Dan
Re: RFR: 7133124 Remove redundant packages from JAR command line
On Jan 30, 2012, at 10:02 AM, Jonathan Gibbons wrote:

We need to be careful here. We have the ability to totally overload any reasonable build system, and going through a full build and test for what are sometimes small changes may be impractical (having 100+ developers waiting for the build queue is also a _bad_ thing.) [...]

Totally agree, Jon. It's a balancing act; we do what makes sense and what gives us the best chance of keeping poison or broken changesets from getting into circulation. We cannot run all the tests all the time.

I also think that extremely low or no-risk changes need not follow this rule, but the problem is getting people to agree on what 'no risk changes' are. I've seen enough 'low risk' changes bring the house down that I'm on the paranoid side. :^(

-kto
Re: RFR: 7133124 Remove redundant packages from JAR command line
On 01/30/2012 03:41 PM, Kelly O'Hair wrote:

I also think that extremely low or no-risk changes need not follow this rule, but the problem is getting people to agree on what 'no risk changes' are. I've seen enough 'low risk' changes bring the house down that I'm on the paranoid side. :^(

How does the saying go: just because you're paranoid doesn't mean that folk won't break the build? ;-)

-- Jon
Re: Request for Review: 7132199: sun/management/jmxremote/bootstrap/JvmstatCountersTest.java failing on all platforms
On 31/01/2012 3:28 AM, Dmitry Samersoff wrote: On 2012-01-30 16:28, Staffan Larsen wrote: 2. If you decide to hardcode /tmp, please create a global constant for it. I don't agree that this would make the code easier to read or maintain. I should, however, include a comment saying that the file is always in /tmp regardless of the value of java.io.tmpdir. Staffan: I still think changing the static field tmpdir to refer to /tmp is cleaner than putting /tmp in all the use-sites. /tmp is common but not mandatory, especially if we speak about embedded systems. Dmitry: The point is that the VM will always put the file in /tmp. That's wrong, but the issue here is making the management Java code match the HotSpot code. Native code should use the P_tmpdir constant from stdio.h rather than hardcoding /tmp. As we can't access it from Java, I recommend creating a global constant somewhere to reduce possible future porting effort. Changing the tmpdir static would be a smaller fix, but all the cwd code would then remain. Yes, HotSpot never writes to cwd. I agree with Staffan that the lookup for the socket/door in cwd should be removed. Ok, if it is never needed then remove it. David -Dmitry
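For context, the file being discussed is the HotSpot attach file, which on Linux is created under /tmp regardless of java.io.tmpdir. A minimal sketch of the "single global constant" idea, written here in shell purely for illustration (the variable name and the pid are hypothetical, not from the actual patch):

```shell
# Sketch only: keep the directory in one named constant instead of
# repeating "/tmp" at every use-site. Names below are illustrative.
VM_TMP_DIR=/tmp                            # always /tmp, regardless of java.io.tmpdir
pid=12345                                  # hypothetical target VM process id
socket_file="$VM_TMP_DIR/.java_pid$pid"    # Linux attach socket naming scheme
echo "$socket_file"
```

The same idea in the management Java code would be a single static constant that every lookup site references, so a future port only has to change one definition.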
Re: Patch to fix build breakage with GCC 4.7
Hi Deepak, The primary change here is a build change, so I've cc'ed build-dev. The majority of the changes are to JVMTI demo files, hence I've cc'd serviceability-dev. I think JDK8-dev doesn't need to be included now, so I've bcc'd it. While gcc compilation on SPARC is rare, I'm not sure that simply deleting the SPARC-only option unconditionally is the right thing to do. David On 31/01/2012 1:20 AM, Deepak Bhole wrote: Hi, JDK builds currently fail with GCC 4.7 due to its stricter option checking. GCC 4.6 and prior ignored invalid options -- GCC 4.7 does not. Certain files in the JDK supply the -mimpure-text option to GCC. This option is only valid on SPARC[1,2]. As a result, GCC 4.7 throws an error during the build on Linux. This patch removes the option: http://cr.openjdk.java.net/~dbhole/GCC-4.7-JDK8.00 1: http://gcc.gnu.org/onlinedocs/gcc-3.3.6/gcc/SPARC-Options.html 2: http://gcc.gnu.org/onlinedocs/gcc-3.3.6/gcc/i386-and-x86_002d64-Options.html If OK for push, please feel free to do so (I don't have commit access). Cheers, Deepak
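One way to address David's concern, rather than deleting the flag outright, would be for the build to probe whether the compiler in use actually accepts it. This is a hypothetical sketch of that approach, not what the posted patch does:

```shell
# Hypothetical alternative to removing -mimpure-text unconditionally:
# only use the flag if a trial compile with it succeeds. Per the thread,
# GCC 4.6 silently ignored the invalid option on x86; GCC 4.7 errors out,
# so the probe yields an empty flag there but keeps it on SPARC.
IMPURE_TEXT_FLAG=
if echo 'int main(void){return 0;}' | \
   gcc -mimpure-text -x c -o /dev/null - 2>/dev/null; then
    IMPURE_TEXT_FLAG=-mimpure-text
fi
echo "using flag: '$IMPURE_TEXT_FLAG'"
```

The same probe could equally be expressed as a makefile conditional keyed on the target architecture; the shell form above just shows the idea compactly.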
Re: RFR: 7133124 Remove redundant packages from JAR command line
On Jan 30, 2012, at 9:41 AM, Robert Ottenhag wrote: Inline, -Original Message- From: Kelly O'Hair Sent: Monday, January 30, 2012 6:27 PM To: Robert Ottenhag Cc: Dmitry Samersoff; serviceability-dev@openjdk.java.net; John Coomes; build-...@openjdk.java.net Subject: Re: RFR: 7133124 Remove redundant packages from JAR command line Diverged and in transit to another planet. :^( A push to a shared repo without verifying it builds on all supported platforms is risky behavior, and one that can consume needless resources finding out it doesn't build, and more importantly waste your co-workers' time undoing it. We have the ability to prevent that, and we should. I totally agree. Passing suitable build and test requirements (check-in testing) is crucial (having 100+ developers waiting for a build fix is a _bad_ thing). And the JDK is not just a VM, and cross compilation does not address Windows or Solaris. It does not, but it can limit the number of build platforms that need to be maintained. See also my suggestion on an open JRPT queue and preconfigured virtual machines for build and test. Windows and Linux should cross build 32_to_64bit and 64_to_32bit, and Solaris can be helped by a virtual box installation of Solaris/{i586,x64}. 32 to 64 may be a problem. As much as I have tried to keep the build just a build, we continue to run what we build in order to create the build. This will happen more going forward unfortunately, impacting our ability to cross compile on platforms that can't run what you are building. The current thinking is that an initial build for the host would be done first, and then that would be used to create the cross compiled build. But that's all up in the air; it depends on when the jigsaw and jdk modularization work starts showing up in jdk8. In general Solaris has always been an x64 system where you build both. There really are no 32-bit Solaris systems in use anymore.
I'd like to change it so that there is just one pass, and both 32bit and 64bit native pieces are created at the same time. Ditto for Linux and Windows, e.g. you just build Windows, or Linux, or Solaris. But if you are limited, i.e. running on a 32bit system, you only build 32bit. I'm of the opinion that most desktop systems will be 64bit, or should be. This is a much more difficult situation than most people think it is. Yes we can change things to simplify it, make it better, but don't kid yourself, this is more complex than most people think. True, but aiming for simpler and better is always beneficial. Simpler is better, I agree. Anyone who hasn't built the entire JDK on Windows, successfully, should try it. I have ;-) I did not know you had the jdk windows merit badge, congrats, or is it 'my sympathies'. ;^) -kto /Robert -kto On Jan 30, 2012, at 2:00 AM, Robert Ottenhag wrote: Dmitry, I think this discussion diverged somewhat from the original topic, but I do agree with you that we must also attack the problem on a process level. With the model you propose (and also the existing model) I would also like to stress the need for continuous and automatic builds triggered by incoming new changes compared to the last working change. Having that, it is possible to update labels (tags) for last_clean_build, last_nightly_build, etc. That way any build breakage would only be visible at the tip. However, when submitting a new change care should be taken to do it against a working tip, so that it builds and tests correctly (personal check-in testing). Actually, this is close to the model we had for the JRockit source base. WLS also uses the model of sliding labels for last_clean_build, where developers most often only do partial builds themselves. Regarding external committers, I think they need both the option of building and testing locally, as well as access to an OpenJDK specific queue for build and test submissions.
By making use of cross compilation and open tools it is possible to at least verify that the product builds locally. Even better is to also supply preconfigured VMs with the necessary standard build and test environment (e.g. obsolete Solaris or Linux distros that we require). In general, having a build and test setup that is automatically configurable (including Windows) will help both internal and external developers (see also the build-infra project). /Robert On 01/30/2012 10:09 AM, Dmitry Samersoff wrote: John, Actually the goal of my letter is not to promote a new integration scheme, just to remind us that we need to put some effort into internal process review and optimization. But, see answers below (inline): The integration method I mentioned is often used in open source projects, because it doesn't require any special infrastructure for external committers. The only thing necessary for a safe commit is write access to the integration (-gate) workspace. On 2012-01-30 06:35,