DJ Lucas wrote:

>> The number of hotspot and langtools failures are the same as DJ's, but I
>> got 136 failures in the jdk where DJ only got 78.  In particular, I got
>> 28 failures in com/sun/jdi/ where DJ got none.

> Hmm...JDI is the debug interface. I don't have an immediate explanation 
> for that. Do you have traditional debugging tools available in the 
> environment? They should not have an effect, but at the time I tested, I 
> did not have them available.

What tools would those be?  I do not have gdb installed.  I did have all 
of the dependencies listed for 1.9.7:  alsa-lib-1.0.25, Cups-1.5.0, 
gtk+-2.24.10, Xorg Libraries, xulrunner, apache-ant-1.8.3, UnZip-6.0, 
which-2.20, and Zip-3.0.  cpio is required too.

BTW, I think apache-ant is required, not optional.  At least I didn't 
see a way around it.  There is a circular dependency there.  My order was:

Tue May 22 11:21:21 2012 /usr/src/junit/junit4.10.zip
Tue May 22 11:24:56 2012 /usr/src/apache-ant/apache-ant-1.8.3-src.tar.bz2
Tue May 22 11:33:17 2012 /usr/src/cpio/cpio-2.11.tar.bz2
Tue May 22 11:38:15 2012 /usr/src/giflib/giflib-4.1.6.tar.bz2
Tue May 22 11:45:30 2012 /usr/src/cups/cups-1.5.0-source.tar.bz2
Tue May 22 20:09:25 2012 /usr/src/nspr/nspr-4.9.tar.gz
Tue May 22 20:17:32 2012 /usr/src/nss/nss-3.13.4.tar.gz
Wed May 23 02:13:58 2012 /usr/src/icedtea/icedtea-2.1.tar.gz
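
That order can be sketched as shell commands (package versions are the 
ones above; the configure options, install prefixes, and the ant 
build.sh property are assumptions, not tested commands):

```shell
# Break the junit/ant circle: unpack junit first so ant's build can
# see it, then bootstrap ant, then the remaining deps in order.
unzip /usr/src/junit/junit4.10.zip -d /opt/junit
tar -xjf /usr/src/apache-ant/apache-ant-1.8.3-src.tar.bz2
(cd apache-ant-1.8.3 && ./build.sh -Ddist.dir=/opt/ant dist)
for pkg in cpio-2.11 giflib-4.1.6; do
    tar -xjf /usr/src/${pkg%%-*}/$pkg.tar.bz2
    (cd $pkg && ./configure --prefix=/usr && make && make install)
done
# cups, nspr, nss, and finally icedtea follow the same pattern.
```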

Looking at configure, there seem to be several dependencies we don't 
mention:  gcj, openssl, zlib, jpeg, gif, lcms, gtk, gio, fontconfig, but 
some of those are installed previously for xorg, etc.


>> I also got 29 failures
>> in java/awt/ where DJ got 12.
> 
> This could be due to the window manager. I always sit and watch it 
> through the tests. There were a couple where my login window was to 
> blame when it was trying to iconify and restore windows. Also, root or 
> unprivileged user? My results were obtained as an unprivileged user, 
> using TWM on a fairly minimal system (just enough to meet required and 
> recommended deps).

I was using the Xfce WM and was running as non-root.  I did watch a lot 
of the tests, but I didn't see what I could have done to change the 
outcome of any particular test.

> On a side note, 2.2 is scheduled for release next Wednesday (5/30). I 
> don't anticipate any changes in the options or dependencies. 2.2 will 
> basically be the same as the Oracle 7u4 with some of the security 
> updates that are slotted for 7u5, along with the typical build fixes for 
> newer system software and the closed parts of the software replaced by 
> free software (easily arguable as _better_ replacements (pulse for the 
> old sgi audio for example)). Basically, just disregard the upstream 
> fixes patch and build as before. Existing 2.1 builds should work fine as 
> a bootstrap compiler, however, we'll be pulling the downloads from git 
> again (make download if you have wget installed, the all target will 
> also get them for you in one step).
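
If it helps, the procedure DJ describes can be sketched roughly like 
this (only "make download" and the "all" target are from his message; 
the configure invocation and the bootstrap-JDK path are my assumptions):

```shell
# Unpack icedtea-2.2, point it at an existing 2.1 build as the
# bootstrap compiler, and let the build fetch the drops itself.
tar -xzf icedtea-2.2.tar.gz && cd icedtea-2.2
./configure --with-jdk-home=/opt/icedtea-2.1  # path is a placeholder
make download   # needs wget; pulls the source drops from git
make all        # the all target would also fetch them in one step
```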

I think we would be better off waiting, then.

>> The open question is how we should address the test failures in the book.

> Unfortunately, until I hear back from the other maintainers on their 
> test suite results, I really have nothing to go on. I know one of the 
> core ITW developers had mentioned that he was working on a public repo 
> for test results in private message, but as I understand it, he has 
> about as much free time as I do lately. :-) 

A public results repo similar to what gcc has would be good.

> Perhaps I should just 
> install Fedora or Debian and build from their respective sources to make 
> a meaningful comparison. In the grand scheme of things, the numbers are 
> small in comparison to the number of tests, and not that different from 
> what is in the book now which was deemed OK previously (by me using 
> comparison with the other distros)...at least once the mystery is solved 
> about the JDI tests. 

I'm not sure I agree that a 3% failure rate for the jdk portion of the 
tests is good.  I will note that, over the years, gcc has gotten 
steadily better about its test results.

> Maybe we should consider dropping the -samevm flag

How is that done?

> as one failure could conceivably cause a whole group of tests to fail if 
> the running environment gets trashed. The downside to that is that a new 
> VM is created for each test, which has a rather obvious effect on the 
> wall clock.
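
To answer my own -samevm question above: it is a jtreg option, so 
dropping it should just mean omitting the flag (or passing -othervm 
explicitly) when the harness is invoked. A hypothetical invocation, with 
placeholder paths:

```shell
# -samevm runs every test in one shared JVM: fast, but a test that
# corrupts the VM state can take a whole group of later tests with it.
# -othervm forks a fresh JVM per test, isolating failures at the
# cost of wall-clock time.
jtreg -verbose:summary -othervm \
      -jdk:/opt/icedtea/j2sdk-image \
      openjdk/jdk/test/com/sun/jdi
```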

   -- Bruce
-- 
http://linuxfromscratch.org/mailman/listinfo/blfs-dev
FAQ: http://www.linuxfromscratch.org/blfs/faq.html
Unsubscribe: See the above information page
