On 01/05/12 08:30 AM, Alan Bateman wrote:
On 23/12/2011 16:19, Gary Adams wrote:
The LargeFileAvailable regression test had intermittent failures
when there was not sufficient space available to create
a 7G temp file. This webrev presents a simple check to
see if the available usable space is less than 7G and
scales the test file size back accordingly.
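Roughly along these lines (a simplified sketch, not the exact webrev code;
the halving factor and the use of java.io.tmpdir here are only illustrative):

    import java.io.File;

    public class SpaceCheckSketch {
        // The size the test normally tries to create (7G)
        static final long LARGE_FILE_SIZE = 7L * 1024 * 1024 * 1024;

        public static void main(String[] args) {
            File dir = new File(System.getProperty("java.io.tmpdir"));
            long usable = dir.getUsableSpace();

            // Scale the file size back when the directory cannot hold 7G
            long fileSize = (usable < LARGE_FILE_SIZE) ? usable / 2
                                                       : LARGE_FILE_SIZE;
            System.out.println("Will create a " + fileSize
                               + " byte file in " + dir);
        }
    }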
The original bug report suggests that the test be switched
to use the current working directory rather than a temp
file. I think that could be the wrong choice for an embedded
system that might have the tests mounted from a remote
file system. In that scenario, using the local temp file
space provides a better solution for what this test is designed
to check.
http://cr.openjdk.java.net/~gadams/7030573/
The only thing is that when the test is scaled back too much, it no longer
tests the original issue. This test will create a sparse file on file systems
that support it, and I suspect the reason it fails on Solaris is that /tmp is
backed by swap. It might be better if we changed the test to create the file
in the current directory (or a sub-directory of it). It will be removed by jtreg
if the test doesn't delete it.
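Something like the following sketch is what I have in mind (the file name
and the use of RandomAccessFile.setLength are illustrative, not necessarily
what the test does today):

    import java.io.File;
    import java.io.IOException;
    import java.io.RandomAccessFile;

    public class SparseFileSketch {
        public static void main(String[] args) throws IOException {
            // jtreg runs tests with the scratch directory as the current
            // directory, so a relative path lands there and jtreg will
            // remove the file if the test doesn't.
            File largeFile = new File("largefile.dat");
            try (RandomAccessFile raf = new RandomAccessFile(largeFile, "rw")) {
                // setLength extends the file without writing data blocks,
                // so file systems that support it create a sparse file.
                raf.setLength(7L * 1024 * 1024 * 1024);
            }
            System.out.println("created " + largeFile.length() + " bytes");
            largeFile.delete();
        }
    }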
-Alan
I've updated the webrev with the temp file created in the current directory.
I'm not sure what to do about the case where only a little space is available,
so only a small file will be used. Should the test fail and force the test operator
to create a new test environment where 7G of space is available?
I lean toward allowing the test to pass using the space that is available.
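To make the two options concrete, this is the decision point I have in mind
(a sketch only; the names and the failure message are placeholders):

    import java.io.File;

    public class SizePolicySketch {
        static final long LARGE_FILE_SIZE = 7L * 1024 * 1024 * 1024;

        // Decide how big a file the test should create in the given directory.
        static long chooseFileSize(File dir) {
            long usable = dir.getUsableSpace();
            if (usable >= LARGE_FILE_SIZE)
                return LARGE_FILE_SIZE;

            // Option 1: fail and make the operator provide a bigger test area
            //   throw new RuntimeException("Need 7G of free space, only "
            //                              + usable + " bytes available");

            // Option 2 (my preference): run with whatever space is available
            return usable / 2;
        }

        public static void main(String[] args) {
            System.out.println(chooseFileSize(new File(".")) + " bytes");
        }
    }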