Hi Jaikiran

On Jun 30, 2021, at 12:15 PM, Jaikiran Pai <jai.forums2...@gmail.com> wrote:

Hello Lance,

On 29/06/21 11:31 pm, Lance Andersen wrote:

I ran your current test 150 times and the max runtime was 25 seconds; most 
cases were in the 18-20 second range on our slower test boxes.

Thank you for running those tests. Do you think those timings are good enough 
to let that test stay as a regular automated jtreg test, in tier1? I'm guessing 
this falls in tier1? I haven't yet looked in detail at the tier definitions of 
the build.

These tests run as part of tier2.

The time for the test run is reasonable.


As part of looking at what happens with a file whose deflated size is > 2 GB, I 
would add a specific manual test to validate that there is no issue when we 
cross the 2 GB threshold.

I added a (manual) test to see what happens in this case. I have committed the 
test as part of this PR just for the sake of reference. The test is named 
LargeCompressedEntrySizeTest. The test uses ZipFS to create a (new) zip file 
and attempts to write out a zip entry whose deflated/compressed size is 
potentially greater than 2 GB. When I run this test case, I consistently run 
into the following exception:

test LargeCompressedEntrySizeTest.testLargeCompressedSizeWithZipFS(): failure
java.lang.OutOfMemoryError: Required array length 2147483639 + 419 is too large
    at java.base/jdk.internal.util.ArraysSupport.hugeLength(ArraysSupport.java:649)
    at java.base/jdk.internal.util.ArraysSupport.newLength(ArraysSupport.java:642)
    at java.base/java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:100)
    at java.base/java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:130)
    at java.base/java.util.zip.DeflaterOutputStream.deflate(DeflaterOutputStream.java:252)
    at java.base/java.util.zip.DeflaterOutputStream.write(DeflaterOutputStream.java:210)
    at jdk.zipfs/jdk.nio.zipfs.ZipFileSystem$DeflatingEntryOutputStream.write(ZipFileSystem.java:2016)
    at java.base/java.io.FilterOutputStream.write(FilterOutputStream.java:108)
    at LargeCompressedEntrySizeTest.testLargeCompressedSizeWithZipFS(LargeCompressedEntrySizeTest.java:104)

which to me is understandable, since the deflated bytes are buffered in a 
ByteArrayOutputStream whose backing byte[] cannot grow beyond (roughly) 
Integer.MAX_VALUE bytes. Is this what you and Alan wanted tested/checked? 
In its current form I don't see a way to write out an entry whose deflated size 
exceeds 2 GB, unless the user/caller uses the "useTempFile=true" option while 
creating the zip filesystem. FWIW, if I do set "useTempFile=true" while 
creating that zip filesystem in LargeCompressedEntrySizeTest, the test 
passes fine and the underlying zip that is created shows a compressed/deflated 
size as follows (a rough sketch of what the test does is included after the 
listing):

unzip -lv JTwork/scratch/8190753-test-compressed-size.zip
Archive:  JTwork/scratch/8190753-test-compressed-size.zip
 Length   Method    Size  Cmpr    Date    Time   CRC-32   Name
--------  ------  ------- ---- ---------- ----- --------  ----
2147483649  Defl:N 2148138719   0% 06-30-2021 21:39 52cab9f8 LargeZipEntry.txt
--------          -------  ---                            -------
2147483649         2148138719   0%                            1 file
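
For reference, here is a rough sketch of what the manual test does. This is 
illustrative only, not the exact code committed in the PR; the entry name and 
target size are taken from the listing above, everything else is approximated:

    import java.io.OutputStream;
    import java.net.URI;
    import java.nio.file.FileSystem;
    import java.nio.file.FileSystems;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Map;
    import java.util.Random;

    public class LargeCompressedEntrySketch {
        public static void main(String[] args) throws Exception {
            Path zip = Path.of("8190753-test-compressed-size.zip");
            Files.deleteIfExists(zip);
            // "create" asks ZipFS to create a new zip file; "useTempFile" makes it
            // buffer entry contents in a temp file instead of a ByteArrayOutputStream
            Map<String, Object> env = Map.of("create", "true", "useTempFile", true);
            URI uri = URI.create("jar:" + zip.toUri());
            try (FileSystem zipfs = FileSystems.newFileSystem(uri, env)) {
                Path entry = zipfs.getPath("LargeZipEntry.txt");
                byte[] chunk = new byte[1024 * 1024];
                // random data does not compress, so the deflated size ends up
                // slightly larger than the input, as seen in the unzip listing
                new Random().nextBytes(chunk);
                long target = 2L * 1024 * 1024 * 1024 + 1; // 2147483649 bytes, just over 2 GB
                try (OutputStream os = Files.newOutputStream(entry)) {
                    long written = 0;
                    while (written < target) {
                        int len = (int) Math.min(chunk.length, target - written);
                        os.write(chunk, 0, len);
                        written += len;
                    }
                }
            }
        }
    }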


I understand that Alan's suggestion still holds: we should have some logic in 
place which switches to using a temp file once we notice that the sizes we are 
dealing with can exceed some threshold. But I guess that is something we need 
to do separately, outside of this PR?

Yes, the intent would be to add some logic, which might need to be under a 
property (for now), to specify the size at which to switch from a BAOS to a 
temp file. Having the value configurable via a property would give us some 
flexibility for experimentation.
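
To make that concrete, here is a rough sketch of the idea (nothing here is from 
the PR; the property name jdk.nio.zipfs.tempFileThreshold and the class are 
hypothetical): an output stream that buffers in memory until a configurable 
threshold is crossed and then spills to a temp file.

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;

    /**
     * Buffers written bytes in memory until a configurable threshold is reached,
     * then copies what has been written so far to a temp file and keeps writing there.
     */
    class ThresholdSpillingOutputStream extends OutputStream {
        // Hypothetical property name, purely for illustration.
        private static final long THRESHOLD =
                Long.getLong("jdk.nio.zipfs.tempFileThreshold", 10 * 1024 * 1024);

        private final ByteArrayOutputStream memory = new ByteArrayOutputStream();
        private OutputStream fileOut;   // non-null once we have spilled to disk
        private Path tempFile;
        private long count;

        @Override
        public void write(int b) throws IOException {
            write(new byte[] { (byte) b }, 0, 1);
        }

        @Override
        public void write(byte[] b, int off, int len) throws IOException {
            if (fileOut == null && count + len > THRESHOLD) {
                spillToTempFile();
            }
            current().write(b, off, len);
            count += len;
        }

        private OutputStream current() {
            return fileOut != null ? fileOut : memory;
        }

        private void spillToTempFile() throws IOException {
            tempFile = Files.createTempFile("zipfs-entry", ".tmp");
            fileOut = Files.newOutputStream(tempFile);
            memory.writeTo(fileOut);   // copy the bytes already buffered in memory
        }

        @Override
        public void close() throws IOException {
            if (fileOut != null) {
                fileOut.close();
            }
            // The caller would then read back either memory.toByteArray() or tempFile.
        }
    }

ZipFileSystem's DeflatingEntryOutputStream could then deflate into something 
along these lines instead of a plain ByteArrayOutputStream, with the threshold 
tunable via the property while we experiment.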

I don’t see why this PR could not be used for this (as it would provide a more 
complete solution)

Best
Lance

-Jaikiran




Lance Andersen | Principal Member of Technical Staff | +1.781.442.2037
Oracle Java Engineering
1 Network Drive
Burlington, MA 01803
lance.ander...@oracle.com


