[ 
https://issues.apache.org/jira/browse/COMPRESS-639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17680072#comment-17680072
 ] 

Gary D. Gregory edited comment on COMPRESS-639 at 1/24/23 1:54 AM:
-------------------------------------------------------------------

What looks worrisome is that 
{{org.apache.commons.compress.archivers.zip.ZipArchiveEntry}} breaks the 
{{hashCode()}} / {{equals()}} contract.

If I implement the contract in the standard style (using all fields for both 
methods), the test passes sometimes. Needs more study...
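
For reference, "standard style" here means deriving both methods from the same set of 
fields; a minimal, self-contained sketch with an illustrative class and fields (not the 
actual {{ZipArchiveEntry}} implementation):
{code:java}
import java.util.Objects;

// Sketch only: equals() and hashCode() use exactly the same fields,
// so equal objects always produce equal hash codes.
public final class Example {
    private final String name;
    private final long time;

    public Example(final String name, final long time) {
        this.name = name;
        this.time = time;
    }

    @Override
    public boolean equals(final Object obj) {
        if (this == obj) {
            return true;
        }
        if (!(obj instanceof Example)) {
            return false;
        }
        final Example other = (Example) obj;
        return time == other.time && Objects.equals(name, other.name);
    }

    @Override
    public int hashCode() {
        // Same fields as equals(), keeping the contract intact.
        return Objects.hash(name, time);
    }
}
{code}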


was (Author: garydgregory):
What looks worrisome is that 
{{org.apache.commons.compress.archivers.zip.ZipArchiveEntry}} breaks the 
{{hashCode()}} / {{equals()}} contract.

If I implement the contract in the standard style (using all fields for both 
methods), the test passes. Needs more study...

> NullPointerException when adding multiple files with the same path with 
> Zip64Mode
> ---------------------------------------------------------------------------------
>
>                 Key: COMPRESS-639
>                 URL: https://issues.apache.org/jira/browse/COMPRESS-639
>             Project: Commons Compress
>          Issue Type: New Feature
>          Components: Archivers, Compressors
>    Affects Versions: 1.21, 1.22
>         Environment: Tested on MacBook Pro 2019 (2.6 GHz 6-Core Intel Core 
> i7, 32GB DDR4) 
> MacOS 13.1
> JDK 11.0.13
> Tested with commons-compress 1.21, 1.22 and 1.23-SNAPSHOT
>            Reporter: Andrew Gawron
>            Priority: Major
>
> Crash when adding two zip entries with the same name to a large archive.
> After investigating, we found that ZipArchiveOutputStream has a race condition. 
> When two entries with the same entry name are added, each entry is appended to the 
> _entries_ LinkedList and is also put into the _metaData_ HashMap. If the 
> modification time (the {_}race condition{_} here), the name, and the other 
> parameters are identical, metaData is not updated for the second entry. When 
> createCentralFileHeader later iterates over _entries_, the first entry is found in 
> the metaData key set and is then modified by adding extra fields. When the second 
> entry subsequently tries to find its metadata, the lookup fails because the 
> metaData key has been changed.
> Potential solution: container keys should be immutable and should not be modified 
> after being added to the container.
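> To illustrate why mutating a key after insertion breaks a HashMap lookup, here is a 
> minimal, self-contained sketch (added for clarity; it is not the Commons Compress code):
> {code:java}
> import java.util.HashMap;
> import java.util.Map;
> import java.util.Objects;
> 
> public class MutableKeyDemo {
>     // A key whose hashCode()/equals() depend on mutable state, similar in spirit
>     // to a ZipArchiveEntry that is modified after being used as a map key.
>     static final class Key {
>         String comment;
> 
>         Key(final String comment) {
>             this.comment = comment;
>         }
> 
>         @Override
>         public boolean equals(final Object obj) {
>             return obj instanceof Key && Objects.equals(comment, ((Key) obj).comment);
>         }
> 
>         @Override
>         public int hashCode() {
>             return Objects.hashCode(comment);
>         }
>     }
> 
>     public static void main(final String[] args) {
>         final Map<Key, String> metaData = new HashMap<>();
>         final Key key = new Key("original");
>         metaData.put(key, "central directory data");
> 
>         key.comment = "modified after insertion"; // mutate the key in place
> 
>         // The entry is still in the map, but it was stored under the old hash code,
>         // so the lookup no longer finds it.
>         System.out.println(metaData.get(key)); // prints: null
>     }
> }
> {code}
> In ZipArchiveOutputStream the mutation happens when extra fields are added to an entry 
> that is already serving as a metaData key.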
> Sample code that triggers the exception:
> {code:java}
> @Test
> public void shouldThrowDueToRaceConditionInZipArchiveOutputStream() throws IOException, ExecutionException, InterruptedException {
>     var testOutputStream = new ByteArrayOutputStream();
>     String fileContent = "A";
>     final int NUM_OF_FILES = 100;
>     var inputStreams = new LinkedList<InputStream>();
>     for (int i = 0; i < NUM_OF_FILES; i++) {
>         inputStreams.add(new ByteArrayInputStream(fileContent.getBytes(StandardCharsets.UTF_8)));
>     }
>     var zipCreator = new ParallelScatterZipCreator();
>     var zipArchiveOutputStream = new ZipArchiveOutputStream(testOutputStream);
>     zipArchiveOutputStream.setUseZip64(Zip64Mode.Always);
>     for (int i = 0; i < inputStreams.size(); i++) {
>         ZipArchiveEntry zipArchiveEntry = new ZipArchiveEntry("./dir/myfile.txt");
>         zipArchiveEntry.setMethod(ZipEntry.DEFLATED);
>         final var inputStream = inputStreams.get(i);
>         zipCreator.addArchiveEntry(zipArchiveEntry, () -> inputStream);
>     }
>     zipCreator.writeTo(zipArchiveOutputStream);
>     zipArchiveOutputStream.close(); // it will throw NullPointerException here
> }
> {code}
> Exception:
> {code:java}
> java.lang.NullPointerException
>     at org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream$EntryMetaData.access$800(ZipArchiveOutputStream.java:1998)
>     at org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.createCentralFileHeader(ZipArchiveOutputStream.java:1356)
>     at org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.writeCentralDirectoryInChunks(ZipArchiveOutputStream.java:580)
>     at org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.finish(ZipArchiveOutputStream.java:546)
>     at org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.close(ZipArchiveOutputStream.java:1090)
>     at com.xxx.yyy.impl.backuprestore.backup.container.StreamZipWriterTest.shouldThrowDueToRaceConditionInZipArchiveOutputStream(StreamZipWriterTest.java:130)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>     at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>     at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>     at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>     at org.mockito.internal.runners.DefaultInternalRunner$1$1.evaluate(DefaultInternalRunner.java:55)
>     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>     at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>     at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>     at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>     at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>     at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>     at org.mockito.internal.runners.DefaultInternalRunner$1.run(DefaultInternalRunner.java:100)
>     at org.mockito.internal.runners.DefaultInternalRunner.run(DefaultInternalRunner.java:107)
>     at org.mockito.internal.runners.StrictRunner.run(StrictRunner.java:41)
>     at org.mockito.junit.MockitoJUnitRunner.run(MockitoJUnitRunner.java:163)
>     at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>     at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:69)
>     at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33)
>     at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:235)
>     at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:54)
> {code}
> Workaround:
> Set a unique comment on each entry so that every entry stays distinct 
> (ZipArchiveEntry#setComment, inherited from java.util.zip.ZipEntry).
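> A minimal sketch of the workaround, adapting the entry loop from the sample test above 
> (added for illustration):
> {code:java}
> // Sketch only: give every entry a distinct comment so that two entries with the
> // same name no longer compare equal and therefore keep separate metaData keys.
> for (int i = 0; i < inputStreams.size(); i++) {
>     ZipArchiveEntry zipArchiveEntry = new ZipArchiveEntry("./dir/myfile.txt");
>     zipArchiveEntry.setMethod(ZipEntry.DEFLATED);
>     zipArchiveEntry.setComment("entry-" + i); // unique per entry
>     final var inputStream = inputStreams.get(i);
>     zipCreator.addArchiveEntry(zipArchiveEntry, () -> inputStream);
> }
> {code}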



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
