[ https://issues.apache.org/jira/browse/COMPRESS-618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17524315#comment-17524315 ]

Luís Filipe Nassif commented on COMPRESS-618:
---------------------------------------------

Please also check this: [https://github.com/sepinf-inc/IPED/issues/1068]

 

Synchronizing on ZipFile.getInputStream(ze) works with split archives and 
ignoreLocalFileHeader == true only if the ZipSplitReadOnlySeekableByteChannel 
passed to the constructor is used as the lock; otherwise, reading the returned 
input streams concurrently with other getInputStream() calls still results in 
concurrency issues.
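The point about using the shared channel as the lock can be sketched with plain JDK types (no Commons Compress dependency; SharedChannelRead and readAt are illustrative names, not Commons Compress or IPED API). A SeekableByteChannel has a single position, so position() + read() must form one atomic unit guarded by the same lock object that every other reader uses:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SeekableByteChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SharedChannelRead {

    // Read `len` bytes at absolute offset `pos`, holding the channel itself
    // as the lock. This mirrors the workaround described above: every reader
    // must synchronize on the SAME object used by concurrent
    // getInputStream() calls, or the shared position races between threads.
    static byte[] readAt(SeekableByteChannel ch, long pos, int len) throws IOException {
        ByteBuffer buf = ByteBuffer.allocate(len);
        synchronized (ch) {           // shared lock: the channel instance
            ch.position(pos);         // position + read as one atomic unit
            while (buf.hasRemaining() && ch.read(buf) >= 0) { }
        }
        return buf.array();
    }

    public static void main(String[] args) throws Exception {
        // Build a 64 KiB file with a predictable byte pattern.
        Path tmp = Files.createTempFile("chan", ".bin");
        byte[] data = new byte[64 * 1024];
        for (int i = 0; i < data.length; i++) {
            data[i] = (byte) (i % 251);
        }
        Files.write(tmp, data);

        try (SeekableByteChannel ch = Files.newByteChannel(tmp, StandardOpenOption.READ)) {
            // Eight threads hammer different offsets of the one shared channel.
            ExecutorService pool = Executors.newFixedThreadPool(8);
            List<Future<Boolean>> results = new ArrayList<>();
            for (int t = 0; t < 8; t++) {
                final int off = t * 4096;
                results.add(pool.submit(() -> {
                    for (int i = 0; i < 200; i++) {
                        byte[] got = readAt(ch, off, 4096);
                        for (int j = 0; j < got.length; j++) {
                            if (got[j] != (byte) ((off + j) % 251)) {
                                return false;   // corrupted read: lock failed
                            }
                        }
                    }
                    return true;
                }));
            }
            for (Future<Boolean> f : results) {
                System.out.println(f.get());
            }
            pool.shutdown();
        } finally {
            Files.delete(tmp);
        }
    }
}
```

Prints "true" once per thread when every read came back intact; dropping the synchronized block makes the shared position race and corrupts reads. The same reasoning explains the report: a lock held only inside getInputStream() does not cover later reads from the returned streams, so the lock object must be the channel shared by all of them.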

> Make ZipFile thread safe
> ------------------------
>
>                 Key: COMPRESS-618
>                 URL: https://issues.apache.org/jira/browse/COMPRESS-618
>             Project: Commons Compress
>          Issue Type: Improvement
>          Components: Archivers
>    Affects Versions: 1.21
>            Reporter: Luís Filipe Nassif
>            Priority: Major
>
> Sorry if another issue already exists for this, I couldn't find one. When I 
> enabled the ignoreLocalFileHeader constructor flag, I started to get 
> BufferOverflowException/BufferUnderflowException while calling 
> ZipFile.getInputStream(ZipArchiveEntry). Looking at the code, there are some 
> local byte[] and ByteBuffer instances reused by private methods that are 
> likely the cause of those exceptions. Everything seems fine if 
> ignoreLocalFileHeader is false, but I need to open very large zip files as 
> fast as possible. Synchronizing on ZipFile.getInputStream(ze) seems to work 
> around the problem. I don't actually know whether ZipFile was designed to be 
> thread safe, but it would be nice if it were. I also saw some synchronization 
> code in the BoundedSeekableByteChannelInputStream class, so I'm not sure 
> whether this issue is an enhancement or a bug report...



--
This message was sent by Atlassian Jira
(v8.20.7#820007)
