[jira] [Commented] (COMPRESS-199) Introduction of XZ breaks OSGi support

2012-12-19 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/COMPRESS-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13536662#comment-13536662
 ] 

Jukka Zitting commented on COMPRESS-199:


The following patch implements an alternative solution by marking the 
org.tukaani.xz package import as optional in the OSGi bundle manifest.

{code}
diff --git a/pom.xml b/pom.xml
index 110e618..f51d628 100644
--- a/pom.xml
+++ b/pom.xml
@@ -151,6 +151,15 @@ These include: bzip2, gzip, pack200, xz and ar, cpio, jar,
         </archive>
       </configuration>
     </plugin>
+    <plugin>
+      <groupId>org.apache.felix</groupId>
+      <artifactId>maven-bundle-plugin</artifactId>
+      <configuration>
+        <instructions>
+          <Import-Package>org.tukaani.xz;resolution:=optional</Import-Package>
+        </instructions>
+      </configuration>
+    </plugin>
   </plugins>
 </build>
{code}

 Introduction of XZ breaks OSGi support
 --

 Key: COMPRESS-199
 URL: https://issues.apache.org/jira/browse/COMPRESS-199
 Project: Commons Compress
  Issue Type: Bug
  Components: Compressors
Affects Versions: 1.4.1
 Environment: Windows Vista & RHEL 6.2, Java 1.6.0_33, Equinox 
 org.eclipse.osgi_3.7.2.v20120110-1415.jar.
Reporter: Niklas Gertoft
  Labels: osgi

 The introduction of XZ seems to break the OSGi support for the compress 
 bundle.
 The XZ component doesn't seem to be included or referred to (dependency).
 !ENTRY org.apache.commons.compress 4 0 2012-08-20 17:06:19.339
 !MESSAGE FrameworkEvent ERROR
 !STACK 0
 org.osgi.framework.BundleException: The bundle 
 org.apache.commons.compress_1.4.1 [20] could not be resolved. Reason: 
 Missing Constraint: Import-Package: org.tukaani.xz; version=0.0.0
 at 
 org.eclipse.osgi.framework.internal.core.AbstractBundle.getResolverError(AbstractBundle.java:1327)
 at 
 org.eclipse.osgi.framework.internal.core.AbstractBundle.getResolutionFailureException(AbstractBundle.java:1311)
 at 
 org.eclipse.osgi.framework.internal.core.BundleHost.startWorker(BundleHost.java:323)
 at 
 org.eclipse.osgi.framework.internal.core.AbstractBundle.resume(AbstractBundle.java:389)
 at 
 org.eclipse.osgi.framework.internal.core.Framework.resumeBundle(Framework.java:1131)
 at 
 org.eclipse.osgi.framework.internal.core.StartLevelManager.resumeBundles(StartLevelManager.java:559)
 at 
 org.eclipse.osgi.framework.internal.core.StartLevelManager.resumeBundles(StartLevelManager.java:544)
 at 
 org.eclipse.osgi.framework.internal.core.StartLevelManager.incFWSL(StartLevelManager.java:457)
 at 
 org.eclipse.osgi.framework.internal.core.StartLevelManager.doSetStartLevel(StartLevelManager.java:243)
 at 
 org.eclipse.osgi.framework.internal.core.StartLevelManager.dispatchEvent(StartLevelManager.java:438)
 at 
 org.eclipse.osgi.framework.internal.core.StartLevelManager.dispatchEvent(StartLevelManager.java:1)
 at 
 org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230)
 at 
 org.eclipse.osgi.framework.eventmgr.EventManager$EventThread.run(EventManager.java:340)
 Included in my project by Maven with the dependency:
 <dependency>
   <groupId>org.apache.commons</groupId>
   <artifactId>commons-compress</artifactId>
   <version>1.4.1</version>
 </dependency>

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (COMPRESS-197) Tar file for Android backup cannot be read

2012-08-05 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/COMPRESS-197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting resolved COMPRESS-197.


   Resolution: Fixed
Fix Version/s: 1.5
 Assignee: Jukka Zitting

The attached file contains a field with three trailing NUL bytes, while Commons 
Compress only accepted one or two. In revision 1369655 I relaxed that constraint 
to allow any number of trailing NULs or spaces.
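
For illustration, a minimal sketch of that lenient parsing rule (not the actual 
TarUtils.parseOctal code; the helper name is hypothetical):

{code}
// Hypothetical helper: parse an octal tar header field, ignoring any number
// of trailing NUL bytes or spaces after the digits.
static long parseOctalLenient(byte[] buffer, int offset, int length) {
    int end = offset + length;
    while (end > offset && (buffer[end - 1] == 0 || buffer[end - 1] == ' ')) {
        end--;                       // strip trailing NULs and spaces
    }
    long result = 0;
    for (int i = offset; i < end; i++) {
        byte b = buffer[i];
        if (b < '0' || b > '7') {
            throw new IllegalArgumentException(
                    "Invalid byte " + b + " at offset " + (i - offset));
        }
        result = (result << 3) + (b - '0');
    }
    return result;
}
{code}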

 Tar file for Android backup cannot be read
 --

 Key: COMPRESS-197
 URL: https://issues.apache.org/jira/browse/COMPRESS-197
 Project: Commons Compress
  Issue Type: Bug
  Components: Archivers
Affects Versions: 1.4.1
Reporter: Trejkaz
Assignee: Jukka Zitting
  Labels: tar
 Fix For: 1.5

 Attachments: android-backup.tar


 Attached tar file was generated by some kind of backup tool on Android. 
 Normal tar utilities seem to handle it fine, but Commons Compress doesn't.
 {noformat}
 java.lang.IllegalArgumentException: Invalid byte 0 at offset 5 in 
 '01750{NUL}{NUL}{NUL}' len=8
 at 
 org.apache.commons.compress.archivers.tar.TarUtils.parseOctal(TarUtils.java:99)
 at 
 org.apache.commons.compress.archivers.tar.TarArchiveEntry.parseTarHeader(TarArchiveEntry.java:788)
 at 
 org.apache.commons.compress.archivers.tar.TarArchiveEntry.init(TarArchiveEntry.java:308)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (COMPRESS-174) BZip2CompressorInputStream doesn't handle being given a wrong-format compressed file

2012-08-05 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/COMPRESS-174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting resolved COMPRESS-174.


Resolution: Not A Problem

Resolving as Not A Problem based on the existing CompressorStreamFactory 
functionality mentioned above by Sebb.
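
For reference, a minimal sketch of that detection approach, using the 
auto-detecting createCompressorInputStream(InputStream) factory method (the 
stream passed in must support mark/reset):

{code}
InputStream in = new BufferedInputStream(new FileInputStream("input.bz2"));
try {
    // Detects the format from the stream signature and throws a
    // CompressorException for unrecognized input, instead of failing later
    // with an ArrayIndexOutOfBoundsException deep inside the bzip2 decoder.
    CompressorInputStream cin =
            new CompressorStreamFactory().createCompressorInputStream(in);
    // ... read the decompressed data from cin ...
} catch (CompressorException e) {
    // not a supported compression format
}
{code}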

 BZip2CompressorInputStream doesn't handle being given a wrong-format 
 compressed file
 

 Key: COMPRESS-174
 URL: https://issues.apache.org/jira/browse/COMPRESS-174
 Project: Commons Compress
  Issue Type: Bug
  Components: Compressors
Affects Versions: 1.3
 Environment: Linux and Windows
Reporter: Andrew Pavlin
Priority: Minor

 When reading a file through BZip2CompressorInputStream, and the user selects 
 a file of the wrong type (such as ZIP or GZIP), the read blows up with a 
 strange ArrayIndexOutOfBoundsException, instead of reporting immediately that 
 the input data is of the wrong format.
 The Bzip2Compressor should be able to identify whether a stream is of BZip2 
 format or not, and immediately reject it with a meaningful exception 
 (example: ProtocolException: not a BZip2 compressed file).
 Alternatively, are there functions in commons-compress that can identify the 
 compression type of an InputStream by inspection?
 Example stack trace when using a ZIP input file:
 Exception in thread "OSM Decompressor" 
 java.lang.ArrayIndexOutOfBoundsException: 90 
 at 
 org.apache.commons.compress.compressors.bzip2.BZip2CompressorInputStream.getAndMoveToFrontDecode(BZip2CompressorInputStream.java:688)
  
 at 
 org.apache.commons.compress.compressors.bzip2.BZip2CompressorInputStream.initBlock(BZip2CompressorInputStream.java:322)
  
 at 
 org.apache.commons.compress.compressors.bzip2.BZip2CompressorInputStream.setupNoRandPartA(BZip2CompressorInputStream.java:880)
  
 at 
 org.apache.commons.compress.compressors.bzip2.BZip2CompressorInputStream.setupNoRandPartB(BZip2CompressorInputStream.java:936)
  
 at 
 org.apache.commons.compress.compressors.bzip2.BZip2CompressorInputStream.read0(BZip2CompressorInputStream.java:228)
  
 at 
 org.apache.commons.compress.compressors.bzip2.BZip2CompressorInputStream.read(BZip2CompressorInputStream.java:180)
  
 at java.io.InputStream.read(InputStream.java:82) 
 at 
 org.ka2ddo.yaac.osm.OsmXmlSegmenter$1.run(OsmXmlSegmenter.java:129) 
 at java.lang.Thread.run(Thread.java:680) 
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (COMPRESS-92) ZipFile.getEntries() should be generified.

2012-08-05 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/COMPRESS-92?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting resolved COMPRESS-92.
---

   Resolution: Duplicate
Fix Version/s: (was: 2.0)

This was already done in 1.3, I think.
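
For reference, a minimal sketch of the generified usage, assuming the 1.3 
signature returns Enumeration<ZipArchiveEntry> (or a wildcard thereof):

{code}
ZipFile zipFile = new ZipFile("archive.zip");
Enumeration<ZipArchiveEntry> entries = zipFile.getEntries();
while (entries.hasMoreElements()) {
    ZipArchiveEntry entry = entries.nextElement();  // no cast needed
    System.out.println(entry.getName());
}
zipFile.close();
{code}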

 ZipFile.getEntries() should be generified.
 --

 Key: COMPRESS-92
 URL: https://issues.apache.org/jira/browse/COMPRESS-92
 Project: Commons Compress
  Issue Type: Improvement
  Components: Archivers
Affects Versions: 1.0
Reporter: Sean Cote
Priority: Minor

 Right now, ZipFile.getEntries() simply returns Enumeration, but it should 
 return Enumeration<? extends ZipArchiveEntry> so that callers don't have to 
 cast the results, much like java.util.zip.ZipFile.entries().

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (COMPRESS-191) Too relaxed tar detection in ArchiveStreamFactory

2012-06-30 Thread Jukka Zitting (JIRA)
Jukka Zitting created COMPRESS-191:
--

 Summary: Too relaxed tar detection in ArchiveStreamFactory
 Key: COMPRESS-191
 URL: https://issues.apache.org/jira/browse/COMPRESS-191
 Project: Commons Compress
  Issue Type: Improvement
  Components: Archivers
Affects Versions: 1.4.1, 1.4, 1.3, 1.2
Reporter: Jukka Zitting
Priority: Minor


The relaxed tar detection logic added in COMPRESS-177 unfortunately matches
also some non-tar files like a [test AIFF 
file|https://svn.apache.org/repos/asf/tika/trunk/tika-parsers/src/test/resources/test-documents/testAIFF.aif]
 that Apache Tika uses. It would be good to improve the detection heuristics to 
still match files like the one in COMPRESS-177 but avoid false positives like 
the AIFF file in Tika.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (COMPRESS-191) Too relaxed tar detection in ArchiveStreamFactory

2012-06-30 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/COMPRESS-191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting updated COMPRESS-191:
---

Attachment: 0001-COMPRESS-191-Too-relaxed-tar-detection-in-ArchiveStr.patch

The attached patch adds heuristics for verifying the tar header checksum, and 
uses that mechanism to better avoid false positives in ArchiveStreamFactory.
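
For illustration, a rough sketch of such a checksum check (not the patch 
itself): the stored octal checksum in header bytes 148-155 should match the 
byte sum of the 512-byte header with the checksum field counted as spaces.

{code}
static boolean looksLikeTarHeader(byte[] header) {
    // Parse the stored octal checksum, skipping padding NULs and spaces.
    long stored = 0;
    for (int i = 148; i < 156; i++) {
        byte b = header[i];
        if (b >= '0' && b <= '7') {
            stored = (stored << 3) + (b - '0');
        }
    }
    // Recompute the unsigned byte sum, treating the checksum field as spaces.
    long sum = 0;
    for (int i = 0; i < 512; i++) {
        byte b = (i >= 148 && i < 156) ? (byte) ' ' : header[i];
        sum += b & 0xff;
    }
    // A real implementation might also accept the signed-byte sum that some
    // historic tar implementations produce.
    return sum == stored;
}
{code}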

 Too relaxed tar detection in ArchiveStreamFactory
 -

 Key: COMPRESS-191
 URL: https://issues.apache.org/jira/browse/COMPRESS-191
 Project: Commons Compress
  Issue Type: Improvement
  Components: Archivers
Affects Versions: 1.2, 1.3, 1.4, 1.4.1
Reporter: Jukka Zitting
Priority: Minor
  Labels: tar
 Attachments: 
 0001-COMPRESS-191-Too-relaxed-tar-detection-in-ArchiveStr.patch


 The relaxed tar detection logic added in COMPRESS-177 unfortunately matches
 also some non-tar files like a [test AIFF 
 file|https://svn.apache.org/repos/asf/tika/trunk/tika-parsers/src/test/resources/test-documents/testAIFF.aif]
  that Apache Tika uses. It would be good to improve the detection heuristics 
 to still match files like the one in COMPRESS-177 but avoid false positives 
 like the AIFF file in Tika.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (COMPRESS-191) Too relaxed tar detection in ArchiveStreamFactory

2012-06-30 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/COMPRESS-191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting updated COMPRESS-191:
---

Description: The relaxed tar detection logic added in COMPRESS-177 
unfortunately matches also some non-tar files like a [test AIFF 
file|https://svn.apache.org/repos/asf/tika/trunk/tika-parsers/src/test/resources/test-documents/testAIFF.aif]
 that Apache Tika uses. It would be good to improve the detection heuristics to 
still match files like the one in COMPRESS-177 but avoid false positives like 
the AIFF file in Tika.  (was: The relaxed tar detection logic added in 
COMPRESS-177 unfortunately matches
also some non-tar files like a [test AIFF 
file|https://svn.apache.org/repos/asf/tika/trunk/tika-parsers/src/test/resources/test-documents/testAIFF.aif]
 that Apache Tika uses. It would be good to improve the detection heuristics to 
still match files like the one in COMPRESS-177 but avoid false positives like 
the AIFF file in Tika.)
 Issue Type: Bug  (was: Improvement)

 Too relaxed tar detection in ArchiveStreamFactory
 -

 Key: COMPRESS-191
 URL: https://issues.apache.org/jira/browse/COMPRESS-191
 Project: Commons Compress
  Issue Type: Bug
  Components: Archivers
Affects Versions: 1.2, 1.3, 1.4, 1.4.1
Reporter: Jukka Zitting
Priority: Minor
  Labels: tar
 Attachments: 
 0001-COMPRESS-191-Too-relaxed-tar-detection-in-ArchiveStr.patch


 The relaxed tar detection logic added in COMPRESS-177 unfortunately matches 
 also some non-tar files like a [test AIFF 
 file|https://svn.apache.org/repos/asf/tika/trunk/tika-parsers/src/test/resources/test-documents/testAIFF.aif]
  that Apache Tika uses. It would be good to improve the detection heuristics 
 to still match files like the one in COMPRESS-177 but avoid false positives 
 like the AIFF file in Tika.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (COMPRESS-191) Too relaxed tar detection in ArchiveStreamFactory

2012-06-30 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/COMPRESS-191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting updated COMPRESS-191:
---

Description: The relaxed tar detection logic added in COMPRESS-117 
unfortunately matches also some non-tar files like a [test AIFF 
file|https://svn.apache.org/repos/asf/tika/trunk/tika-parsers/src/test/resources/test-documents/testAIFF.aif]
 that Apache Tika uses. It would be good to improve the detection heuristics to 
still match files like the one in COMPRESS-117 but avoid false positives like 
the AIFF file in Tika.  (was: The relaxed tar detection logic added in 
COMPRESS-177 unfortunately matches also some non-tar files like a [test AIFF 
file|https://svn.apache.org/repos/asf/tika/trunk/tika-parsers/src/test/resources/test-documents/testAIFF.aif]
 that Apache Tika uses. It would be good to improve the detection heuristics to 
still match files like the one in COMPRESS-177 but avoid false positives like 
the AIFF file in Tika.)

 Too relaxed tar detection in ArchiveStreamFactory
 -

 Key: COMPRESS-191
 URL: https://issues.apache.org/jira/browse/COMPRESS-191
 Project: Commons Compress
  Issue Type: Bug
  Components: Archivers
Affects Versions: 1.2, 1.3, 1.4, 1.4.1
Reporter: Jukka Zitting
Priority: Minor
  Labels: tar
 Attachments: 
 0001-COMPRESS-191-Too-relaxed-tar-detection-in-ArchiveStr.patch


 The relaxed tar detection logic added in COMPRESS-117 unfortunately matches 
 also some non-tar files like a [test AIFF 
 file|https://svn.apache.org/repos/asf/tika/trunk/tika-parsers/src/test/resources/test-documents/testAIFF.aif]
  that Apache Tika uses. It would be good to improve the detection heuristics 
 to still match files like the one in COMPRESS-117 but avoid false positives 
 like the AIFF file in Tika.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (COMPRESS-192) Allow setting of the zip encoding in ArchiveStreamFactory

2012-06-30 Thread Jukka Zitting (JIRA)
Jukka Zitting created COMPRESS-192:
--

 Summary: Allow setting of the zip encoding in ArchiveStreamFactory
 Key: COMPRESS-192
 URL: https://issues.apache.org/jira/browse/COMPRESS-192
 Project: Commons Compress
  Issue Type: Improvement
  Components: Archivers
Reporter: Jukka Zitting
Priority: Minor


When using the ArchiveStreamFactory it's currently not possible to control the 
encoding used by zip archive streams. Having that feature available in 
ArchiveStreamFactory would be useful for TIKA-936.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (COMPRESS-192) Allow setting of the zip encoding in ArchiveStreamFactory

2012-06-30 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/COMPRESS-192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting updated COMPRESS-192:
---

Attachment: 0001-COMPRESS-192-Allow-setting-of-the-zip-encoding-in-Ar.patch

The attached patch adds get/setZipEncoding methods to ArchiveStreamFactory and 
uses them to control the encoding used in zip streams.
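
A minimal usage sketch, assuming the setZipEncoding method name from the patch 
(not a released API):

{code}
ArchiveStreamFactory factory = new ArchiveStreamFactory();
factory.setZipEncoding("Cp437");  // encoding for zip entry names (assumed setter)

InputStream in = new BufferedInputStream(new FileInputStream("archive.zip"));
ArchiveInputStream archive = factory.createArchiveInputStream(in);
ArchiveEntry entry;
while ((entry = archive.getNextEntry()) != null) {
    System.out.println(entry.getName());  // decoded with the configured encoding
}
archive.close();
{code}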

 Allow setting of the zip encoding in ArchiveStreamFactory
 -

 Key: COMPRESS-192
 URL: https://issues.apache.org/jira/browse/COMPRESS-192
 Project: Commons Compress
  Issue Type: Improvement
  Components: Archivers
Reporter: Jukka Zitting
Priority: Minor
  Labels: encoding, zip
 Attachments: 
 0001-COMPRESS-192-Allow-setting-of-the-zip-encoding-in-Ar.patch


 When using the ArchiveStreamFactory it's currently not possible to control 
 the encoding used by zip archive streams. Having that feature available in 
 ArchiveStreamFactory would be useful for TIKA-936.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (COMPRESS-190) Git settings for commons-compress

2012-06-29 Thread Jukka Zitting (JIRA)
Jukka Zitting created COMPRESS-190:
--

 Summary: Git settings for commons-compress
 Key: COMPRESS-190
 URL: https://issues.apache.org/jira/browse/COMPRESS-190
 Project: Commons Compress
  Issue Type: Improvement
Reporter: Jukka Zitting
Priority: Trivial


For people using the commons-compress.git mirror it would be useful to have 
basic {{.gitignore}} and {{.gitattributes}} files present at the root of the 
Commons Compress checkout.

The following settings work fine for me on a Windows/Eclipse environment:

{code:none}
$ cat .gitignore
target
.project
.classpath
.settings
$ cat .gitattributes
src/test/resources/longfile_gnu.ar  eol=lf
src/test/resources/longfile_bsd.ar  eol=lf
src/test/resources/longpath/minotaur.ar eol=lf
{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (COMPRESS-190) Git settings for commons-compress

2012-06-29 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/COMPRESS-190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting updated COMPRESS-190:
---

Attachment: 0001-COMPRESS-190-Git-settings-for-commons-compress.patch

Patch attached.

 Git settings for commons-compress
 -

 Key: COMPRESS-190
 URL: https://issues.apache.org/jira/browse/COMPRESS-190
 Project: Commons Compress
  Issue Type: Improvement
Reporter: Jukka Zitting
Priority: Trivial
  Labels: git
 Attachments: 0001-COMPRESS-190-Git-settings-for-commons-compress.patch


 For people using the commons-compress.git mirror it would be useful to have 
 basic {{.gitignore}} and {{.gitattributes}} files present at the root of the 
 Commons Compress checkout.
 The following settings work fine for me on a Windows/Eclipse environment:
 {code:none}
 $ cat .gitignore
 target
 .project
 .classpath
 .settings
 $ cat .gitattributes
 src/test/resources/longfile_gnu.ar  eol=lf
 src/test/resources/longfile_bsd.ar  eol=lf
 src/test/resources/longpath/minotaur.ar eol=lf
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Updated: (COMPRESS-72) Move acknowledgements from NOTICE to README

2010-05-05 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/COMPRESS-72?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting updated COMPRESS-72:
--

Fix Version/s: (was: 1.1)

Not necessary to resolve this in time for 1.1.

 Move acknowledgements from NOTICE to README
 ---

 Key: COMPRESS-72
 URL: https://issues.apache.org/jira/browse/COMPRESS-72
 Project: Commons Compress
  Issue Type: Improvement
Affects Versions: 1.0
Reporter: Jukka Zitting
Priority: Minor

 The NOTICE.txt file in commons-compress contains the following entries:
 {noformat}
 Original BZip2 classes contributed by Keiron Liddle
 kei...@aftexsw.com, Aftex Software to the Apache Ant project
 Original Tar classes from contributors of the Apache Ant project
 Original Zip classes from contributors of the Apache Ant project
 Original CPIO classes contributed by Markus Kuss and the jRPM project
 (jrpm.sourceforge.net)
 {noformat}
 It's good that we acknowledge contributions, but having those entries in the 
 NOTICE file is not appropriate unless the licensing of the original 
 contributions explicitly required such attribution notices.
 I suggest that we move these entries to a README.txt file.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (COMPRESS-18) JarArchiveEntry does not populate manifestAttributes or certificates

2010-05-05 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/COMPRESS-18?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12864193#action_12864193
 ] 

Jukka Zitting commented on COMPRESS-18:
---

Is there anything else we need to do for this? I'd be happy to help if needed.

 JarArchiveEntry does not populate manifestAttributes or certificates
 

 Key: COMPRESS-18
 URL: https://issues.apache.org/jira/browse/COMPRESS-18
 Project: Commons Compress
  Issue Type: Bug
Reporter: Sebb
Priority: Minor
 Fix For: 1.1

 Attachments: compress-18.patch


 JarArchiveEntry does not populate manifestAttributes or certificates - they 
 are both always null.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (COMPRESS-18) JarArchiveEntry does not populate manifestAttributes or certificates

2010-04-14 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/COMPRESS-18?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12856846#action_12856846
 ] 

Jukka Zitting commented on COMPRESS-18:
---

What's the benefit of the proposed change? It doesn't seem to address the topic 
of this issue.

 JarArchiveEntry does not populate manifestAttributes or certificates
 

 Key: COMPRESS-18
 URL: https://issues.apache.org/jira/browse/COMPRESS-18
 Project: Commons Compress
  Issue Type: Bug
Reporter: Sebb
Priority: Minor
 Fix For: 1.1

 Attachments: compress-18.patch


 JarArchiveEntry does not populate manifestAttributes or certificates - they 
 are both always null.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
https://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Resolved: (IO-242) Pre- and post-processing support for ProxyReader/Writer

2010-04-14 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/IO-242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting resolved IO-242.
--

Resolution: Fixed

Patch committed in revision 933964.

 Pre- and post-processing support for ProxyReader/Writer
 ---

 Key: IO-242
 URL: https://issues.apache.org/jira/browse/IO-242
 Project: Commons IO
  Issue Type: New Feature
  Components: Streams/Writers
Reporter: Jukka Zitting
Assignee: Jukka Zitting
Priority: Minor
 Fix For: 2.0

 Attachments: IO-242.patch


 In IO-211 we added protected before/after methods for all read and write 
 operations in ProxyInputStream and ProxyOutputStream. I now have a use case 
 where I need similar functionality also for a Writer, so I've implemented the 
 same feature also for ProxyReader and ProxyWriter. I'll attach the patch for 
 review before committing it.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
https://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (IO-242) Pre- and post-processing support for ProxyReader/Writer

2010-04-14 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/IO-242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12856975#action_12856975
 ] 

Jukka Zitting commented on IO-242:
--

Good point. I added null protection in revision 934035.

 Pre- and post-processing support for ProxyReader/Writer
 ---

 Key: IO-242
 URL: https://issues.apache.org/jira/browse/IO-242
 Project: Commons IO
  Issue Type: New Feature
  Components: Streams/Writers
Reporter: Jukka Zitting
Assignee: Jukka Zitting
Priority: Minor
 Fix For: 2.0

 Attachments: IO-242.patch


 In IO-211 we added protected before/after methods for all read and write 
 operations in ProxyInputStream and ProxyOutputStream. I now have a use case 
 where I need similar functionality also for a Writer, so I've implemented the 
 same feature also for ProxyReader and ProxyWriter. I'll attach the patch for 
 review before committing it.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
https://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (IO-211) Pre- and post-processing support for proxied streams

2010-04-14 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/IO-211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12856981#action_12856981
 ] 

Jukka Zitting commented on IO-211:
--

As Niall noted in IO-242, there's a corner case where the added code can throw 
a NullPointerException. I fixed this in revision 934041.

 Pre- and post-processing support for proxied streams
 

 Key: IO-211
 URL: https://issues.apache.org/jira/browse/IO-211
 Project: Commons IO
  Issue Type: New Feature
  Components: Streams/Writers
Reporter: Jukka Zitting
Assignee: Jukka Zitting
Priority: Minor
 Fix For: 2.0

 Attachments: IO-211.patch


 In many cases a stream decorator needs to add custom pre- or post-processing 
 functionality to all the read and write methods in decorated input and output 
 streams. For example the CountingInputStream needs to override all three 
 read() methods with similar code.
 It would be nice if the proxy stream classes provided simple hooks for adding 
 such functionality.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
https://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (COMPRESS-18) JarArchiveEntry does not populate manifestAttributes or certificates

2010-04-14 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/COMPRESS-18?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12856985#action_12856985
 ] 

Jukka Zitting commented on COMPRESS-18:
---

Ah, I see. Thanks!

 JarArchiveEntry does not populate manifestAttributes or certificates
 

 Key: COMPRESS-18
 URL: https://issues.apache.org/jira/browse/COMPRESS-18
 Project: Commons Compress
  Issue Type: Bug
Reporter: Sebb
Priority: Minor
 Fix For: 1.1

 Attachments: compress-18.patch


 JarArchiveEntry does not populate manifestAttributes or certificates - they 
 are both always null.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
https://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Created: (IO-242) Pre- and post-processing support for ProxyReader/Writer

2010-03-19 Thread Jukka Zitting (JIRA)
Pre- and post-processing support for ProxyReader/Writer
---

 Key: IO-242
 URL: https://issues.apache.org/jira/browse/IO-242
 Project: Commons IO
  Issue Type: New Feature
  Components: Streams/Writers
Reporter: Jukka Zitting
Assignee: Jukka Zitting
Priority: Minor
 Fix For: 2.0


In IO-211 we added protected before/after methods for all read and write 
operations in ProxyInputStream and ProxyOutputStream. I now have a use case 
where I need similar functionality also for a Writer, so I've implemented the 
same feature also for ProxyReader and ProxyWriter. I'll attach the patch for 
review before committing it.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (IO-242) Pre- and post-processing support for ProxyReader/Writer

2010-03-19 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/IO-242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting updated IO-242:
-

Attachment: IO-242.patch

Proposed patch.

 Pre- and post-processing support for ProxyReader/Writer
 ---

 Key: IO-242
 URL: https://issues.apache.org/jira/browse/IO-242
 Project: Commons IO
  Issue Type: New Feature
  Components: Streams/Writers
Reporter: Jukka Zitting
Assignee: Jukka Zitting
Priority: Minor
 Fix For: 2.0

 Attachments: IO-242.patch


 In IO-211 we added protected before/after methods for all read and write 
 operations in ProxyInputStream and ProxyOutputStream. I now have a use case 
 where I need similar functionality also for a Writer, so I've implemented the 
 same feature also for ProxyReader and ProxyWriter. I'll attach the patch for 
 review before committing it.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Reopened: (IO-203) Add skipFully() method for InputStreams

2010-03-08 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/IO-203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting reopened IO-203:
--


I don't think the skipFully() method works as intended the way it's currently 
implemented. As said in the InputStream.skip() javadocs: "The skip method may, 
for a variety of reasons, end up skipping over some smaller number of bytes, 
possibly 0." Thus the skipFully() method should always fall back to read() when 
the skip() method returns something less than the number of bytes requested.

As an added complexity, note that a FileInputStream allows skipping any number 
of bytes past the end of the file! If we want to detect that case, the 
skipFully() method should first skip() n-1 bytes and then try to read() all the 
remaining bytes.
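
A minimal sketch of the fallback behaviour described in the first paragraph 
(the FileInputStream caveat would additionally require the skip(n-1)-then-read() 
variant):

{code}
public static void skipFully(InputStream input, long toSkip) throws IOException {
    long remaining = toSkip;
    while (remaining > 0) {
        long skipped = input.skip(remaining);
        if (skipped > 0) {
            remaining -= skipped;
        } else if (input.read() == -1) {
            // skip() made no progress and read() hit EOF: not enough bytes left
            throw new EOFException("EOF reached before skipping " + toSkip + " bytes");
        } else {
            remaining--;  // consumed one byte via read()
        }
    }
}
{code}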

 Add skipFully() method for InputStreams
 ---

 Key: IO-203
 URL: https://issues.apache.org/jira/browse/IO-203
 Project: Commons IO
  Issue Type: New Feature
  Components: Utilities
Reporter: Sebb
 Fix For: 2.0


 The skip() method is not guaranteed to skip the requested number of bytes, 
 even if there is more data available. This is particularly true of Buffered 
 input streams.
 It would be useful to have a skip() method that keeps skipping until the 
 required number of bytes have been read, or EOF was reached, in which case it 
 should throw an Exception.
 [I'll add a patch later.]

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (COMPRESS-93) Support for alternative ZIP compression methods

2010-02-19 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/COMPRESS-93?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12835708#action_12835708
 ] 

Jukka Zitting commented on COMPRESS-93:
---

I'll only update Tika once we have a new release of Commons Compress, so it's 
OK to adjust the implementation.

The canRead() method is also good (I like how it also covers encryption), and I 
can easily adjust the TIKA-346 patch to use it instead. I guess it would be 
good to remove the isSupportedCompressionMethod() method now to avoid polluting 
the public API with multiple methods for pretty much the same purpose.

 Support for alternative ZIP compression methods
 ---

 Key: COMPRESS-93
 URL: https://issues.apache.org/jira/browse/COMPRESS-93
 Project: Commons Compress
  Issue Type: New Feature
Reporter: Jukka Zitting

 As reported in TIKA-346, a ZIP file that uses a compression method other than 
 STORED (0) or DEFLATED (8) will cause an IllegalArgumentException (invalid 
 compression method) to be thrown.
 It would be great if commons-compress supported alternative compression 
 methods or at least degraded more gracefully when encountering them.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (COMPRESS-89) Better support for encrypted ZIP files

2010-02-19 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/COMPRESS-89?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting updated COMPRESS-89:
--

Attachment: ArchiveInputStream-canRead.patch

How about making this more general and moving the canRead method up to the 
ArchiveInputStream base class? See the attached 
ArchiveInputStream-canRead.patch for an example. This would allow Tika to avoid 
casting the ArchiveInputStream instances it uses down to ZipArchiveInputStream, 
and would potentially enable other archive formats to expose similar 
information as we now do with zip.
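
A sketch of how a client like Tika could then consume any archive format, using 
the canRead method proposed in the attached patch (exact signature assumed from 
this comment):

{code}
ArchiveInputStream archive = ...;  // zip, tar, ... - no cast to ZipArchiveInputStream
ArchiveEntry entry;
while ((entry = archive.getNextEntry()) != null) {
    if (!archive.canRead(entry)) {
        continue;  // unsupported compression method or encrypted entry
    }
    // ... process the entry data ...
}
{code}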

 Better support for encrypted ZIP files
 --

 Key: COMPRESS-89
 URL: https://issues.apache.org/jira/browse/COMPRESS-89
 Project: Commons Compress
  Issue Type: Improvement
Affects Versions: 1.0, 1.1
Reporter: Antoni Mylka
Assignee: Stefan Bodewig
 Fix For: 1.1

 Attachments: apache-maven-2.2.1-encrypted-passhello.zip, 
 ArchiveInputStream-canRead.patch, commons-compress-encrypted.patch


 Currently when the ZipArchiveInputStream or ZipFile encounters an encrypted 
 zip it bails out with cryptic exceptions like: 'invalid block type'. I would 
 like to have two things:
 1. an 'encrypted' flag in the ZipArchiveEntry class. It would be taken from 
 the first bit of the 'general purpose flag'
 2. more descriptive error messages, both in ZipFile and ZipArchiveInputStream
 It might be useful in case someone wants to implement proper support for 
 encrypted zips, with methods to supply passwords/encryption keys and proper 
 encryption/decryption algorithms.
 For the time being I just need to know if a file is encrypted or not. 
 I will post a patch with a proposal of a solution in near future.
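
For illustration, the requested flag corresponds to bit 0 of the two-byte 
general purpose bit flag at offset 6 of the local file header; a rough sketch 
of the check (the raw header buffer access is hypothetical):

{code}
// header = the raw local file header bytes of the entry
int gpFlag = (header[6] & 0xff) | ((header[7] & 0xff) << 8);
boolean encrypted = (gpFlag & 0x0001) != 0;  // bit 0: entry is encrypted
{code}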

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (COMPRESS-93) Support for alternative ZIP compression methods

2010-02-19 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/COMPRESS-93?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12835726#action_12835726
 ] 

Jukka Zitting commented on COMPRESS-93:
---

Brilliant, thanks!

 Support for alternative ZIP compression methods
 ---

 Key: COMPRESS-93
 URL: https://issues.apache.org/jira/browse/COMPRESS-93
 Project: Commons Compress
  Issue Type: New Feature
Reporter: Jukka Zitting

 As reported in TIKA-346, a ZIP file that uses a compression method other than 
 STORED (0) or DEFLATED (8) will cause an IllegalArgumentException (invalid 
 compression method) to be thrown.
 It would be great if commons-compress supported alternative compression 
 methods or at least degraded more gracefully when encountering them.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (COMPRESS-93) Support for alternative ZIP compression methods

2009-12-13 Thread Jukka Zitting (JIRA)
Support for alternative ZIP compression methods
---

 Key: COMPRESS-93
 URL: https://issues.apache.org/jira/browse/COMPRESS-93
 Project: Commons Compress
  Issue Type: New Feature
Reporter: Jukka Zitting


As reported in TIKA-346, a ZIP file that uses a compression method other than 
STORED (0) or DEFLATED (8) will cause an IllegalArgumentException (invalid 
compression method) to be thrown.

It would be great if commons-compress supported alternative compression methods 
or at least degraded more gracefully when encountering them.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (COMPRESS-93) Support for alternative ZIP compression methods

2009-12-13 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/COMPRESS-93?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12789924#action_12789924
 ] 

Jukka Zitting commented on COMPRESS-93:
---

In revision 890088 I made the code degrade more gracefully by only throwing an 
IOException with a more descriptive message when such an unsupported zip entry 
is actually being read or written. I also added a 
ZipArchiveEntry.isSupportedCompressionMethod() method that downstream clients 
can use to explicitly skip such entries and thus avoid the exception.
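
A sketch of the skip-unsupported-entries pattern described above, using the 
isSupportedCompressionMethod() method named in this comment (the later 
COMPRESS-89/COMPRESS-93 discussion favours a more general canRead() check):

{code}
ZipArchiveInputStream zip = new ZipArchiveInputStream(input);
ZipArchiveEntry entry;
while ((entry = zip.getNextZipEntry()) != null) {
    if (!entry.isSupportedCompressionMethod()) {
        continue;  // e.g. imploded entries: skip instead of hitting the IOException
    }
    // ... read the entry data from zip ...
}
{code}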

 Support for alternative ZIP compression methods
 ---

 Key: COMPRESS-93
 URL: https://issues.apache.org/jira/browse/COMPRESS-93
 Project: Commons Compress
  Issue Type: New Feature
Reporter: Jukka Zitting

 As reported in TIKA-346, a ZIP file that uses a compression method other than 
 STORED (0) or DEFLATED (8) will cause an IllegalArgumentException (invalid 
 compression method) to be thrown.
 It would be great if commons-compress supported alternative compression 
 methods or at least degraded more gracefully when encountering them.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (IO-211) Pre- and post-processing support for proxied streams

2009-08-17 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/IO-211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting resolved IO-211.
--

   Resolution: Fixed
Fix Version/s: 2.0
 Assignee: Jukka Zitting

Committed the proposed patch in revision 805151.

 Pre- and post-processing support for proxied streams
 

 Key: IO-211
 URL: https://issues.apache.org/jira/browse/IO-211
 Project: Commons IO
  Issue Type: New Feature
  Components: Streams/Writers
Reporter: Jukka Zitting
Assignee: Jukka Zitting
Priority: Minor
 Fix For: 2.0

 Attachments: IO-211.patch


 In many cases a stream decorator needs to add custom pre- or post-processing 
 functionality to all the read and write methods in decorated input and output 
 streams. For example the CountingInputStream needs to override all three 
 read() methods with similar code.
 It would be nice if the proxy stream classes provided simple hooks for adding 
 such functionality.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (IO-212) Incorrect ProxyInputStream.skip() javadoc

2009-08-17 Thread Jukka Zitting (JIRA)
Incorrect ProxyInputStream.skip() javadoc
-

 Key: IO-212
 URL: https://issues.apache.org/jira/browse/IO-212
 Project: Commons IO
  Issue Type: Bug
  Components: Streams/Writers
Affects Versions: 1.4
Reporter: Jukka Zitting
Assignee: Jukka Zitting
Priority: Trivial
 Fix For: 2.0


The ProxyInputStream.skip() method documents the return value as "the number of 
bytes skipped or -1 if the end of stream", while the underlying 
InputStream.skip() method returns the actual number of bytes skipped, i.e. 
never -1.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (IO-212) Incorrect ProxyInputStream.skip() javadoc

2009-08-17 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/IO-212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting resolved IO-212.
--

Resolution: Fixed

Fixed in revision 805156.

 Incorrect ProxyInputStream.skip() javadoc
 -

 Key: IO-212
 URL: https://issues.apache.org/jira/browse/IO-212
 Project: Commons IO
  Issue Type: Bug
  Components: Streams/Writers
Affects Versions: 1.4
Reporter: Jukka Zitting
Assignee: Jukka Zitting
Priority: Trivial
 Fix For: 2.0


 The ProxyInputStream.skip() method documents the return value as "the number 
 of bytes skipped or -1 if the end of stream", while the underlying 
 InputStream.skip() method returns the actual number of bytes skipped, i.e. 
 never -1.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (IO-180) LineIterator documentation

2009-08-17 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/IO-180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting updated IO-180:
-

Fix Version/s: (was: 1.5)
   2.0

 LineIterator documentation 
 ---

 Key: IO-180
 URL: https://issues.apache.org/jira/browse/IO-180
 Project: Commons IO
  Issue Type: Bug
Affects Versions: 1.4
Reporter: Michael Ernst
Priority: Minor
 Fix For: 2.0

   Original Estimate: 0.05h
  Remaining Estimate: 0.05h

 In the Javadoc for org.apache.commons.io.LineIterator (in Commons IO 1.4),
 this code snippet is incorrect: the last instance of "iterator" should be
 "it".
   LineIterator it = FileUtils.lineIterator(file, "UTF-8");
   try {
     while (it.hasNext()) {
       String line = it.nextLine();
       /// do something with line
     }
   } finally {
     LineIterator.closeQuietly(iterator);
   }

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (IO-193) Broken input and output streams

2009-08-17 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/IO-193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting updated IO-193:
-

Fix Version/s: (was: 1.5)
   2.0

 Broken input and output streams
 ---

 Key: IO-193
 URL: https://issues.apache.org/jira/browse/IO-193
 Project: Commons IO
  Issue Type: New Feature
  Components: Streams/Writers
Reporter: Jukka Zitting
Assignee: Jukka Zitting
Priority: Minor
 Fix For: 2.0

 Attachments: IO-193.patch


 When testing error handling in code that uses streams one needs a way to 
 simulate an IOException being thrown by a stream. Typically this means using 
 a custom stream class that throws the desired exception. To avoid having to 
 implement such custom classes over and over again for multiple projects, I'd 
 like to introduce such classes in Commons IO.
 The proposed BrokenInputStream and BrokenOutputStream always throw a given 
 IOException from all InputStream and OutputStream methods that declare such 
 exceptions.
 For example, the following fictional test code:
 {code}
 Result result = processStream(new InputStream() {
     public int read() throws IOException {
         throw new IOException("test");
     }
 });
 assertEquals(PROCESSING_FAILED, result);
 {code}
 could be replaced with:
 {code}
 Result result = processStream(new BrokenInputStream());
 assertEquals(PROCESSING_FAILED, result);
 {code}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (IO-211) Pre- and post-processing support for proxied streams

2009-08-15 Thread Jukka Zitting (JIRA)
Pre- and post-processing support for proxied streams


 Key: IO-211
 URL: https://issues.apache.org/jira/browse/IO-211
 Project: Commons IO
  Issue Type: New Feature
  Components: Streams/Writers
Reporter: Jukka Zitting
Priority: Minor


In many cases a stream decorator needs to add custom pre- or post-processing 
functionality to all the read and write methods in decorated input and output 
streams. For example the CountingInputStream needs to override all three read() 
methods with similar code.

It would be nice if the proxy stream classes provided simple hooks for adding 
such functionality.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (IO-211) Pre- and post-processing support for proxied streams

2009-08-15 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/IO-211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting updated IO-211:
-

Attachment: IO-211.patch

Attached a proposed patch that adds such protected hook methods and adapts some 
of the existing stream decorator classes to use them. The implementation is 
quite similar to that of the handleIOException() method we added earlier.

This change adds the overhead of two method calls to each read and write 
method, but I don't see that as a problem, as any performance-sensitive client 
will use reasonably sized buffers, so the extra methods are only called once 
every n bytes read or written.
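
For illustration, a decorator written against the proposed hooks might look 
like this (hook names assumed from the patch description):

{code}
public class ByteCountingInputStream extends ProxyInputStream {

    private long count = 0;

    public ByteCountingInputStream(InputStream in) {
        super(in);
    }

    @Override
    protected void afterRead(int n) {
        if (n > 0) {          // n is the number of bytes read, or -1 at EOF
            count += n;
        }
    }

    public long getCount() {
        return count;
    }
}
{code}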


 Pre- and post-processing support for proxied streams
 

 Key: IO-211
 URL: https://issues.apache.org/jira/browse/IO-211
 Project: Commons IO
  Issue Type: New Feature
  Components: Streams/Writers
Reporter: Jukka Zitting
Priority: Minor
 Attachments: IO-211.patch


 In many cases a stream decorator needs to add custom pre- or post-processing 
 functionality to all the read and write methods in decorated input and output 
 streams. For example the CountingInputStream needs to override all three 
 read() methods with similar code.
 It would be nice if the proxy stream classes provided simple hooks for adding 
 such functionality.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (IO-192) Tagged input and output streams

2009-08-11 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/IO-192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting resolved IO-192.
--

   Resolution: Fixed
Fix Version/s: (was: 1.5)
   2.0

As suggested and discussed, I added static check methods to the 
TaggedIOException class and made the tags Serializable in revision 803310.

Each tagged stream uses a random UUID as an exception tag that is guaranteed 
(in all practical cases) to remain unique to the originating stream even if the 
exception is serialized and passed to another JVM.

Resolving as fixed for Commons IO 2.0. I updated the @since tags in the source 
to refer to IO 2.0 as it seems like we're not planning a 1.5 release anymore.

 Tagged input and output streams
 ---

 Key: IO-192
 URL: https://issues.apache.org/jira/browse/IO-192
 Project: Commons IO
  Issue Type: New Feature
  Components: Streams/Writers
Reporter: Jukka Zitting
Assignee: Jukka Zitting
Priority: Minor
 Fix For: 2.0

 Attachments: IO-192-tagged-stream-changes.patch, IO-192.patch


 I'd like to introduce two new proxy streams, TaggedInputStream and 
 TaggedOutputStream, that tag all exceptions thrown by the proxied streams. 
 The goal is to make it easier to detect the source of an IOException when 
 you're dealing with multiple different streams. For example:
 {code}
 InputStream input = ...;
 OutputStream output = ...;
 TaggedOutputStream proxy = new TaggedOutputStream(output);
 try {
 IOUtils.copy(input, proxy);
 } catch (IOException e) {
 if (proxy.isTagged(e)) {
 // Could not write to the output stream
 // Perhaps we can handle that error somehow (retry, cancel?)
 e.getCause(); // gives the original exception from the proxied stream
 } else {
 // Could not read from the input stream, nothing we can do
 throw e;
 }
 }
 {code}
 I'm working on a patch to implement such a feature.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (IO-207) Race condition in forceMkdir

2009-08-03 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/IO-207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting updated IO-207:
-

  Component/s: Utilities
Fix Version/s: (was: 1.4)

 Race condition in forceMkdir
 

 Key: IO-207
 URL: https://issues.apache.org/jira/browse/IO-207
 Project: Commons IO
  Issue Type: Bug
  Components: Utilities
Affects Versions: 1.4
 Environment: Windows
Reporter: Luke Quinane
Priority: Minor
 Attachments: forceMkdir.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 If two processes or threads call forceMkdir() with the same directory there 
 is a chance that one will throw an IOException even though a directory was 
 correctly created (by the other process or thread). 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (IO-71) [io] PipedUtils

2009-08-03 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/IO-71?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12738366#action_12738366
 ] 

Jukka Zitting commented on IO-71:
-

On further thought I think it's inevitable that the extra thread is needed for 
a truly robust solution. I'll take another look at the patches and see if I can 
merge the two approaches.
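
For context, the general technique under discussion looks roughly like this (a 
generic sketch with java.io piped streams, not the attached PipedUtils code):

{code}
// Expose the output of a producer (here GZIPOutputStream) as an InputStream:
// the producer runs in its own thread and writes into one end of a pipe.
public static InputStream gzipAsInputStream(final byte[] data) throws IOException {
    PipedInputStream in = new PipedInputStream();
    final PipedOutputStream out = new PipedOutputStream(in);
    new Thread(new Runnable() {
        public void run() {
            try {
                GZIPOutputStream gzip = new GZIPOutputStream(out);
                gzip.write(data);
                gzip.close();  // closing the pipe signals EOF to the reader
            } catch (IOException e) {
                // a robust version must propagate this error to the reading side
            }
        }
    }).start();
    return in;
}
{code}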

 [io] PipedUtils
 ---

 Key: IO-71
 URL: https://issues.apache.org/jira/browse/IO-71
 Project: Commons IO
  Issue Type: Improvement
  Components: Utilities
 Environment: Operating System: All
 Platform: All
Reporter: David Smiley
Priority: Minor
 Fix For: 2.x

 Attachments: PipedUtils.zip, ReverseFilterOutputStream.patch


 I developed some nifty code that takes an OutputStream and sort of reverses 
 it as if it were an InputStream. Error passing and handling close is dealt 
 with. It needs another thread to do the work, which runs in parallel. It uses 
 piped streams. I created this because I had to conform GZIPOutputStream to my 
 framework, which demanded an InputStream.
 See URL to source.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (COMPRESS-78) Filename suffix mappings for bzip2

2009-06-15 Thread Jukka Zitting (JIRA)
Filename suffix mappings for bzip2
--

 Key: COMPRESS-78
 URL: https://issues.apache.org/jira/browse/COMPRESS-78
 Project: Commons Compress
  Issue Type: New Feature
Reporter: Jukka Zitting
Priority: Minor
 Fix For: 1.1


In COMPRESS-68 we added support for common gzip filename suffixes. I'd like to 
do the same for bzip2.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (COMPRESS-78) Filename suffix mappings for bzip2

2009-06-15 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/COMPRESS-78?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting updated COMPRESS-78:
--

Attachment: BZip2Utils.patch

Attached a patch with a proposed BZip2Utils class modeled after the existing 
GzipUtils class.

 Filename suffix mappings for bzip2
 --

 Key: COMPRESS-78
 URL: https://issues.apache.org/jira/browse/COMPRESS-78
 Project: Commons Compress
  Issue Type: New Feature
Reporter: Jukka Zitting
Priority: Minor
 Fix For: 1.1

 Attachments: BZip2Utils.patch


 In COMPRESS-68 we added support for common gzip filename suffixes. I'd like 
 to do the same for bzip2.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (COMPRESS-72) Move acknowledgements from NOTICE to README

2009-05-21 Thread Jukka Zitting (JIRA)
Move acknowledgements from NOTICE to README
---

 Key: COMPRESS-72
 URL: https://issues.apache.org/jira/browse/COMPRESS-72
 Project: Commons Compress
  Issue Type: Improvement
Reporter: Jukka Zitting
Priority: Minor


The NOTICE.txt file in commons-compress contains the following entries:

Original BZip2 classes contributed by Keiron Liddle
kei...@aftexsw.com, Aftex Software to the Apache Ant project

Original Tar classes from contributors of the Apache Ant project

Original Zip classes from contributors of the Apache Ant project

Original CPIO classes contributed by Markus Kuss and the jRPM project
(jrpm.sourceforge.net)

It's good that we acknowledge contributions, but having those entries in the 
NOTICE file is not appropriate unless the licensing of the original 
contributions explicitly required such attribution notices.

I suggest that we move these entries to a README.txt file.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (COMPRESS-72) Move acknowledgements from NOTICE to README

2009-05-21 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/COMPRESS-72?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting updated COMPRESS-72:
--

Description: 
The NOTICE.txt file in commons-compress contains the following entries:

{noformat}
Original BZip2 classes contributed by Keiron Liddle
kei...@aftexsw.com, Aftex Software to the Apache Ant project

Original Tar classes from contributors of the Apache Ant project

Original Zip classes from contributors of the Apache Ant project

Original CPIO classes contributed by Markus Kuss and the jRPM project
(jrpm.sourceforge.net)
{noformat}

It's good that we acknowledge contributions, but having those entries in the 
NOTICE file is not appropriate unless the licensing of the original 
contributions explicitly required such attribution notices.

I suggest that we move these entries to a README.txt file.

  was:
The NOTICE.txt file in commons-compress contains the following entries:

Original BZip2 classes contributed by Keiron Liddle
kei...@aftexsw.com, Aftex Software to the Apache Ant project

Original Tar classes from contributors of the Apache Ant project

Original Zip classes from contributors of the Apache Ant project

Original CPIO classes contributed by Markus Kuss and the jRPM project
(jrpm.sourceforge.net)

It's good that we acknowledge contributions, but having those entries in the 
NOTICE file is not appropriate unless the licensing of the original 
contributions explicitly required such attribution notices.

I suggest that we move these entries to a README.txt file.


 Move acknowledgements from NOTICE to README
 ---

 Key: COMPRESS-72
 URL: https://issues.apache.org/jira/browse/COMPRESS-72
 Project: Commons Compress
  Issue Type: Improvement
Reporter: Jukka Zitting
Priority: Minor

 The NOTICE.txt file in commons-compress contains the following entries:
 {noformat}
 Original BZip2 classes contributed by Keiron Liddle
 kei...@aftexsw.com, Aftex Software to the Apache Ant project
 Original Tar classes from contributors of the Apache Ant project
 Original Zip classes from contributors of the Apache Ant project
 Original CPIO classes contributed by Markus Kuss and the jRPM project
 (jrpm.sourceforge.net)
 {noformat}
 It's good that we acknowledge contributions, but having those entries in the 
 NOTICE file is not appropriate unless the licensing of the original 
 contributions explicitly required such attribution notices.
 I suggest that we move these entries to a README.txt file.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (COMPRESS-72) Move acknowledgements from NOTICE to README

2009-05-21 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/COMPRESS-72?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12711556#action_12711556
 ] 

Jukka Zitting commented on COMPRESS-72:
---

Do we have trails for the BZip2 and CPIO contributions?

 Move acknowledgements from NOTICE to README
 ---

 Key: COMPRESS-72
 URL: https://issues.apache.org/jira/browse/COMPRESS-72
 Project: Commons Compress
  Issue Type: Improvement
Reporter: Jukka Zitting
Priority: Minor

 The NOTICE.txt file in commons-compress contains the following entries:
 Original BZip2 classes contributed by Keiron Liddle
 kei...@aftexsw.com, Aftex Software to the Apache Ant project
 Original Tar classes from contributors of the Apache Ant project
 Original Zip classes from contributors of the Apache Ant project
 Original CPIO classes contributed by Markus Kuss and the jRPM project
 (jrpm.sourceforge.net)
 It's good that we acknowledge contributions, but having those entries in the 
 NOTICE file is not appropriate unless the licensing of the original 
 contributions explicitly required such attribution notices.
 I suggest that we move these entries to a README.txt file.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (COMPRESS-68) Filename suffix mappings for compression formats

2009-04-02 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/COMPRESS-68?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting updated COMPRESS-68:
--

Description: 
There are many file name suffix conventions like .tgz for gzipped .tar files 
and .svgz for gzipped .svg files. It would be useful if Commons Compress knew 
about these conventions and provided tools to help client applications to use 
these conventions.

For example in Apache Tika we currently have the following custom code to 
deduce the original filename from a gzipped file:

{code}
if (name.endsWith(".tgz")) {
name = name.substring(0, name.length() - 4) + ".tar";
} else if (name.endsWith(".gz") || name.endsWith("-gz")) {
name = name.substring(0, name.length() - 3);
} else if (name.toLowerCase().endsWith(".svgz")) {
name = name.substring(0, name.length() - 1);
} else if (name.toLowerCase().endsWith(".wmz")) {
name = name.substring(0, name.length() - 1) + "f";
} else if (name.toLowerCase().endsWith(".emz")) {
name = name.substring(0, name.length() - 1) + "f";
}
{code}

It would be nice if we instead could do something like this:

{code}
name = GzipUtils.getGunzipFilename(name);
{code}



  was:
There are many file name suffix conventions like .tgz for gzipped .tar files 
and .svgz for gzipped .svg files. It would be useful if Commons Compress knew 
about these conventions and provided tools to help client applications to use 
these conventions.

For example in Apache Tika we currently have the following custom code to 
deduce the original filename from a gzipped file:

{code}
if (name.endsWith(".tgz")) {
name = name.substring(0, name.length() - 4) + ".tar";
} else if (name.endsWith(".gz") || name.endsWith("-gz")) {
name = name.substring(0, name.length() - 3);
} else if (name.toLowerCase().endsWith(".svgz")) {
name = name.substring(0, name.length() - 1);
} else if (name.toLowerCase().endsWith(".wmz")) {
name = name.substring(0, name.length() - 1) + "f";
} else if (name.toLowerCase().endsWith(".emz")) {
name = name.substring(0, name.length() - 1) + "f";
}
{code}

It would be nice if we instead could do something like this:

{code}
name = GzipUtils.getGunzipFilename(name);
{code}




 Filename suffix mappings for compression formats
 

 Key: COMPRESS-68
 URL: https://issues.apache.org/jira/browse/COMPRESS-68
 Project: Commons Compress
  Issue Type: New Feature
Reporter: Jukka Zitting
Priority: Minor

 There are many file name suffix conventions like .tgz for gzipped .tar files 
 and .svgz for gzipped .svg files. It would be useful if Commons Compress knew 
 about these conventions and provided tools to help client applications to use 
 these conventions.
 For example in Apache Tika we currently have the following custom code to 
 deduce the original filename from a gzipped file:
 {code}
 if (name.endsWith(".tgz")) {
 name = name.substring(0, name.length() - 4) + ".tar";
 } else if (name.endsWith(".gz") || name.endsWith("-gz")) {
 name = name.substring(0, name.length() - 3);
 } else if (name.toLowerCase().endsWith(".svgz")) {
 name = name.substring(0, name.length() - 1);
 } else if (name.toLowerCase().endsWith(".wmz")) {
 name = name.substring(0, name.length() - 1) + "f";
 } else if (name.toLowerCase().endsWith(".emz")) {
 name = name.substring(0, name.length() - 1) + "f";
 }
 {code}
 It would be nice if we instead could do something like this:
 {code}
 name = GzipUtils.getGunzipFilename(name);
 {code}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (COMPRESS-68) Filename suffix mappings for compression formats

2009-04-02 Thread Jukka Zitting (JIRA)
Filename suffix mappings for compression formats


 Key: COMPRESS-68
 URL: https://issues.apache.org/jira/browse/COMPRESS-68
 Project: Commons Compress
  Issue Type: New Feature
Reporter: Jukka Zitting
Priority: Minor


There are many file name suffix conventions like .tgz for gzipped .tar files 
and .svgz for gzipped .svg files. It would be useful if Commons Compress knew 
about these conventions and provided tools to help client applications to use 
these conventions.

For example in Apache Tika we currently have the following custom code to 
deduce the original filename from a gzipped file:

{code}
if (name.endsWith(".tgz")) {
name = name.substring(0, name.length() - 4) + ".tar";
} else if (name.endsWith(".gz") || name.endsWith("-gz")) {
name = name.substring(0, name.length() - 3);
} else if (name.toLowerCase().endsWith(".svgz")) {
name = name.substring(0, name.length() - 1);
} else if (name.toLowerCase().endsWith(".wmz")) {
name = name.substring(0, name.length() - 1) + "f";
} else if (name.toLowerCase().endsWith(".emz")) {
name = name.substring(0, name.length() - 1) + "f";
}
{code}

It would be nice if we instead could do something like this:

{code}
name = GzipUtils.getGunzipFilename(name);
{code}



-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (COMPRESS-68) Filename suffix mappings for compression formats

2009-04-02 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/COMPRESS-68?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting updated COMPRESS-68:
--

Attachment: GzipUtils.patch

Added a patch for such utility code in a new GzipUtils class.
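
As an illustration of the idea only (not the attached patch; the method name follows the issue text and the mappings are the ones listed above), such a suffix-mapping utility could look roughly like this:

{code}
import java.util.LinkedHashMap;
import java.util.Locale;
import java.util.Map;

// Illustrative sketch: maps known "gzipped" suffixes to their uncompressed
// counterparts. Not the GzipUtils code committed with the patch.
public class GzipSuffixMapping {

    private static final Map<String, String> SUFFIXES = new LinkedHashMap<String, String>();
    static {
        SUFFIXES.put(".tgz", ".tar");   // gzipped tar archive
        SUFFIXES.put(".svgz", ".svg");  // gzipped SVG image
        SUFFIXES.put(".wmz", ".wmf");
        SUFFIXES.put(".emz", ".emf");
        SUFFIXES.put(".gz", "");        // plain gzip: just drop the suffix
        SUFFIXES.put("-gz", "");
    }

    /** Maps the name of a gzipped file to the name of the uncompressed file. */
    public static String getGunzipFilename(String name) {
        String lower = name.toLowerCase(Locale.ENGLISH);
        for (Map.Entry<String, String> entry : SUFFIXES.entrySet()) {
            String suffix = entry.getKey();
            if (lower.endsWith(suffix)) {
                return name.substring(0, name.length() - suffix.length()) + entry.getValue();
            }
        }
        return name;   // no known suffix, return the name unchanged
    }
}
{code}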

 Filename suffix mappings for compression formats
 

 Key: COMPRESS-68
 URL: https://issues.apache.org/jira/browse/COMPRESS-68
 Project: Commons Compress
  Issue Type: New Feature
Reporter: Jukka Zitting
Priority: Minor
 Attachments: GzipUtils.patch


 There are many file name suffix conventions like .tgz for gzipped .tar files 
 and .svgz for gzipped .svg files. It would be useful if Commons Compress knew 
 about these conventions and provided tools to help client applications to use 
 these conventions.
 For example in Apache Tika we currently have the following custom code to 
 deduce the original filename from a gzipped file:
 {code}
 if (name.endsWith(".tgz")) {
 name = name.substring(0, name.length() - 4) + ".tar";
 } else if (name.endsWith(".gz") || name.endsWith("-gz")) {
 name = name.substring(0, name.length() - 3);
 } else if (name.toLowerCase().endsWith(".svgz")) {
 name = name.substring(0, name.length() - 1);
 } else if (name.toLowerCase().endsWith(".wmz")) {
 name = name.substring(0, name.length() - 1) + "f";
 } else if (name.toLowerCase().endsWith(".emz")) {
 name = name.substring(0, name.length() - 1) + "f";
 }
 {code}
 It would be nice if we instead could do something like this:
 {code}
 name = GzipUtils.getGunzipFilename(name);
 {code}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (COMPRESS-66) Document null return value of ArchiveInputStream.getNextEntry

2009-03-30 Thread Jukka Zitting (JIRA)
Document null return value of ArchiveInputStream.getNextEntry
-

 Key: COMPRESS-66
 URL: https://issues.apache.org/jira/browse/COMPRESS-66
 Project: Commons Compress
  Issue Type: Improvement
Reporter: Jukka Zitting
Priority: Trivial


The ArchiveInputStream.getNextEntry method should mention that the return value 
will be null when there are no more entries in the archive stream.

{noformat}
Index: 
src/main/java/org/apache/commons/compress/archivers/ArchiveInputStream.java
===
--- src/main/java/org/apache/commons/compress/archivers/ArchiveInputStream.java 
(revision 760154)
+++ src/main/java/org/apache/commons/compress/archivers/ArchiveInputStream.java 
(working copy)
@@ -43,8 +43,10 @@
 private static final int BYTE_MASK = 0xFF;

 /**
- * Returns the next Archive Entry in this Stream.
- * @return the next entry
+ * Returns the next Archive Entry in this Stream.
+ *
+ * @return the next entry,
+ * or <code>null</code> if there are no more entries
  * @throws IOException if the next entry could not be read
  */
 public abstract ArchiveEntry getNextEntry() throws IOException;
{noformat}
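
As a usage illustration only (the file name and archive format below are placeholders), a caller relying on the documented null return value would typically loop like this:

{code}
import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.InputStream;

import org.apache.commons.compress.archivers.ArchiveEntry;
import org.apache.commons.compress.archivers.ArchiveInputStream;
import org.apache.commons.compress.archivers.ArchiveStreamFactory;

public class ListEntries {
    public static void main(String[] args) throws Exception {
        InputStream in = new BufferedInputStream(new FileInputStream("archive.tar"));
        ArchiveInputStream archive =
                new ArchiveStreamFactory().createArchiveInputStream("tar", in);
        try {
            ArchiveEntry entry;
            // getNextEntry() returns null once the archive has no more entries
            while ((entry = archive.getNextEntry()) != null) {
                System.out.println(entry.getName());
            }
        } finally {
            archive.close();
        }
    }
}
{code}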



-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (COMPRESS-66) Document null return value of ArchiveInputStream.getNextEntry

2009-03-30 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/COMPRESS-66?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting resolved COMPRESS-66.
---

   Resolution: Fixed
Fix Version/s: 1.0
 Assignee: Jukka Zitting

I took the liberty of committing this javadoc fix in revision 760170.

 Document null return value of ArchiveInputStream.getNextEntry
 -

 Key: COMPRESS-66
 URL: https://issues.apache.org/jira/browse/COMPRESS-66
 Project: Commons Compress
  Issue Type: Improvement
Reporter: Jukka Zitting
Assignee: Jukka Zitting
Priority: Trivial
 Fix For: 1.0


 The ArchiveInputStream.getNextEntry method should mention that the return 
 value will be null when there are no more entries in the archive stream.
 {noformat}
 Index: 
 src/main/java/org/apache/commons/compress/archivers/ArchiveInputStream.java
 ===
 --- 
 src/main/java/org/apache/commons/compress/archivers/ArchiveInputStream.java 
 (revision 760154)
 +++ 
 src/main/java/org/apache/commons/compress/archivers/ArchiveInputStream.java 
 (working copy)
 @@ -43,8 +43,10 @@
  private static final int BYTE_MASK = 0xFF;
  /**
 - * Returns the next Archive Entry in this Stream.
 - * @return the next entry
 + * Returns the next Archive Entry in this Stream.
 + *
 + * @return the next entry,
 + * or <code>null</code> if there are no more entries
   * @throws IOException if the next entry could not be read
   */
  public abstract ArchiveEntry getNextEntry() throws IOException;
 {noformat}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (COMPRESS-53) Remove src/main/resources

2009-03-26 Thread Jukka Zitting (JIRA)
Remove src/main/resources
-

 Key: COMPRESS-53
 URL: https://issues.apache.org/jira/browse/COMPRESS-53
 Project: Commons Compress
  Issue Type: Improvement
Reporter: Jukka Zitting
Priority: Minor


The src/main/resources directory currently contains a copy of the bla.zip file 
from src/test/resources. The test file most likely should not be included in 
the resulting commons-compress jar file. In fact it isn't, since the 
commons-parent:11 parent POM overrides the default resources settings, but it's 
still confusing to have the test file under src/main.

I would simply remove the entire src/main/resources directory as we're not 
using it for anything and the current Maven settings won't use the directory in 
any case.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SANDBOX-286) BZip2CompressorInputStream doesn't work if wrapped into InputStreamReader

2009-02-24 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/SANDBOX-286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12676226#action_12676226
 ] 

Jukka Zitting commented on SANDBOX-286:
---

What do you mean by "doesn't work"? Are you unable to read anything from the 
InputStreamReader that wraps the bzip2 stream?

A test case that illustrates the problem would be helpful.
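
Along the lines requested above, a sketch of such a test (package names assume the current Commons Compress layout; the test data and charset are arbitrary):

{code}
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStreamReader;

import org.apache.commons.compress.compressors.bzip2.BZip2CompressorInputStream;
import org.apache.commons.compress.compressors.bzip2.BZip2CompressorOutputStream;

import junit.framework.TestCase;

public class BZip2ReaderTest extends TestCase {

    public void testReadLineThroughInputStreamReader() throws Exception {
        // Build a small bzip2 stream in memory.
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        BZip2CompressorOutputStream bzip2 = new BZip2CompressorOutputStream(buffer);
        bzip2.write("hello, world\n".getBytes("UTF-8"));
        bzip2.close();

        // Wrap the decompressing stream in a Reader, as described in the report.
        BufferedReader reader = new BufferedReader(new InputStreamReader(
                new BZip2CompressorInputStream(
                        new ByteArrayInputStream(buffer.toByteArray())), "UTF-8"));
        assertEquals("hello, world", reader.readLine());
        reader.close();
    }
}
{code}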

 BZip2CompressorInputStream doesn't work if wrapped into InputStreamReader
 -

 Key: SANDBOX-286
 URL: https://issues.apache.org/jira/browse/SANDBOX-286
 Project: Commons Sandbox
  Issue Type: Bug
  Components: Compress
Affects Versions: Nightly Builds
 Environment: Unix
Reporter: Ingo Rockel

 The BZip2CompressorInputStream doesn't work if wrapped into InputStreamReader 
 because it doesn't implement public int available() from InputStream.
 Adding the following method to BZip2CompressorInputStream fixes the problem:
 public int available() throws IOException {
 return(in.available());
 }

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SANDBOX-286) BZip2CompressorInputStream doesn't work if wrapped into InputStreamReader

2009-02-24 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/SANDBOX-286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12676230#action_12676230
 ] 

Jukka Zitting commented on SANDBOX-286:
---

I don't see why you need to call the ready() method, especially since 
readLine() may block regardless of what ready() says. Why not the following?

{code}
String testString = reader.readLine();
while (testString != null) {
processLine(testString);
testString = reader.readLine();
}
{code}

It would be nice to support non-blocking reads from compressed streams, but 
that's more of a new feature than a bug.

 BZip2CompressorInputStream doesn't work if wrapped into InputStreamReader
 -

 Key: SANDBOX-286
 URL: https://issues.apache.org/jira/browse/SANDBOX-286
 Project: Commons Sandbox
  Issue Type: Bug
  Components: Compress
Affects Versions: Nightly Builds
 Environment: Unix
Reporter: Ingo Rockel

 The BZip2CompressorInputStream doesn't work if wrapped into InputStreamReader 
 because it doesn't implement public int available() from InputStream.
 Adding the following method to BZip2CompressorInputStream fixes the problem:
 public int available() throws IOException {
 return(in.available());
 }

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (IO-196) Occasional FileSystemObserver test failures

2009-02-17 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/IO-196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12674251#action_12674251
 ] 

Jukka Zitting commented on IO-196:
--

Also the following test fails occasionally:

FilesystemObserverTestCase
  testFileDelete :
 junit.framework.AssertionFailedError
 junit.framework.AssertionFailedError: E[0 0 0 0 0 1]: No. of directories 
changed expected:<1> but was:<0>
   at junit.framework.Assert.fail(Assert.java:47)
   at junit.framework.Assert.failNotEquals(Assert.java:282)
   at junit.framework.Assert.assertEquals(Assert.java:64)
   at junit.framework.Assert.assertEquals(Assert.java:201)
   at 
org.apache.commons.io.monitor.FilesystemObserverTestCase.checkCollectionSizes(FilesystemObserverTestCase.java:424)
   at 
org.apache.commons.io.monitor.FilesystemObserverTestCase.testFileDelete(FilesystemObserverTestCase.java:324)

 Occasional FileSystemObserver test failures
 ---

 Key: IO-196
 URL: https://issues.apache.org/jira/browse/IO-196
 Project: Commons IO
  Issue Type: Bug
Reporter: Jukka Zitting
Priority: Minor

 The FilesystemObserverTestCase method testFileCreate() fails occasionally in 
 the Continuum build at 
 http://vmbuild.apache.org/continuum/projectView.action?projectId=155. The 
 failure, when it happens, is:
 FilesystemObserverTestCase
   testFileCreate :
  junit.framework.AssertionFailedError
  junit.framework.AssertionFailedError: E[0 0 0 1 0 0]: No. of directories 
 changed expected:<1> but was:<0>
at junit.framework.Assert.fail(Assert.java:47)
at junit.framework.Assert.failNotEquals(Assert.java:282)
at junit.framework.Assert.assertEquals(Assert.java:64)
at junit.framework.Assert.assertEquals(Assert.java:201)
at 
 org.apache.commons.io.monitor.FilesystemObserverTestCase.checkCollectionSizes(FilesystemObserverTestCase.java:424)
at 
 org.apache.commons.io.monitor.FilesystemObserverTestCase.testFileCreate(FilesystemObserverTestCase.java:203)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (IO-196) Occasional FileSystemObserver test failures

2009-02-14 Thread Jukka Zitting (JIRA)
Occasional FileSystemObserver test failures
---

 Key: IO-196
 URL: https://issues.apache.org/jira/browse/IO-196
 Project: Commons IO
  Issue Type: Bug
Reporter: Jukka Zitting
Priority: Minor


The FilesystemObserverTestCase method testFileCreate() fails occasionally in 
the Continuum build at 
http://vmbuild.apache.org/continuum/projectView.action?projectId=155. The 
failure, when it happens, is:

FilesystemObserverTestCase
  testFileCreate :
 junit.framework.AssertionFailedError
 junit.framework.AssertionFailedError: E[0 0 0 1 0 0]: No. of directories 
changed expected:<1> but was:<0>
   at junit.framework.Assert.fail(Assert.java:47)
   at junit.framework.Assert.failNotEquals(Assert.java:282)
   at junit.framework.Assert.assertEquals(Assert.java:64)
   at junit.framework.Assert.assertEquals(Assert.java:201)
   at 
org.apache.commons.io.monitor.FilesystemObserverTestCase.checkCollectionSizes(FilesystemObserverTestCase.java:424)
   at 
org.apache.commons.io.monitor.FilesystemObserverTestCase.testFileCreate(FilesystemObserverTestCase.java:203)



-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (IO-125) wrong groupId value in pom.xml

2009-02-14 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/IO-125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12673495#action_12673495
 ] 

Jukka Zitting commented on IO-125:
--

{quote}
Properly moving to org.apache.commons means a bunch of redirects being put in 
the maven repository.
{quote}

Do we need those redirects? I think the 1.x releases could well remain at their 
current location at commons-io, and we'd just put new 2.x releases in 
org/apache/commons. An upgrade from 1.4 to 2.0 would require changing the 
dependency setting from

{code:xml}
<dependency>
  <groupId>commons-io</groupId>
  <artifactId>commons-io</artifactId>
  <version>1.4</version>
</dependency>
{code}

to

{code:xml}
<dependency>
  <groupId>org.apache.commons</groupId>
  <artifactId>commons-io</artifactId>
  <version>2.0</version>
</dependency>
{code}



 wrong groupId value in pom.xml
 --

 Key: IO-125
 URL: https://issues.apache.org/jira/browse/IO-125
 Project: Commons IO
  Issue Type: Bug
Reporter: Piotr Czerwinski
 Fix For: 2.x


 groupId for the project is set to commons-io. I believe it should be 
 org.apache.commons.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (IO-46) [io] Find file in classpath

2009-02-14 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/IO-46?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12673515#action_12673515
 ] 

Jukka Zitting commented on IO-46:
-

{quote}
For example to place some input files on the test class path to fetch from when 
invoking File-based APIs.
{quote}

That seems a bit fragile, as the resources could also be contained in a jar 
file included in the classpath. The only case I can see when this is not a 
potential issue is when the application is in control of the classpath, in 
which case it could just as well access the files directly instead of going 
through the class loader.

I'm also not so eager to introduce methods that make it easier to modify 
resources on the classpath...

Perhaps a better alternative would be a method that takes a classpath resource 
and returns a temporary file that contains the same data. This would (at some 
performance cost) satisfy the requirements of File-based APIs without worrying 
about the complexities of class loading.
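
A sketch of that alternative (class and method names are hypothetical, not an existing Commons IO API):

{code}
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class ClasspathFiles {

    /** Copies a classpath resource into a temporary file for File-based APIs. */
    public static File toTemporaryFile(ClassLoader loader, String resource)
            throws IOException {
        InputStream input = loader.getResourceAsStream(resource);
        if (input == null) {
            throw new FileNotFoundException(resource);
        }
        try {
            File file = File.createTempFile("resource-", ".tmp");
            file.deleteOnExit();
            OutputStream output = new FileOutputStream(file);
            try {
                byte[] buffer = new byte[4096];
                int n;
                while ((n = input.read(buffer)) != -1) {
                    output.write(buffer, 0, n);
                }
            } finally {
                output.close();
            }
            return file;
        } finally {
            input.close();
        }
    }
}
{code}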

 [io] Find file in classpath
 ---

 Key: IO-46
 URL: https://issues.apache.org/jira/browse/IO-46
 Project: Commons IO
  Issue Type: Improvement
  Components: Utilities
Affects Versions: 1.1
 Environment: Operating System: other
 Platform: Other
Reporter: David Leal
Priority: Minor
 Fix For: 2.x


 Just to suggest adding a method like this: 
  public File findFileInClasspath(String fileName) throws 
 FileNotFoundException 
 {
 URL url = getClass().getClassLoader().getResource(fileName);
 if (url == null){
 throw new FileNotFoundException(fileName);
 }
 return new File(url.getFile());
 }

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (IO-192) Tagged input and output streams

2009-02-11 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/IO-192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12672616#action_12672616
 ] 

Jukka Zitting commented on IO-192:
--

I like how you're extending the functionality to the Reader and Writer classes.

{quote}
1) Its a useful feature to be able to handle exceptions - not just in this 
use-case for tagging, but generally so IMO it would be good to move the 
exception handling into the Proxy stream implementations. We could provide a 
protected handleException(IOException) method that by default just re-throws 
the exception to keep compatibility, but a allows people to override for their 
own custom exception handling.
{quote}

Good idea.

My only issue is with the return value in the handleException() method of 
ProxyInputStream and ProxyReader. For example the skip() method should never 
return -1 but there is no way (apart from parsing the stack trace) for 
handleException() to know which method invoked it and what return value would 
be appropriate. I'd rather have the handleException() method return nothing, 
and just add fixed return -1 or return 0 statements where needed. A 
subclass that needs to modify the return value based on a thrown exception 
should override the specific method with custom processing.
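
To make that calling convention concrete, a sketch of the pattern argued for here (names come from the discussion, not necessarily the final Commons IO API):

{code}
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Illustrative only: handleException() returns nothing, and each overridden
// method supplies its own fixed fallback value if the handler does not re-throw.
abstract class ExceptionHandlingProxyInputStream extends FilterInputStream {

    protected ExceptionHandlingProxyInputStream(InputStream proxy) {
        super(proxy);
    }

    /** Default behaviour is to re-throw; subclasses may swallow or translate. */
    protected void handleException(IOException e) throws IOException {
        throw e;
    }

    public int read() throws IOException {
        try {
            return in.read();
        } catch (IOException e) {
            handleException(e);
            return -1;   // EOF-style fallback is appropriate for read()
        }
    }

    public long skip(long count) throws IOException {
        try {
            return in.skip(count);
        } catch (IOException e) {
            handleException(e);
            return 0;    // skip() should never return -1
        }
    }
}
{code}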

{quote}
2) Exceptions are Serializable and many stream implementations are not so I 
have some concern about holding a reference to the stream in the 
TaggedIOException. Also this could cause references to the stream being held 
longer than previously by the application and prevent/delay garbage collection. 
An alternative could be to store the identity hash code of the tag object 
instead.
{quote}

Good point, though I'm not so sure about using the identity hash for this. For 
most (all?) JVMs it will be unique to the tag object (at least as long as the 
object lives), but there's no guarantee that this actually is the case. Perhaps 
the tagged proxy classes should have a private final Object tag = new 
Object(); tag object for this purpose. This would make the related IOUtils 
methods more complex, but see below for more on that.

{quote}
3) The current solution requires users to reference the concrete tagged stream 
implementations. While this is OK in your simple example within a single method 
its not good practice generally and will either encourage people to pollute 
their API with these tagged streams or require additional casting.
{quote}

I don't see a use case where you'd need to use casts or pollute APIs with these 
classes.

{quote}
I suggest we move the code for handling these streams into IOUtils - which also 
makes it more generic and available to re-use for other tagging requirements, 
not just by the throwing stream.
{quote}

I would rather put such static generic methods directly on the 
TaggedIOException class. This would make it easier to reuse just that class.

In any case I would keep the current isCauseOf() and throwIfCauseOf() methods 
on the tagged stream classes, as IMHO the instance method call is clearer than 
a static IOUtils (or TaggedIOException) method call.


 Tagged input and output streams
 ---

 Key: IO-192
 URL: https://issues.apache.org/jira/browse/IO-192
 Project: Commons IO
  Issue Type: New Feature
  Components: Streams/Writers
Reporter: Jukka Zitting
Assignee: Jukka Zitting
Priority: Minor
 Fix For: 1.5

 Attachments: IO-192-tagged-stream-changes.patch, IO-192.patch


 I'd like to introduce two new proxy streams, TaggedInputStream and 
 TaggedOutputStream, that tag all exceptions thrown by the proxied streams. 
 The goal is to make it easier to detect the source of an IOException when 
 you're dealing with multiple different streams. For example:
 {code}
 InputStream input = ...;
 OutputStream output = ...;
 TaggedOutputStream proxy = new TaggedOutputStream(output);
 try {
 IOUtils.copy(input, proxy);
 } catch (IOException e) {
 if (proxy.isTagged(e)) {
 // Could not write to the output stream
 // Perhaps we can handle that error somehow (retry, cancel?)
 e.getCause(); // gives the original exception from the proxied stream
 } else {
 // Could not read from the input stream, nothing we can do
 throw e;
 }
 }
 {code}
 I'm working on a patch to implement such a feature.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (IO-193) Broken input and output streams

2009-02-06 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/IO-193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting resolved IO-193.
--

Resolution: Fixed
  Assignee: Jukka Zitting

Patch applied in revision 741531.

 Broken input and output streams
 ---

 Key: IO-193
 URL: https://issues.apache.org/jira/browse/IO-193
 Project: Commons IO
  Issue Type: New Feature
  Components: Streams/Writers
Reporter: Jukka Zitting
Assignee: Jukka Zitting
Priority: Minor
 Fix For: 1.5

 Attachments: IO-193.patch


 When testing error handling in code that uses streams one needs a way to 
 simulate an IOException being thrown by a stream. Typically this means using 
 a custom stream class that throws the desired exception. To avoid having to 
 implement such custom classes over and over again for multiple projects, I'd 
 like to introduce such classes in Commons IO.
 The proposed BrokenInputStream and BrokenOutputStream always throw a given 
 IOException from all InputStream and OutputStream methods that declare such 
 exceptions.
 For example, the following fictional test code:
 {code}
 Result result = processStream(new InputStream() {
 public int read() throws IOException {
 throw new IOException("test");
 }
 });
 assertEquals(PROCESSING_FAILED, result);
 {code}
 could be replaced with:
 {code}
 Result result = processStream(new BrokenInputStream());
 assertEquals(PROCESSING_FAILED, result);
 {code}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (IO-192) Tagged input and output streams

2009-02-06 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/IO-192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting resolved IO-192.
--

   Resolution: Fixed
Fix Version/s: 1.5
 Assignee: Jukka Zitting

I committed a much improved version of the code in revision 741562.

 Tagged input and output streams
 ---

 Key: IO-192
 URL: https://issues.apache.org/jira/browse/IO-192
 Project: Commons IO
  Issue Type: New Feature
  Components: Streams/Writers
Reporter: Jukka Zitting
Assignee: Jukka Zitting
Priority: Minor
 Fix For: 1.5

 Attachments: IO-192.patch


 I'd like to introduce two new proxy streams, TaggedInputStream and 
 TaggedOutputStream, that tag all exceptions thrown by the proxied streams. 
 The goal is to make it easier to detect the source of an IOException when 
 you're dealing with multiple different streams. For example:
 {code}
 InputStream input = ...;
 OutputStream output = ...;
 TaggedOutputStream proxy = new TaggedOutputStream(output);
 try {
 IOUtils.copy(input, proxy);
 } catch (IOException e) {
 if (proxy.isTagged(e)) {
 // Could not write to the output stream
 // Perhaps we can handle that error somehow (retry, cancel?)
 e.getCause(); // gives the original exception from the proxied stream
 } else {
 // Could not read from the input stream, nothing we can do
 throw e;
 }
 }
 {code}
 I'm working on a patch to implement such a feature.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (IO-192) Tagged input and output streams

2009-02-05 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/IO-192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting updated IO-192:
-

Attachment: IO-192.patch

This came up on the Tika mailing list, so I'm attaching the current state of 
the patch I have. It still needs tests and more javadocs, but the basic 
functionality should already be in place.

 Tagged input and output streams
 ---

 Key: IO-192
 URL: https://issues.apache.org/jira/browse/IO-192
 Project: Commons IO
  Issue Type: New Feature
  Components: Streams/Writers
Reporter: Jukka Zitting
Priority: Minor
 Attachments: IO-192.patch


 I'd like to introduce two new proxy streams, TaggedInputStream and 
 TaggedOutputStream, that tag all exceptions thrown by the proxied streams. 
 The goal is to make it easier to detect the source of an IOException when 
 you're dealing with multiple different streams. For example:
 {code}
 InputStream input = ...;
 OutputStream output = ...;
 TaggedOutputStream proxy = new TaggedOutputStream(output);
 try {
 IOUtils.copy(input, proxy);
 } catch (IOException e) {
 if (proxy.isTagged(e)) {
 // Could not write to the output stream
 // Perhaps we can handle that error somehow (retry, cancel?)
 e.getCause(); // gives the original exception from the proxied stream
 } else {
 // Could not read from the input stream, nothing we can do
 throw e;
 }
 }
 {code}
 I'm working on a patch to implement such a feature.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (IO-193) Broken input and output streams

2009-02-05 Thread Jukka Zitting (JIRA)
Broken input and output streams
---

 Key: IO-193
 URL: https://issues.apache.org/jira/browse/IO-193
 Project: Commons IO
  Issue Type: New Feature
  Components: Streams/Writers
Reporter: Jukka Zitting
Priority: Minor
 Fix For: 1.5


When testing error handling in code that uses streams one needs a way to 
simulate an IOException being thrown by a stream. Typically this means using a 
custom stream class that throws the desired exception. To avoid having to 
implement such custom classes over and over again for multiple projects, I'd 
like to introduce such classes in Commons IO.

The proposed BrokenInputStream and BrokenOutputStream always throw a given 
IOException from all InputStream and OutputStream methods that declare such 
exceptions.

For example, the following fictional test code:

{code}
Result result = processStream(new InputStream() {
public int read() throws IOException {
throw new IOException("test");
}
});
assertEquals(PROCESSING_FAILED, result);
{code}

could be replaced with:

{code}
Result result = processStream(new BrokenInputStream());
assertEquals(PROCESSING_FAILED, result);
{code}



-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (IO-193) Broken input and output streams

2009-02-05 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/IO-193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting updated IO-193:
-

Attachment: IO-193.patch

Proposed patch.

 Broken input and output streams
 ---

 Key: IO-193
 URL: https://issues.apache.org/jira/browse/IO-193
 Project: Commons IO
  Issue Type: New Feature
  Components: Streams/Writers
Reporter: Jukka Zitting
Priority: Minor
 Fix For: 1.5

 Attachments: IO-193.patch


 When testing error handling in code that uses streams one needs a way to 
 simulate an IOException being thrown by a stream. Typically this means using 
 a custom stream class that throws the desired exception. To avoid having to 
 implement such custom classes over and over again for multiple projects, I'd 
 like to introduce such classes in Commons IO.
 The proposed BrokenInputStream and BrokenOutputStream always throw a given 
 IOException from all InputStream and OutputStream methods that declare such 
 exceptions.
 For example, the following fictional test code:
 {code}
 Result result = processStream(new InputStream() {
 public int read() throws IOException {
 throw new IOException("test");
 }
 });
 assertEquals(PROCESSING_FAILED, result);
 {code}
 could be replaced with:
 {code}
 Result result = processStream(new BrokenInputStream());
 assertEquals(PROCESSING_FAILED, result);
 {code}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (IO-192) Tagged input and output streams

2009-01-26 Thread Jukka Zitting (JIRA)
Tagged input and output streams
---

 Key: IO-192
 URL: https://issues.apache.org/jira/browse/IO-192
 Project: Commons IO
  Issue Type: New Feature
  Components: Streams/Writers
Reporter: Jukka Zitting
Priority: Minor


I'd like to introduce two new proxy streams, TaggedInputStream and 
TaggedOutputStream, that tag all exceptions thrown by the proxied streams. The 
goal is to make it easier to detect the source of an IOException when you're 
dealing with multiple different streams. For example:

{code}
InputStream input = ...;
OutputStream output = ...;
TaggedOutputStream proxy = new TaggedOutputStream(output);
try {
IOUtils.copy(input, proxy);
} catch (IOException e) {
if (proxy.isTagged(e)) {
// Could not write to the output stream
// Perhaps we can handle that error somehow (retry, cancel?)
e.getCause(); // gives the original exception from the proxied stream
} else {
// Could not read from the input stream, nothing we can do
throw e;
}
}
{code}

I'm working on a patch to implement such a feature.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (IO-191) Possible improvements using static analysis.

2009-01-22 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/IO-191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12666136#action_12666136
 ] 

Jukka Zitting commented on IO-191:
--

Replying to Peter.

{quote}
I don't imagine multiple patches are easier to apply.
{quote}

As noted above, we probably don't want to apply all the changes in the patch. 
Having smaller patches would make it easier to selectively apply the changes 
you propose.

{quote}
 Changing single character string literals to character literals in string 
 concatenations.
This is a fair comment. If you feel it's worth reviewing on a case-by-case basis 
I am happy to do this.
{quote}

I don't think it's worth the effort, but perhaps I'm missing some really 
convincing reason why we should do this.

{quote}
 why are some parts of the expression strings and other characters.
Is this a question you would like me to answer or you are just raising this as 
a hypothetical question someone might ask.
{quote}

Just a hypothetical question that a future developer that looks at the code 
might think about.


 Possible improvements using static analysis.
 

 Key: IO-191
 URL: https://issues.apache.org/jira/browse/IO-191
 Project: Commons IO
  Issue Type: Improvement
Reporter: Peter Lawrey
Priority: Trivial
 Attachments: commons-io-static-analysis.patch

   Original Estimate: 3h
  Remaining Estimate: 3h



-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (IO-191) Possible improvements using static analysis.

2009-01-22 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/IO-191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12666135#action_12666135
 ] 

Jukka Zitting commented on IO-191:
--

Responding to Sebb's comment first.

{quote}
If one thread creates a new collection whilst another is iterating it, then in 
the absence of synchronisation there is no guarantee what state the other 
thread will next see for the collection.
{quote}

Not true. For example, consider the following pseudocode where two threads, A 
and B, concurrently access the same collection.

{code}
A: iterator = collection.iterator();
A: iterator.next();
B: collection = new Collection();
A: iterator.next();
{code}

A continues to see the contents of the original collection while iterating, 
which IMHO is the only reasonable and deterministic behaviour in such cases. If 
thread B used collection.clear(), thread A would likely fail with a 
ConcurrentModificationException.

However, my objection applies only to cases where the collection is immutable 
after initialization (otherwise the threads would in any case need to worry 
about synchronization). In AndFileFilter this is not the case, so there I think 
using Collection.clear() is actually a valid option. This context is not 
visible in the patch, so I'd rather consider such changes on a case-by-case 
basis instead of as a part of a bigger changeset generated by an analyzer tool.

 Possible improvements using static analysis.
 

 Key: IO-191
 URL: https://issues.apache.org/jira/browse/IO-191
 Project: Commons IO
  Issue Type: Improvement
Reporter: Peter Lawrey
Priority: Trivial
 Attachments: commons-io-static-analysis.patch

   Original Estimate: 3h
  Remaining Estimate: 3h



-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (IO-191) Possible improvements using static analysis.

2009-01-22 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/IO-191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12666159#action_12666159
 ] 

Jukka Zitting commented on IO-191:
--

Sebb, you're right, the collection field should be volatile. My assumption (not 
true for AndFileFilter) was that the collection itself is immutable, so partial 
updates would not have been a problem.

Anyway, my objection on grounds of thread safety is moot since the 
AndFileFilter is not (and probably does not need to be) thread-safe. So making 
the field final and using Collection.clear() is fine by me.

 Possible improvements using static analysis.
 

 Key: IO-191
 URL: https://issues.apache.org/jira/browse/IO-191
 Project: Commons IO
  Issue Type: Improvement
Reporter: Peter Lawrey
Priority: Trivial
 Attachments: commons-io-static-analysis.patch

   Original Estimate: 3h
  Remaining Estimate: 3h



-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (IO-191) Possible improvements using static analysis.

2009-01-21 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/IO-191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12665995#action_12665995
 ] 

Jukka Zitting commented on IO-191:
--

It would be easier to review and apply this patch if it was broken down to 
pieces based on the different types of changes.

See below for a list of the changes I'd rather not apply. Other changes seem 
reasonable enough, though it's debatable whether changing working code for no 
functional reason is wise as there's always the chance of accidentally 
introducing an error. Note that the use of foreach loops needs to wait until we 
switch to Java 5.

 Changing single character string literals to character literals in string 
 concatenations.

The benefit is insignificant and the drawback is added conceptual complexity 
(why are some parts of the expression strings and other characters). Also, in 
expressions where other parts are variables, there is no syntactical hint that 
it's a string concatenation expression instead of an integer sum.

 Introducing an initial size constant to collection constructors where the 
 expected size is known.

The benefit is in most cases insignificant and the drawback is the introduction 
of magic numbers in the code. Note that in specific cases this might give 
real-world performance or memory improvements, but those cases are better 
covered in separate issues with more detailed analysis.

 Clearing an existing collection instead of replacing it with a newly 
 allocated one.

Again, the benefit is typically insignificant, but as a drawback an immutable 
collection may become mutable. What if some other code is still concurrently 
iterating the collection? Perhaps the static analyzer has taken this into 
account, but will a future programmer that wants to modify the class?


 Possible improvements using static analysis.
 

 Key: IO-191
 URL: https://issues.apache.org/jira/browse/IO-191
 Project: Commons IO
  Issue Type: Improvement
Reporter: Peter Lawrey
Priority: Trivial
 Attachments: commons-io-static-analysis.patch

   Original Estimate: 3h
  Remaining Estimate: 3h



-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Moved: (NET-250) DefaultFTPFileEntryParserFactory Does not work with Netware FTP server returning NETWARE TYPE: L8

2009-01-21 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/NET-250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting moved IO-188 to NET-250:
--

Fix Version/s: (was: 2.0)
  Key: NET-250  (was: IO-188)
  Project: Commons Net  (was: Commons IO)

 DefaultFTPFileEntryParserFactory Does not work with Netware FTP server 
 returning NETWARE TYPE: L8
 ---

 Key: NET-250
 URL: https://issues.apache.org/jira/browse/NET-250
 Project: Commons Net
  Issue Type: Bug
Reporter: James Hayes

 We have just being trying to upgrade from the old NetComponents-1.3.8 to the 
 new apache commons-net-2.0  The only thing we really needed to do is to 
 change some imports and our project compiled.
 The problem is that listFiles does not work any more with our netware ftp 
 server! I have done some debugging and found that the problem is when 
 creating a FTPFileEntryParser from the class DefaultFTPFileEntryParserFactory 
 it returns a Unix entry parser due the code:
 if ((ukey.indexOf(FTPClientConfig.SYST_UNIX) >= 0) 
   || (ukey.indexOf(FTPClientConfig.SYST_L8) >= 0))
   {
   parser = createUnixFTPEntryParser();
   }
 I understand that the SYST_L8 is used to identify that the system is unknown 
 and so per default takes the UNIX server, however our FTP server returns 
 NETWARE TYPE: L8 and should really be identified as a netware server. maybe 
 this L8 test could be done at the end of these massive if, else statements?
 In the meanwhile i have created by own FTPFileEntryParserFactory which does 
 this and it works. The question is, is it a bug and should this change also 
 be done in the commons?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (NET-250) DefaultFTPFileEntryParserFactory Does not work with Netware FTP server returning NETWARE TYPE: L8

2009-01-21 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/NET-250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting updated NET-250:
--

Description: 
We have just being trying to upgrade from the old NetComponents-1.3.8 to the 
new apache commons-net-2.0  The only thing we really needed to do is to change 
some imports and our project compiled.

The problem is that listFiles does not work any more with our netware ftp 
server! I have done some debugging and found that the problem is when creating 
a FTPFileEntryParser from the class DefaultFTPFileEntryParserFactory it returns 
a Unix entry parser due the code:

{code}
if ((ukey.indexOf(FTPClientConfig.SYST_UNIX) >= 0) 
|| (ukey.indexOf(FTPClientConfig.SYST_L8) >= 0))
{
parser = createUnixFTPEntryParser();
}
{code}

I understand that the SYST_L8 is used to identify that the system is unknown 
and so per default takes the UNIX server, however our FTP server returns 
NETWARE TYPE: L8 and should really be identified as a netware server. maybe 
this L8 test could be done at the end of these massive if, else statements?

In the meanwhile i have created by own FTPFileEntryParserFactory which does 
this and it works. The question is, is it a bug and should this change also be 
done in the commons?


  was:
We have just being trying to upgrade from the old NetComponents-1.3.8 to the 
new apache commons-net-2.0  The only thing we really needed to do is to change 
some imports and our project compiled.

The problem is that listFiles does not work any more with our netware ftp 
server! I have done some debugging and found that the problem is when creating 
a FTPFileEntryParser from the class DefaultFTPFileEntryParserFactory it returns 
a Unix entry parser due the code:

if ((ukey.indexOf(FTPClientConfig.SYST_UNIX) >= 0) 
|| (ukey.indexOf(FTPClientConfig.SYST_L8) >= 0))
{
parser = createUnixFTPEntryParser();
}

I understand that the SYST_L8 is used to identify that the system is unknown 
and so per default takes the UNIX server, however our FTP server returns 
NETWARE TYPE: L8 and should really be identified as a netware server. maybe 
this L8 test could be done at the end of these massive if, else statements?

In the meanwhile i have created by own FTPFileEntryParserFactory which does 
this and it works. The question is, is it a bug and should this change also be 
done in the commons?



 DefaultFTPFileEntryParserFactory Does not work with Netware FTP server 
 returning NETWARE TYPE: L8
 ---

 Key: NET-250
 URL: https://issues.apache.org/jira/browse/NET-250
 Project: Commons Net
  Issue Type: Bug
Reporter: James Hayes

 We have just being trying to upgrade from the old NetComponents-1.3.8 to the 
 new apache commons-net-2.0  The only thing we really needed to do is to 
 change some imports and our project compiled.
 The problem is that listFiles does not work any more with our netware ftp 
 server! I have done some debugging and found that the problem is when 
 creating a FTPFileEntryParser from the class DefaultFTPFileEntryParserFactory 
 it returns a Unix entry parser due the code:
 {code}
 if ((ukey.indexOf(FTPClientConfig.SYST_UNIX) >= 0) 
   || (ukey.indexOf(FTPClientConfig.SYST_L8) >= 0))
   {
   parser = createUnixFTPEntryParser();
   }
 {code}
 I understand that the SYST_L8 is used to identify that the system is unknown 
 and so per default takes the UNIX server, however our FTP server returns 
 NETWARE TYPE: L8 and should really be identified as a netware server. maybe 
 this L8 test could be done at the end of these massive if, else statements?
 In the meanwhile i have created by own FTPFileEntryParserFactory which does 
 this and it works. The question is, is it a bug and should this change also 
 be done in the commons?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (IO-189) update javadoc on HexDump.dump method

2009-01-21 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/IO-189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting resolved IO-189.
--

   Resolution: Fixed
Fix Version/s: 1.5
 Assignee: Jukka Zitting

Updated the HexDump javadocs in revision 736507.

Please file another issue for the suggested new method signature. In general I 
think the entire HexDump class could do with some serious redesign.

 update javadoc on HexDump.dump method
 -

 Key: IO-189
 URL: https://issues.apache.org/jira/browse/IO-189
 Project: Commons IO
  Issue Type: Improvement
Affects Versions: 1.4
Reporter: Zemian Deng
Assignee: Jukka Zitting
 Fix For: 1.5


 Please update the method parameter documentation on the offset parameter.
 Current: 
   offset - its offset, whatever that might mean
 It should change to:
   offset -its output(display) offset.
 Also please add that this method always will consume bytes until end of input 
 byte array.
 It would be nice to have a overloaded method to allow stop by length.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (IO-184) FileUtils.isAncestor

2009-01-21 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/IO-184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12666023#action_12666023
 ] 

Jukka Zitting commented on IO-184:
--

What about symlinked paths?
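
One possible way to handle that concern (illustrative only, not part of the proposal above): canonicalise both paths first, so symbolic links and ".." segments resolve before walking up the parent chain.

{code}
import java.io.File;
import java.io.IOException;

public class AncestorCheck {

    public static boolean isAncestor(File file, File ancestor) throws IOException {
        File candidate = file.getCanonicalFile();   // resolves symlinks and ".."
        File target = ancestor.getCanonicalFile();
        while (candidate != null) {
            if (candidate.equals(target)) {
                return true;
            }
            candidate = candidate.getParentFile();
        }
        return false;
    }
}
{code}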

 FileUtils.isAncestor
 

 Key: IO-184
 URL: https://issues.apache.org/jira/browse/IO-184
 Project: Commons IO
  Issue Type: Improvement
  Components: Utilities
Reporter: Yu Kobayashi
Priority: Minor
 Fix For: 2.x


 Please add FileUtils.isAncestor(). Code is following.
 public static boolean isAncestor(File file, File ancestor) {
   File f = file;
   while (f != null) {
   if (f.equals(ancestor)) return true;
   f = f.getParentFile();
   }
   return false;
 }

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (IO-169) FileUtils.copyFileToURL

2009-01-21 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/IO-169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12666026#action_12666026
 ] 

Jukka Zitting commented on IO-169:
--

How would this work in practice? Would an upload to a HTTP URL become a PUT 
request?

 FileUtils.copyFileToURL
 ---

 Key: IO-169
 URL: https://issues.apache.org/jira/browse/IO-169
 Project: Commons IO
  Issue Type: New Feature
  Components: Utilities
Affects Versions: 1.4
Reporter: Stephane Demurget
Priority: Trivial
 Fix For: 2.x


 FileUtils contains the very useful FileUtils.copyURLToFile. It would makes 
 sense to do it the other around too, or rename them download vs. upload and 
 deprecate the old one. I can provide a quick patch if needed, but this is 
 trivial.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (IO-159) FileCleaningTracker: ReferenceQueue uses raw type

2009-01-21 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/IO-159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12666030#action_12666030
 ] 

Jukka Zitting commented on IO-159:
--

I looked at this briefly, and it seems like parameterizing the type actually 
uncovers an error in the code. The nested  Reaper class contains the following 
code:

{code}
Tracker tracker = null;
try {
// Wait for a tracker to remove.
tracker = (Tracker) q.remove();
} catch (Exception e) {
continue;
}
if (tracker != null) {
tracker.delete();
tracker.clear();
trackers.remove(tracker);
}
{code}

The problem is that since q is a ReferenceQueue, the q.remove() call returns a 
Reference instance and the cast will throw a ClassCastException which in turn 
prevents the tracker from being properly cleared. I don't know the 
FileCleaningTracker class well enough to know if this is a problem in practice, 
but the above code certainly doesn't do what it was written to do.

With parametrized types we can get rid of the broken cast, and the troublesome 
call becomes:

{code}
tracker = q.remove().get();
{code}



 FileCleaningTracker: ReferenceQueue uses raw type
 -

 Key: IO-159
 URL: https://issues.apache.org/jira/browse/IO-159
 Project: Commons IO
  Issue Type: Improvement
Affects Versions: 2.0
Reporter: Paul Benedict
Priority: Minor
 Fix For: 2.0


 The field is:
 ReferenceQueue /* Tracker */ q = new ReferenceQueue();
 But that inline comment needs to become the parameterized type. Is it of type 
 <?>?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (IO-162) add Xml(Stream)Reader/Writer from ROME

2008-04-03 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/IO-162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12585278#action_12585278
 ] 

Jukka Zitting commented on IO-162:
--

What are the use cases for this? I understand parsing and serialization of XML 
documents, but why would you just want to convert the octet stream to a 
character stream (or vice versa)? I'm sure there are good reasons for doing 
that, I just can't come up with any of them right now.

 add Xml(Stream)Reader/Writer from ROME
 --

 Key: IO-162
 URL: https://issues.apache.org/jira/browse/IO-162
 Project: Commons IO
  Issue Type: New Feature
  Components: Streams/Writers
Reporter: Hervé Boutemy
 Attachments: IO-162.patch


 XmlReader is a class written by Alejandro Abdelnur in the ROME project 
 (http://rome.dev.java.net) to detect encoding from a stream containing an XML 
 document.
 It has been integrated into Maven 2.0.8, via XmlStreamReader in plexus-utils, 
 and I added XmlStreamWriter.
 commons-io seems the right library to distribute these utilities.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (IO-158) ReaderInputStream implementation

2008-03-11 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/IO-158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12577457#action_12577457
 ] 

Jukka Zitting commented on IO-158:
--

Design-wise I prefer the iBatis/XMLBeans alternatives as they use an 
OutputStreamWriter instead of new String(...).getBytes(...) for translating 
characters to bytes.

Functionally they are the same, but the OutputStreamWriter approach is nicely 
analogous with the reverse stream designs we've been discussing in IO-71. A 
ReaderInputStream is simply a reversed OutputStreamWriter.
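
For illustration, a minimal sketch of that "reversed OutputStreamWriter" idea using a CharsetEncoder (the class name and buffer sizes are arbitrary; this is not any of the implementations mentioned above, and encoder flushing is omitted for brevity):

{code}
import java.io.IOException;
import java.io.InputStream;
import java.io.Reader;
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.Charset;
import java.nio.charset.CharsetEncoder;
import java.nio.charset.CoderResult;
import java.nio.charset.CodingErrorAction;

public class SimpleReaderInputStream extends InputStream {

    private final Reader reader;
    private final CharsetEncoder encoder;
    private final CharBuffer chars = CharBuffer.allocate(1024);
    private final ByteBuffer bytes = ByteBuffer.allocate(1024);
    private boolean endOfInput;

    public SimpleReaderInputStream(Reader reader, Charset charset) {
        this.reader = reader;
        this.encoder = charset.newEncoder()
                .onMalformedInput(CodingErrorAction.REPLACE)
                .onUnmappableCharacter(CodingErrorAction.REPLACE);
        chars.flip();   // start out empty, ready for draining
        bytes.flip();
    }

    public int read() throws IOException {
        while (!bytes.hasRemaining()) {
            if (!endOfInput) {
                chars.compact();                  // keep any not-yet-encoded chars
                endOfInput = reader.read(chars) == -1;
                chars.flip();
            }
            bytes.compact();
            CoderResult result = encoder.encode(chars, bytes, endOfInput);
            bytes.flip();
            if (endOfInput && result.isUnderflow() && !bytes.hasRemaining()) {
                return -1;                        // everything encoded and drained
            }
        }
        return bytes.get() & 0xFF;
    }
}
{code}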

 ReaderInputStream implementation
 

 Key: IO-158
 URL: https://issues.apache.org/jira/browse/IO-158
 Project: Commons IO
  Issue Type: Wish
Reporter: Andreas Veithen
Priority: Minor

 The standard Java class InputStreamReader converts a Reader into an 
 InputStream. In some cases it is necessary to do the reverse, i.e. to convert 
 a Reader into an InputStream. Several frameworks and libraries have their own 
 implementation of this functionality (google for ReaderInputStream). Among 
 these are at least four Apache projects: Ant, iBatis, James and XMLBeans. 
 Commons IO would be a good place to share a common implementation.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (IO-140) IO 2.0 - Move to JDK 1.5

2008-03-03 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/IO-140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12574764#action_12574764
 ] 

Jukka Zitting commented on IO-140:
--

Are there other Java 5 updates we should do?

 IO 2.0 - Move to JDK 1.5
 

 Key: IO-140
 URL: https://issues.apache.org/jira/browse/IO-140
 Project: Commons IO
  Issue Type: Wish
Reporter: Niall Pemberton
 Fix For: 2.0

 Attachments: IO-2.0-deprecate-and-jdk5.patch


 I just created IO-139 for a StringBuilder Writer implementation that requires 
 JDK 1.5. So I thought I would look at the impact on IO of 1) Removing all 
 deprecations and 2) Making appropriate JDK 1.5 changes (generics, using 
 StringBuilder and new Appendable for Writers). Below is a summary; I thought it 
 could be a starting point for discussion about IO 2.0.
 1) DEPRECATIONS
 - CopyUtils
 - FileCleaner
 - WildcardFilter
 - FileSystemUtils freeSpace(String)
 - IOUtils toByteArray(String), toString(byte[]), toString(byte[], String) 
 2) JDK 1.5
 - ConditionalFileFilter (and also AndFileFilter and OrFileFilter 
 implementations)
 - getFileFilters() and setFileFilters() use generic List<IOFileFilter>
 - Constructors for NameFileFilter, PrefixFileFilter, SuffixFileFilter, 
 WildcardFileFilter use generic List<String>
 - replace StringBuffer with StringBuilder where appropriate 
 (FilenameUtils, FileSystemUtils, HexDump, IOUtils)
 - FileUtils 
 - convertFileCollectionToFileArray() -- Collection<File>
 - listFiles() -- Collection<File>
 - listFiles() -- Collection<File>
 - writeStringToFile String -- CharSequence (JDK 1.4+)
 - ProxyReader - add read(CharBuffer)
 - IOUtils
 - readLines(Reader) return List<String>
 - toInputStream(String) -- toInputStream(CharSequence) (JDK 1.4+)
 - write(String data, OutputStream) and write(StringBuffer data, 
 OutputStream) -- write(CharSequence data, OutputStream) 
 - write(String, Writer) and write(StringBuffer, Writer) -- 
 write(CharSequence data, Writer) 
 - LineIterator Iterator -- Iterator<String>
 - NullWriter - add Appendable methods
 - ProxyWriter - add Appendable methods
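
 Assuming the generified signatures listed above, call sites would read roughly 
 like this (a sketch only; the exact signatures are what this issue is meant to 
 settle):
{code}
import java.io.File;
import java.io.Reader;
import java.util.Collection;
import java.util.List;

import org.apache.commons.io.FileUtils;
import org.apache.commons.io.IOUtils;

class Java5UsageSketch {

    static void show(Reader reader, File dir) throws Exception {
        // readLines(Reader) returning List<String> instead of a raw List:
        List<String> lines = IOUtils.readLines(reader);
        for (String line : lines) {
            System.out.println(line);
        }
        // listFiles(...) returning Collection<File> instead of a raw Collection:
        Collection<File> javaFiles = FileUtils.listFiles(dir, new String[] { "java" }, true);
        System.out.println(javaFiles.size() + " java files under " + dir);
    }
}
{code}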

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (IO-118) Change forceDelete API to return the boolean success

2008-02-19 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/IO-118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12570191#action_12570191
 ] 

Jukka Zitting commented on IO-118:
--

Ah, so you'd return false if the file does not exist, but throw an exception if 
it could not be deleted?
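
If I read that right, the semantics would look something like this sketch (a 
hypothetical helper, not the existing FileUtils.forceDelete):
{code}
import java.io.File;
import java.io.IOException;

import org.apache.commons.io.FileUtils;

class ForceDeleteSketch {

    // Returns false when there is nothing to delete, throws when a file
    // exists but cannot be removed, and returns true otherwise.
    static boolean forceDelete(File file) throws IOException {
        if (!file.exists()) {
            return false;
        }
        if (file.isDirectory()) {
            FileUtils.deleteDirectory(file);
        } else if (!file.delete()) {
            throw new IOException("Unable to delete file: " + file);
        }
        return true;
    }
}
{code}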

 Change forceDelete API to return the boolean success
 

 Key: IO-118
 URL: https://issues.apache.org/jira/browse/IO-118
 Project: Commons IO
  Issue Type: Improvement
Affects Versions: 1.3.1
Reporter: Henri Yandell
 Fix For: 2.x


 (Though I imagine it'll be 2.0 for API versioning, but reporting anyway).
 Would be nice to return the boolean that the delete method returns in 
 forceDelete.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (IO-71) [io] PipedUtils

2008-02-13 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/IO-71?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12568479#action_12568479
 ] 

Jukka Zitting commented on IO-71:
-

You're right, my solution suffers from having an unlimited buffer. Personally I 
don't think that's too much of an issue (all the really troublesome examples I 
can come up with are highly hypothetical), but you're right that a 
thread-based solution is more robust. Too bad Java doesn't do continuations...

However, my point still stands that your underlying problem isn't about 
converting an OutputStream to an InputStream, but about using an OutputStream 
filter on an InputStream. Using a pipe is a good way to do it, but for example 
the propagation of exceptions is only relevant for filtering, not piping.

This is why I think that none of the OutputStream objects and other pipe 
constructs should really be visible to the user, and that using "filter" in 
naming is more appropriate than "pipe".

 [io] PipedUtils
 ---

 Key: IO-71
 URL: https://issues.apache.org/jira/browse/IO-71
 Project: Commons IO
  Issue Type: Improvement
  Components: Utilities
 Environment: Operating System: All
 Platform: All
Reporter: David Smiley
Priority: Minor
 Fix For: 2.x

 Attachments: PipedUtils.zip, ReverseFilterOutputStream.patch


 I developed some nifty code that takes an OutputStream and sort of reverses 
 it as if it were an InputStream. Error passing and handling close is dealt 
 with. It needs another thread to do the work which runs in parallel. It uses 
 Piped streams. I created this because I had to conform GZIPOutputStream to my 
 framework which demanded an InputStream.
 See URL to source.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (IO-71) [io] PipedUtils

2008-02-12 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/IO-71?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting updated IO-71:


Attachment: ReverseFilterOutputStream.patch

See the attached patch (ReverseFilterInputStream.patch) for a simple draft (not 
thoroughly tested or documented) of a class that turns an OutputStream filter 
into an InputStream filter without the need for an extra thread or a pipe.

With the ReverseFilterInputStream class your example test case would become:

{code}
//starting data
InputStream original = new ByteArrayInputStream("hello world".getBytes("us-ascii"));

// Compress
InputStream reversed = new ReverseFilterInputStream(original, GZIPOutputStream.class);

// Uncompress
InputStream results = new GZIPInputStream(reversed);

//show results
StringWriter swresult = new StringWriter();
CopyUtils.copy(results, swresult);

assertEquals("hello world", swresult.toString());
{code}


 [io] PipedUtils
 ---

 Key: IO-71
 URL: https://issues.apache.org/jira/browse/IO-71
 Project: Commons IO
  Issue Type: Improvement
  Components: Utilities
 Environment: Operating System: All
 Platform: All
Reporter: David Smiley
Priority: Minor
 Fix For: 2.x

 Attachments: PipedUtils.zip, ReverseFilterOutputStream.patch


 I developed some nifty code that takes an OutputStream and sort of reverses 
 it as if it were an InputStream. Error passing and handling close is dealt 
 with. It needs another thread to do the work which runs in parallel. It uses 
 Piped streams. I created this because I had to conform GZIPOutputStream to my 
 framework which demanded an InputStream.
 See URL to source.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (IO-71) [io] PipedUtils

2008-02-11 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/IO-71?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12567562#action_12567562
 ] 

Jukka Zitting commented on IO-71:
-

I see your point about the standard pipe streams, but I'd rather solve that by 
implementing alternate versions of those classes in o.a.c.io.input and 
o.a.c.io.output, just like the improved ByteArrayOutputStream and the proxy 
stream classes do.

As for your actual PipedUtils API, it seems to me that you're rather looking 
for a generic filtering mechanism. All your public methods take an InputStream, 
apply some (piped) transformations to it, and return another InputStream for 
reading content that has gone through those transformations.

The interesting part in your solution is IMHO the way you turn an OutputStream 
filter into an InputStream filter, but I think that with some refactoring you 
could achieve the same functionality without the extra thread.
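
To make that refactoring idea concrete, here is a rough, untested illustration 
(not the patch attached elsewhere on this issue): pull bytes from the source 
stream, push them through an OutputStream filter writing into an in-memory 
buffer, and hand the filtered bytes back to the caller, with no pipe and no 
extra thread.
{code}
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

class OutputFilterAsInputStream extends InputStream {

    // Hypothetical callback for wrapping the in-memory sink with the desired
    // OutputStream filter (e.g. a GZIPOutputStream).
    interface FilterFactory {
        OutputStream wrap(OutputStream sink) throws IOException;
    }

    private final InputStream source;
    private final ByteArrayOutputStream sink = new ByteArrayOutputStream();
    private final OutputStream filter;
    private final byte[] chunk = new byte[4096];
    private byte[] buffered = new byte[0];
    private int pos = 0;
    private boolean sourceDone = false;

    OutputFilterAsInputStream(InputStream source, FilterFactory factory) throws IOException {
        this.source = source;
        this.filter = factory.wrap(sink);
    }

    public int read() throws IOException {
        while (pos >= buffered.length) {
            if (sourceDone) {
                return -1;
            }
            int n = source.read(chunk);
            if (n == -1) {
                filter.close();            // flush trailing bytes (e.g. the gzip trailer)
                sourceDone = true;
            } else {
                filter.write(chunk, 0, n);
                filter.flush();
            }
            buffered = sink.toByteArray(); // collect whatever the filter emitted so far
            sink.reset();
            pos = 0;
        }
        return buffered[pos++] & 0xFF;
    }
}
{code}

A caller would supply a FilterFactory that wraps the sink in, say, a 
GZIPOutputStream; with that filter most of the compressed bytes only appear 
once the source is exhausted and the filter is closed, which is where an 
unbounded in-memory buffer can become a concern.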

 [io] PipedUtils
 ---

 Key: IO-71
 URL: https://issues.apache.org/jira/browse/IO-71
 Project: Commons IO
  Issue Type: Improvement
  Components: Utilities
 Environment: Operating System: All
 Platform: All
Reporter: David Smiley
Priority: Minor
 Fix For: 2.x

 Attachments: PipedUtils.zip


 I developed some nifty code that takes an OutputStream and sort of reverses 
 it as if it were an InputStream. Error passing and handling close is dealt 
 with. It needs another thread to do the work which runs in parallel. It uses 
 Piped streams. I created this because I had to conform GZIPOutputStream to my 
 framework which demanded an InputStream.
 See URL to source.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (IO-119) Convenience Builder for creating complex FileFilter conditions

2008-02-10 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/IO-119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12567451#action_12567451
 ] 

Jukka Zitting commented on IO-119:
--

Yes, FileFilterUtils covers the Factory pattern, but it doesn't really reduce 
the required typing (or more importantly for those with a modern IDE, the 
amount of characters on a line). For example, the only benefit of 
{{FileFilterUtils.suffixFileFilter(".java")}} (or {{suffixFileFilter(".java")}} 
with static imports) over {{new SuffixFileFilter(".java")}} is one less import 
statement.

What's your use case for adding the Builder class? Do you just want to reduce 
the amount of typing when creating complex filters, or are you incrementally 
building filters based on user input or some parsed filter description?

 Convenience Builder for creating complex FileFilter conditions
 

 Key: IO-119
 URL: https://issues.apache.org/jira/browse/IO-119
 Project: Commons IO
  Issue Type: Improvement
  Components: Filters
Affects Versions: 1.3.1
Reporter: Niall Pemberton
Assignee: Niall Pemberton
Priority: Minor
 Fix For: 2.x

 Attachments: FileFilterBuilder.java, FileFilterBuilderTestCase.java


 I'd like to add a new convenience builder class (FileFilterBuilder) to make 
 it easier to create complex FileFilters using Commons IO's IOFileFilter 
 implementations.
 Here's an example of how it can be used to create an IOFileFilter for the 
 following conditions:
  - Either, directories which are not hidden and not named .svn
  - or, files which have a suffix of .java
 IOFileFilter filter = FileFilterBuilder.orBuilder()
 .and().isDirectory().isHidden(false).not().name(".svn").end()
 .and().isFile().suffix(".java").end()
 .getFileFilter();
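
 For comparison with the builder example above, a sketch of the same conditions 
 composed from the existing FileFilterUtils factories and filter constants 
 (untested, written from memory of the 1.3 API):
{code}
import org.apache.commons.io.filefilter.DirectoryFileFilter;
import org.apache.commons.io.filefilter.FileFileFilter;
import org.apache.commons.io.filefilter.FileFilterUtils;
import org.apache.commons.io.filefilter.HiddenFileFilter;
import org.apache.commons.io.filefilter.IOFileFilter;

class FilterComparisonSketch {

    static IOFileFilter build() {
        // directories which are not hidden and not named .svn
        IOFileFilter directories = FileFilterUtils.andFileFilter(
                FileFilterUtils.andFileFilter(
                        DirectoryFileFilter.INSTANCE,
                        FileFilterUtils.notFileFilter(HiddenFileFilter.HIDDEN)),
                FileFilterUtils.notFileFilter(FileFilterUtils.nameFileFilter(".svn")));
        // files which have a suffix of .java
        IOFileFilter javaFiles = FileFilterUtils.andFileFilter(
                FileFileFilter.FILE,
                FileFilterUtils.suffixFileFilter(".java"));
        return FileFilterUtils.orFileFilter(directories, javaFiles);
    }
}
{code}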

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (IO-71) [io] PipedUtils

2008-02-08 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/IO-71?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12567126#action_12567126
 ] 

Jukka Zitting commented on IO-71:
-

How is this different from the piped input and output streams in java.io?

 [io] PipedUtils
 ---

 Key: IO-71
 URL: https://issues.apache.org/jira/browse/IO-71
 Project: Commons IO
  Issue Type: Improvement
  Components: Utilities
 Environment: Operating System: All
 Platform: All
Reporter: David Smiley
Priority: Minor
 Fix For: 2.x

 Attachments: PipedUtils.zip


 I developed some nifty code that takes an OutputStream and sort  of  reverses 
 it as if it were an 
 InputStream.  Error passing and  handling  close is dealt with.  It needs 
 another thread to do the  work 
 which  runs in parallel.  It uses Piped streams.  I created  this because I  
 had to conform 
 GZIPOutputStream to my framework  which demanded an  InputStream.
 See URL to source.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (IO-46) [io] Find file in classpath

2008-02-08 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/IO-46?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12567121#action_12567121
 ] 

Jukka Zitting commented on IO-46:
-

What's the use case for this feature?

 [io] Find file in classpath
 ---

 Key: IO-46
 URL: https://issues.apache.org/jira/browse/IO-46
 Project: Commons IO
  Issue Type: Improvement
  Components: Utilities
Affects Versions: 1.1
 Environment: Operating System: other
 Platform: Other
Reporter: David Leal
Priority: Minor
 Fix For: 2.x


 Just to suggest adding a method like this: 
  public File findFileInClasspath(String fileName) throws 
 FileNotFoundException 
 {
 URL url = getClass().getClassLoader().getResource(fileName);
 if (url == null){
 throw new FileNotFoundException(fileName);
 }
 return new File(url.getFile());
 }

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (IO-152) Add ByteArrayOutputStream.write(InputStream)

2008-01-06 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/IO-152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting resolved IO-152.
--

Resolution: Fixed
  Assignee: Jukka Zitting

ByteArrayOutputStream.write(InputStream) added in revision 609421.
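
A short usage sketch of the new method next to the IOUtils.copy() call it 
stands in for (the helper and stream names are illustrative):
{code}
import java.io.InputStream;

import org.apache.commons.io.output.ByteArrayOutputStream;

class WriteFromSketch {

    static byte[] slurp(InputStream in) throws Exception {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        // Equivalent to IOUtils.copy(in, baos), but reads directly into the
        // internal buffers instead of going through an intermediate byte[].
        baos.write(in);
        return baos.toByteArray();
    }
}
{code}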

 Add ByteArrayOutputStream.write(InputStream)
 

 Key: IO-152
 URL: https://issues.apache.org/jira/browse/IO-152
 Project: Commons IO
  Issue Type: New Feature
  Components: Streams/Writers
Reporter: Jukka Zitting
Assignee: Jukka Zitting
Priority: Minor
 Fix For: 1.4

 Attachments: IO-152.patch


 It would be useful to have a ByteArrayOutputStream.readFrom(InputStream) 
 method to mirror the existing writeTo(OutputStream) method. A call like 
 baos.readFrom(in) would be semantically the same as IOUtils.copy(in, baos), 
 but would avoid an extra in-memory copy of the stream contents, as it could 
 read bytes from the input stream directly into the internal 
 ByteArrayOutputStream buffers.
 [update: renamed the method to write(InputStream) as discussed below]

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (IO-152) Add ByteArrayOutputStream.readFrom(InputStream)

2008-01-04 Thread Jukka Zitting (JIRA)
Add ByteArrayOutputStream.readFrom(InputStream)
---

 Key: IO-152
 URL: https://issues.apache.org/jira/browse/IO-152
 Project: Commons IO
  Issue Type: New Feature
  Components: Streams/Writers
Reporter: Jukka Zitting
Priority: Minor
 Fix For: 1.4


It would be useful to have a ByteArrayOutputStream.readFrom(InputStream) method 
to mirror the existing writeTo(OutputStream) method. A call like 
baos.readFrom(in) would be semantically the same as IOUtils.copy(in, baos), but 
would avoid an extra in-memory copy of the stream contents, as it could read 
bytes from the input stream directly into the internal ByteArrayOutputStream 
buffers.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (IO-152) Add ByteArrayOutputStream.readFrom(InputStream)

2008-01-04 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/IO-152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting updated IO-152:
-

Attachment: IO-152.patch

Attached a patch with the proposed readFrom(InputStream) method and a simple 
test case.

I'll wait a few days for any objections before committing this.

 Add ByteArrayOutputStream.readFrom(InputStream)
 ---

 Key: IO-152
 URL: https://issues.apache.org/jira/browse/IO-152
 Project: Commons IO
  Issue Type: New Feature
  Components: Streams/Writers
Reporter: Jukka Zitting
Priority: Minor
 Fix For: 1.4

 Attachments: IO-152.patch


 It would be useful to have a ByteArrayOutputStream.readFrom(InputStream) 
 method to mirror the existing writeTo(OutputStream) method. A call like 
 baos.readFrom(in) would be semantically the same as IOUtils.copy(in, baos), 
 but would avoid an extra in-memory copy of the stream contents, as it could 
 read bytes from the input stream directly into the internal 
 ByteArrayOutputStream buffers.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (IO-148) IOException with constructors which take a cause

2008-01-03 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/IO-148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12555790#action_12555790
 ] 

Jukka Zitting commented on IO-148:
--

No worries about the svn copy, I'm not too attached to my version of the class. 
:-)

IOExceptionWithCause sounds good. With CauseIOException I was trying (clumsily, 
I admit) to keep the class name as a kind of a compound word. 
ExtendedIOException would also work, but IOExceptionWithCause is more accurate.

I'm with Gary that a String-only constructor is not needed. In fact it might 
even be worth enforcing that such an exception always comes with a non-null 
root cause exception.
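
A minimal sketch along those lines (the class as eventually committed may 
differ): pre-Java 6, IOException has no cause constructors, so the cause is 
attached via Throwable.initCause().
{code}
import java.io.IOException;

public class IOExceptionWithCause extends IOException {

    public IOExceptionWithCause(String message, Throwable cause) {
        super(message);
        initCause(cause);
    }

    public IOExceptionWithCause(Throwable cause) {
        // Enforcing a non-null cause, as suggested above, would mean
        // rejecting null here instead of tolerating it.
        super(cause == null ? null : cause.toString());
        initCause(cause);
    }
}
{code}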

 IOException with constructors which take a cause
 

 Key: IO-148
 URL: https://issues.apache.org/jira/browse/IO-148
 Project: Commons IO
  Issue Type: New Feature
Reporter: Niall Pemberton
Priority: Minor
 Fix For: 1.4


 Add an IOException implementation that has constructors which take a cause 
 (see TIKA-104). Constructors which take a cause (Throwable) were not added to 
 IOException until JDK 1.6 but the initCause() method  was added to Throwable 
 in JDK 1.4.
 We should copy the Tika implementation and test case here:
 http://svn.apache.org/repos/asf/incubator/tika/trunk/src/main/java/org/apache/tika/exception/CauseIOException.java
 http://svn.apache.org/repos/asf/incubator/tika/trunk/src/test/java/org/apache/tika/exception/CauseIOExceptionTest.java

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (IO-122) Helper classes for controlling closing of streams

2008-01-03 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/IO-122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12555793#action_12555793
 ] 

Jukka Zitting commented on IO-122:
--

An extra thought: AutoCloseInputStream should have a finalizer that makes sure 
that the underlying stream gets closed during garbage collection.
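
A minimal sketch of that idea (not the patched class itself): close the wrapped 
stream on EOF and on close(), and fall back to the finalizer if the caller 
never does either. A complete version would also override the bulk read 
methods.
{code}
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

class AutoCloseInputStreamSketch extends FilterInputStream {

    AutoCloseInputStreamSketch(InputStream in) {
        super(in);
    }

    public int read() throws IOException {
        int b = super.read();
        if (b == -1) {
            close();   // auto-close once the end of the stream is reached
        }
        return b;
    }

    protected void finalize() throws Throwable {
        try {
            close();   // last-resort close during garbage collection
        } finally {
            super.finalize();
        }
    }
}
{code}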

 Helper classes for controlling closing of streams
 -

 Key: IO-122
 URL: https://issues.apache.org/jira/browse/IO-122
 Project: Commons IO
  Issue Type: New Feature
  Components: Streams/Writers
Reporter: Jukka Zitting
Priority: Minor
 Fix For: 1.4

 Attachments: IO-122.patch


 Java API docs are typically not very clear on whether a component that 
 accepts an input or output stream will close the stream. This can easily lead 
 to cases where streams are either prematurely closed (which is typically easy 
 to detect) or where an unclosed stream will unnecessarily consume system 
 resources.
 The attached patch adds a set of helper classes that allow applications to 
 better control streams even when working with components that don't clearly 
 define whether they close streams or not. The added classes are:
 org.apache.commons.io.input.AutoCloseInputStream
 org.apache.commons.io.input.ClosedInputStream
 org.apache.commons.io.input.CloseShieldInputStream
 org.apache.commons.io.output.ClosedOutputStream
 org.apache.commons.io.output.CloseShieldOutputStream
 See the javadocs in the patch for more details and typical use cases. I've 
 included @since 1.4 tags in the javadocs in hope that this patch could be 
 included in the next release.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (IO-129) Add TeeInputStream

2007-10-15 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/IO-129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting updated IO-129:
-

Attachment: commons-io-TeeInputStream-close.patch

Not closing the output stream is designed for constructs like new 
TeeInputStream(..., System.out). I agree that making the close behaviour 
configurable is a good feature.

Instead of the auto-close feature, I'd rather make the option to close the 
associated output stream work in the close() method of the proxy stream. See 
the attached patch commons-io-TeeInputStream-close.patch for a proposed 
implementation.

One could use the AutoCloseInputStream decorator to get auto-close 
functionality:

new AutoCloseInputStream(new TeeInputStream(..., ..., true));
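
For reference, a minimal sketch of roughly the behaviour described above (this 
is not the attached patch, and a real class would also override the bulk read 
methods): every byte read is copied to the branch output stream, and close() 
optionally closes the branch as well.
{code}
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

class TeeInputStreamSketch extends FilterInputStream {

    private final OutputStream branch;
    private final boolean closeBranch;

    TeeInputStreamSketch(InputStream in, OutputStream branch, boolean closeBranch) {
        super(in);
        this.branch = branch;
        this.closeBranch = closeBranch;
    }

    public int read() throws IOException {
        int b = super.read();
        if (b != -1) {
            branch.write(b);   // copy every byte read to the branch stream
        }
        return b;
    }

    public void close() throws IOException {
        try {
            super.close();
        } finally {
            if (closeBranch) {
                branch.close();   // optional, per the constructor flag
            }
        }
    }
}
{code}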

 Add TeeInputStream
 --

 Key: IO-129
 URL: https://issues.apache.org/jira/browse/IO-129
 Project: Commons IO
  Issue Type: New Feature
  Components: Streams/Writers
Reporter: Jukka Zitting
Priority: Minor
 Fix For: 1.4

 Attachments: commons-io-TeeInputStream-autoclose.patch, 
 commons-io-TeeInputStream-close.patch, IO-129.patch


 There should be a TeeInputStream class that transparently writes all bytes 
 read from an input stream to a given output stream. Such a class could be 
 used to easily record and log various inputs like incoming network streams, 
 etc. The class would also be nicely symmetric with the existing 
 TeeOutputStream class.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.