[jira] [Commented] (LANG-1693) "CalendarUtilsTest" fails, or not...

2022-12-07 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/LANG-1693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644363#comment-17644363
 ] 

Gary D. Gregory commented on LANG-1693:
---

IOW, https://junit-pioneer.org/docs/default-locale-timezone/
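
In other words, the failure depends on the JVM's default time zone, and JUnit Pioneer can pin it per test. A minimal sketch, assuming JUnit 5 plus the junit-pioneer dependency on the test classpath (the test class name here is made up):

{code:java}
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.TimeZone;

import org.junit.jupiter.api.Test;
import org.junitpioneer.jupiter.DefaultTimeZone;

class DefaultTimeZoneSketchTest {

    // JUnit Pioneer swaps the JVM default time zone for the duration of the
    // test and restores the previous default afterwards.
    @Test
    @DefaultTimeZone("UTC")
    void runsInUtc() {
        assertEquals("UTC", TimeZone.getDefault().getID());
    }
}
{code}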

> "CalendarUtilsTest" fails, or not...
> 
>
> Key: LANG-1693
> URL: https://issues.apache.org/jira/browse/LANG-1693
> Project: Commons Lang
>  Issue Type: Test
>  Components: lang.time.*
>Affects Versions: 3.12.0
>Reporter: Gilles Sadowski
>Priority: Minor
> Attachments: OUT
>
>
> Following up on a [post on the ML|https://markmail.org/message/qatstzelumanopaj].
> Running
> {noformat} 
> $ JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64 mvn test
> [... skipped ...]
> [ERROR] Failures:
> [ERROR]   CalendarUtilsTest.testGetDayOfMonth:32 expected: <7> but was: <6>
> [ERROR]   CalendarUtilsTest.testGetDayOfYear:37 expected: <341> but was: <340>
> [INFO]
> [ERROR] Tests run: 7330, Failures: 2, Errors: 0, Skipped: 7
> [...]
> {noformat}
> Full console output attached.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (LANG-1693) "CalendarUtilsTest" fails, or not...

2022-12-07 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/LANG-1693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644365#comment-17644365
 ] 

Gary D. Gregory commented on LANG-1693:
---

"{*}CalendarUtilsTest:38{*} does not reset to original timezone."

That test does not set the TZ; what are you using as a reference?

https://github.com/apache/commons-lang/blob/c92aa75f35a93b6dd068b48d4287e645dc8c0f25/src/test/java/org/apache/commons/lang3/time/CalendarUtilsTest.java#L38
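
For reference, a hedged sketch of what setting and resetting the default time zone around a test would look like; this is illustrative only and is not the code at the line linked above:

{code:java}
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.TimeZone;

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class TimeZonePinningSketchTest {

    private TimeZone savedDefault;

    @BeforeEach
    void pinTimeZone() {
        // Remember the JVM-wide default, then pin the test to a fixed zone.
        savedDefault = TimeZone.getDefault();
        TimeZone.setDefault(TimeZone.getTimeZone("UTC"));
    }

    @AfterEach
    void restoreTimeZone() {
        // Restore the original default so later tests are not affected.
        TimeZone.setDefault(savedDefault);
    }

    @Test
    void defaultIsPinned() {
        // With the default pinned, day-of-month/day-of-year assertions no
        // longer depend on the machine running the build.
        assertEquals("UTC", TimeZone.getDefault().getID());
    }
}
{code}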

> "CalendarUtilsTest" fails, or not...
> 
>
> Key: LANG-1693
> URL: https://issues.apache.org/jira/browse/LANG-1693
> Project: Commons Lang
>  Issue Type: Test
>  Components: lang.time.*
>Affects Versions: 3.12.0
>Reporter: Gilles Sadowski
>Priority: Minor
> Attachments: OUT
>
>
> Following up on a [post on the ML|https://markmail.org/message/qatstzelumanopaj].
> Running
> {noformat} 
> $ JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64 mvn test
> [... skipped ...]
> [ERROR] Failures:
> [ERROR]   CalendarUtilsTest.testGetDayOfMonth:32 expected: <7> but was: <6>
> [ERROR]   CalendarUtilsTest.testGetDayOfYear:37 expected: <341> but was: <340>
> [INFO]
> [ERROR] Tests run: 7330, Failures: 2, Errors: 0, Skipped: 7
> [...]
> {noformat}
> Full console output attached.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (LANG-1693) "CalendarUtilsTest" fails, or not...

2022-12-07 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/LANG-1693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644378#comment-17644378
 ] 

Gary D. Gregory commented on LANG-1693:
---

Please try git master; I've pushed fixes to the tests.

> "CalendarUtilsTest" fails, or not...
> 
>
> Key: LANG-1693
> URL: https://issues.apache.org/jira/browse/LANG-1693
> Project: Commons Lang
>  Issue Type: Test
>  Components: lang.time.*
>Affects Versions: 3.12.0
>Reporter: Gilles Sadowski
>Priority: Minor
> Attachments: OUT
>
>
> Following up on a [post on the ML|https://markmail.org/message/qatstzelumanopaj].
> Running
> {noformat} 
> $ JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64 mvn test
> [... skipped ...]
> [ERROR] Failures:
> [ERROR]   CalendarUtilsTest.testGetDayOfMonth:32 expected: <7> but was: <6>
> [ERROR]   CalendarUtilsTest.testGetDayOfYear:37 expected: <341> but was: <340>
> [INFO]
> [ERROR] Tests run: 7330, Failures: 2, Errors: 0, Skipped: 7
> [...]
> {noformat}
> Full console output attached.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (LANG-1693) "CalendarUtilsTest" fails, or not...

2022-12-07 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/LANG-1693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory resolved LANG-1693.
---
Fix Version/s: 3.13.0
   Resolution: Fixed

> "CalendarUtilsTest" fails, or not...
> 
>
> Key: LANG-1693
> URL: https://issues.apache.org/jira/browse/LANG-1693
> Project: Commons Lang
>  Issue Type: Test
>  Components: lang.time.*
>Affects Versions: 3.12.0
>Reporter: Gilles Sadowski
>Priority: Minor
> Fix For: 3.13.0
>
> Attachments: OUT
>
>
> Following up on a [post on the ML|https://markmail.org/message/qatstzelumanopaj].
> Running
> {noformat} 
> $ JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64 mvn test
> [... skipped ...]
> [ERROR] Failures:
> [ERROR]   CalendarUtilsTest.testGetDayOfMonth:32 expected: <7> but was: <6>
> [ERROR]   CalendarUtilsTest.testGetDayOfYear:37 expected: <341> but was: <340>
> [INFO]
> [ERROR] Tests run: 7330, Failures: 2, Errors: 0, Skipped: 7
> [...]
> {noformat}
> Full console output attached.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (COMPRESS-614) Use FileTime for time fields in SevenZipArchiveEntry

2022-12-07 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory resolved COMPRESS-614.
--
Fix Version/s: 1.23
   Resolution: Fixed

> Use FileTime for time fields in SevenZipArchiveEntry
> 
>
> Key: COMPRESS-614
> URL: https://issues.apache.org/jira/browse/COMPRESS-614
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Archivers
>Affects Versions: 1.21
>Reporter: Andre Brait
>Priority: Major
>  Labels: 7zip
> Fix For: 1.23
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Instead of java.util.Date, which caps precision at milliseconds, let's move 
> on to using FileTime.
> We can keep backwards compatibility through the getters and setters for the 
> modification, access, and creation dates.
> If you're ok with it, I'll send a PR for this.
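
A minimal sketch of the backwards-compatibility idea described above, assuming a FileTime-backed field with legacy Date accessors; the class and method names are hypothetical and not asserted to match the actual SevenZArchiveEntry API:

{code:java}
import java.nio.file.attribute.FileTime;
import java.util.Date;

class EntryTimesSketch {

    // FileTime keeps the full (up to 100ns) precision internally.
    private FileTime lastModifiedTime;

    public FileTime getLastModifiedTime() {
        return lastModifiedTime;
    }

    public void setLastModifiedTime(FileTime time) {
        this.lastModifiedTime = time;
    }

    // Legacy accessors keep the old Date-based contract, at millisecond precision.
    public Date getLastModifiedDate() {
        return lastModifiedTime == null ? null : new Date(lastModifiedTime.toMillis());
    }

    public void setLastModifiedDate(Date date) {
        this.lastModifiedTime = date == null ? null : FileTime.fromMillis(date.getTime());
    }
}
{code}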



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (COMPRESS-621) ZipFile does not support prepending additional data to the zip content

2022-12-07 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory resolved COMPRESS-621.
--
Fix Version/s: 1.23
   Resolution: Fixed

> ZipFile does not support prepending additional data to the zip content
> --
>
> Key: COMPRESS-621
> URL: https://issues.apache.org/jira/browse/COMPRESS-621
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Archivers
>Affects Versions: 1.21
>Reporter: Glavo
>Priority: Major
> Fix For: 1.23
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> In general, Zip files support placing arbitrary content before their body 
> without affecting their compliance.
> Here is an example:
> [https://github.com/huanghongxun/HMCL/releases/download/v3.5.2.218/HMCL-3.5.2.218.exe]
>  
> This is actually a jar file, but we prepend an exe launcher to it, so it can 
> be used both as a jar and as an exe.
> java.util.zip.ZipFile can open and read it normally, while 
> org.apache.commons.compress.archivers.zip.ZipFile can open it but cannot read 
> any entries.
>  
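
A hedged sketch of the comparison described in the report, using only APIs that exist in the JDK and in Commons Compress 1.21 (the file name is taken from the example above):

{code:java}
import java.io.File;
import java.io.IOException;
import java.util.Collections;

public class PrependedZipSketch {
    public static void main(String[] args) throws IOException {
        File f = new File("HMCL-3.5.2.218.exe"); // a jar with an exe launcher prepended

        // java.util.zip locates the central directory from the end of the file,
        // so the prepended launcher does not bother it.
        try (java.util.zip.ZipFile jdkZip = new java.util.zip.ZipFile(f)) {
            System.out.println("java.util.zip entries: " + jdkZip.size());
        }

        // Per the report, the Commons Compress ZipFile opens the same file
        // but lists no entries.
        try (org.apache.commons.compress.archivers.zip.ZipFile commonsZip =
                new org.apache.commons.compress.archivers.zip.ZipFile(f)) {
            System.out.println("Commons Compress entries: "
                    + Collections.list(commonsZip.getEntries()).size());
        }
    }
}
{code}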



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (DAEMON-451) Prunsrv does not use configured stack size for the main thread in jvm mode

2022-12-07 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/DAEMON-451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17644587#comment-17644587
 ] 

Gary D. Gregory commented on DAEMON-451:


Hi [~kkolinko] and all,

Thank you for your report and patch.

I wonder if the patch should guard against ridiculously large stack size 
requests.
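
The patch itself is C code for prunsrv, so purely as an illustration of the "guard" idea, here is a Java sketch that clamps a requested stack size before creating a thread; the bounds are arbitrary assumptions, not values from the patch:

{code:java}
public final class StackSizeGuard {

    // Arbitrary example bounds, not values from the patch.
    private static final long MIN_STACK_BYTES = 64L * 1024;
    private static final long MAX_STACK_BYTES = 256L * 1024 * 1024;

    /** Clamps a requested stack size into a sane range before creating the thread. */
    static Thread newMainThread(Runnable main, long requestedStackBytes) {
        long stack = Math.max(MIN_STACK_BYTES, Math.min(requestedStackBytes, MAX_STACK_BYTES));
        // The four-argument Thread constructor lets the caller suggest a stack size.
        return new Thread(null, main, "main-worker", stack);
    }

    public static void main(String[] args) throws InterruptedException {
        // A ridiculously large request (3 GB) gets clamped to the upper bound.
        Thread t = newMainThread(() -> System.out.println("started"), 3L * 1024 * 1024 * 1024);
        t.start();
        t.join();
    }
}
{code}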

> Prunsrv does not use configured stack size for the main thread in jvm mode
> --
>
> Key: DAEMON-451
> URL: https://issues.apache.org/jira/browse/DAEMON-451
> Project: Commons Daemon
>  Issue Type: Bug
>  Components: Procrun
>Affects Versions: 1.3.3
>Reporter: Konstantin Kolinko
>Priority: Minor
> Attachments: 
> 0001-Fix-DAEMON-451.-Honor-the-stack-size-option-when-cre.patch
>
>
> This issue was originally reported for Apache Tomcat, see
> [https://bz.apache.org/bugzilla/show_bug.cgi?id=66327]
> A user provided a sample web application (see BZ 66327 attachment 38426). 
> This web application was deployed on Apache Tomcat 9.0.70 (uses Commons 
> Daemon 1.3.3), configured as a Windows Service.
> The sample application performs a deep recursion at startup that prints 
> numbers from 1 up to 25000. In the default configuration it fails with a 
> StackOverflowError.
> To reproduce:
> 1. Build the sample web application:
> 1.1. Download it from BZ 66327 (attachment 38426).
> 1.2. Change its pom.xml file, removing "jersey-container-servlet" from the 
> dependencies; it is not needed, and I do not know why it is there.
> 1.3. Build it with Apache Maven (`mvn package`). The result is 
> target/TestStack-1.0-SNAPSHOT.war
> 2. Install Apache Tomcat 9 on Windows as a service:
> 2.1. Download and run the "32-bit/64-bit Windows Service Installer" (*.exe) 
> from
> https://tomcat.apache.org/download-90.cgi
> 2.2. Run it. Use the default options, but once the installation finishes, 
> choose not to start the service.
> 3. Deploy the web application on Tomcat:
> 3.1. Copy the war file into webapps directory of Tomcat.
> 4. Go into the bin directory of Tomcat and run prunmgr (tomcat9w.exe).
> 5. Start Tomcat service, from the first page of prunmgr dialog.
> 6. See the files in the logs directory of Tomcat.
>  - See the localhost.2022-*-*.log file for a StackOverflowError
>  - See the tomcat9-stdout.2022-*-*.log file for the recursion counter.
> If the web application starts successfully, there will be no 
> StackOverflowError, and the counter will go up to 25000.
> If the web application fails to start, there will be a StackOverflowError, 
> and the recursion counter will stop at some value, usually somewhere around 
> 15000.
> 7. Try to increase the stack size:
> 7.1. Stop the service. Clear the files in the logs directory.
> 7.2. In prunmgr dialog (page "Java") change "Thread stack size" to a higher 
> value, e.g. 3072.
> 7.3. Do not forget to press the "Apply" button.
> 7.4. Start the service.
> Expected behaviour:
> It is expected that changing the "Thread stack size" value to a higher value 
> would eventually allow the web application to start successfully.
> Actual behaviour:
> It makes no difference.
> If I reconfigure Tomcat to use a thread pool when starting web applications, 
> instead of using the main thread, the behaviour changes to the expected one.  
> E.g. if you add startStopThreads="3" to a Host element in its conf/server.xml 
> file. See Configuration Reference here:
> https://tomcat.apache.org/tomcat-9.0-doc/config/host.html
> Note: procrun here runs in jvm mode. The configuration for Tomcat is as 
> follows (see bin/service.bat file):
> --StartMode jvm
> --StopMode jvm
> --StartClass org.apache.catalina.startup.Bootstrap
> --StopClass org.apache.catalina.startup.Bootstrap
> --StartParams start
> --StopParams stop



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (COMPRESS-633) Adding support for SevenZ password encryption

2022-12-10 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory resolved COMPRESS-633.
--
Fix Version/s: 1.23
   Resolution: Fixed

> Adding support for SevenZ password encryption
> -
>
> Key: COMPRESS-633
> URL: https://issues.apache.org/jira/browse/COMPRESS-633
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Compressors
>Affects Versions: 1.22
>Reporter: Daniel Santos
>Priority: Major
>  Labels: contributing, features
> Fix For: 1.23
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> 👉🏼 The purpose is to provide password-based encryption for 7z compression in 
> the same way that decryption is already supported, going beyond [one known 
> limitation|https://commons.apache.org/proper/commons-compress/limitations.html].
> ☝️ In this way, I would like to submit my contribution, based on the existing 
> implementation of decryption and the [C++ implementation of 7z 
> |https://github.com/kornelski/7z/blob/main/CPP/7zip/Crypto/7zAes.cpp]
> ✅ I added one unit test
>  
> I prepared a [Pull Request on 
> GitHub|https://github.com/apache/commons-compress/pull/332]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (CRYPTO-160) Package-private class JavaCryptoRandom extends Random

2022-12-12 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/CRYPTO-160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory updated CRYPTO-160:
---
Summary: Package-private class JavaCryptoRandom extends Random  (was: 
JavaCryptoRandom class leaks Random implementation)

> Package-private class JavaCryptoRandom extends Random
> -
>
> Key: CRYPTO-160
> URL: https://issues.apache.org/jira/browse/CRYPTO-160
> Project: Commons Crypto
>  Issue Type: Bug
>Reporter: Adrian Anderson
>Priority: Major
> Fix For: 1.1.1
>
>
> The CryptoRandom implementation class JavaCryptoRandom extends 
> java.util.Random when it doesn't need to, and without re-implementing the 
> "protected int next(int bits)" method. 
> The issue is that if a developer were to use the CryptoRandomFactory to 
> create a JavaCryptoRandom instance and cast it to Random, wanting to use it 
> as a replacement for an existing instance of Random, the implementation 
> would fall back to the java.util.Random (inherited) implementation rather 
> than the CryptoRandom (encapsulated) implementation. 
> For example
> {{CryptoRandom cryptoRandom = CryptoRandomFactory.getCryptoRandom(); 
> //instance of JavaCryptoRandom}}
> {{Random rand = (Random)cryptoRandom;}}
> {{long randomLong = rand.nextLong(); //returns java.util.Random.nextLong(), 
> circumventing SecureRandom}}
> A simple solution would be to override the "protected int next(int bits)" 
> method within JavaCryptoRandom to invoke the SecureRandom "next(int bits)" 
> implementation. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (CRYPTO-160) Package-private class JavaCryptoRandom extends Random but should not

2022-12-12 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/CRYPTO-160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory updated CRYPTO-160:
---
Summary: Package-private class JavaCryptoRandom extends Random but should 
not  (was: Package-private class JavaCryptoRandom extends Random)

> Package-private class JavaCryptoRandom extends Random but should not
> 
>
> Key: CRYPTO-160
> URL: https://issues.apache.org/jira/browse/CRYPTO-160
> Project: Commons Crypto
>  Issue Type: Bug
>Reporter: Adrian Anderson
>Priority: Major
> Fix For: 1.1.1
>
>
> The CryptoRandom implementation class JavaCryptoRandom extends 
> java.util.Random when it doesn't need to, and without re-implementing the 
> "protected int next(int bits)" method. 
> The issue is that if a developer were to use the CryptoRandomFactory to 
> create a JavaCryptoRandom instance and cast it to Random, wanting to use it 
> as a replacement for an existing instance of Random, the implementation 
> would fall back to the java.util.Random (inherited) implementation rather 
> than the CryptoRandom (encapsulated) implementation. 
> For example
> {{CryptoRandom cryptoRandom = CryptoRandomFactory.getCryptoRandom(); 
> //instance of JavaCryptoRandom}}
> {{Random rand = (Random)cryptoRandom;}}
> {{long randomLong = rand.nextLong(); //returns java.util.Random.nextLong(), 
> circumventing SecureRandom}}
> A simple solution would be to override the "protected int next(int bits)" 
> method within JavaCryptoRandom to invoke the SecureRandom "next(int bits)" 
> implementation. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (CRYPTO-160) Package-private class JavaCryptoRandom extends Random but should not

2022-12-12 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/CRYPTO-160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17646350#comment-17646350
 ] 

Gary D. Gregory commented on CRYPTO-160:


Note that while it is true that {{JavaCryptoRandom}} extends {{Random}} and 
does not need to, an app casting a {{CryptoRandom}} to a {{Random}} is IMO out 
of bounds and unsupported, since it is not documented and the app has to force 
the cast. This was not the intended use case anyway. The next release will not 
extend {{Random}} and will still maintain binary compatibility, since the 
class is package-private.
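
For an application that genuinely needs a {{Random}} view, a possible alternative to the unsupported cast is a small adapter built only on the documented {{CryptoRandom#nextBytes(byte[])}} method. This is a hedged sketch, not the library's fix; the class name is made up, and closing the CryptoRandom is omitted for brevity:

{code:java}
import java.security.GeneralSecurityException;
import java.util.Random;

import org.apache.commons.crypto.random.CryptoRandom;
import org.apache.commons.crypto.random.CryptoRandomFactory;

public class CryptoRandomAdapter extends Random {

    private final CryptoRandom cryptoRandom;

    public CryptoRandomAdapter(CryptoRandom cryptoRandom) {
        this.cryptoRandom = cryptoRandom;
    }

    // Every Random method funnels through next(bits), so delegating it here
    // routes nextInt/nextLong/etc. to the CryptoRandom implementation.
    @Override
    protected int next(int bits) {
        byte[] buf = new byte[4];
        cryptoRandom.nextBytes(buf);
        int value = ((buf[0] & 0xff) << 24) | ((buf[1] & 0xff) << 16)
                | ((buf[2] & 0xff) << 8) | (buf[3] & 0xff);
        return value >>> (32 - bits);
    }

    public static void main(String[] args) throws GeneralSecurityException {
        Random rand = new CryptoRandomAdapter(CryptoRandomFactory.getCryptoRandom());
        System.out.println(rand.nextLong());
    }
}
{code}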

> Package-private class JavaCryptoRandom extends Random but should not
> 
>
> Key: CRYPTO-160
> URL: https://issues.apache.org/jira/browse/CRYPTO-160
> Project: Commons Crypto
>  Issue Type: Bug
>Reporter: Adrian Anderson
>Priority: Major
> Fix For: 1.2.0
>
>
> The CryptoRandom implementation class JavaCryptoRandom extends 
> java.util.Random when it doesn't need to, and without re-implementing the 
> "protected int next(int bits)" method. 
> The issue is that if a developer were to use the CryptoRandomFactory to 
> create a JavaCryptoRandom instance and cast it to Random, wanting to use it 
> as a replacement for an existing instance of Random, the implementation 
> would fall back to the java.util.Random (inherited) implementation rather 
> than the CryptoRandom (encapsulated) implementation. 
> For example
> {{CryptoRandom cryptoRandom = CryptoRandomFactory.getCryptoRandom(); 
> //instance of JavaCryptoRandom}}
> {{Random rand = (Random)cryptoRandom;}}
> {{long randomLong = rand.nextLong(); //returns java.util.Random.nextLong(), 
> circumventing SecureRandom}}
> A simple solution would be to override the "protected int next(int bits)" 
> method within JavaCryptoRandom to invoke the SecureRandom "next(int bits)" 
> implementation. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (DAEMON-445) Changes of DAEMON-441 cause trouble with empty env vars

2022-12-17 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/DAEMON-445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17648926#comment-17648926
 ] 

Gary D. Gregory commented on DAEMON-445:


[~kkolinko] 

It would be best if you could provide the patch as a PR on GitHub. TY.

 

> Changes of DAEMON-441 cause trouble with empty env vars
> ---
>
> Key: DAEMON-445
> URL: https://issues.apache.org/jira/browse/DAEMON-445
> Project: Commons Daemon
>  Issue Type: Bug
>Affects Versions: 1.3.1
>Reporter: Alexander Fischer
>Priority: Major
> Attachments: 
> 0001-Fix-DAEMON-445.-Fix-processing-of-environment-variab.patch
>
>
> The changes done via DAEMON-441, namely 
> [https://github.com/apache/commons-daemon/commit/97b31058ecf5e4dc202188d8e8917f6caa90dcfc#diff-e7f9bbe0d9947378640c1e1c91d8dc72c93e6d1034218458158a1cc43f2f3b9fR278]
>  leads to a failing service installation when the environment variable 
> exists but has no value. 
> In my use case the supported environment variables are explicitly set empty 
> if the defaults shall be used. This is done to avoid interference with 
> environment variables that may be set by the calling process. 
> In such a case example output is:
> {noformat}
> [2022-06-29 11:32:11] [error] [10552] Error getting environment variable 
> PR_LibraryPath
> [2022-06-29 11:32:11] [warn]  [10552] Failed to grant service user '.\user' 
> write permissions to log path 'C:\Windows\system32\LogFiles\Apache' due to 
> error '19: The media is write protected.'
> {noformat}
> while the environment is set
> {noformat}
> PR_DESCRIPTION: tomcat-service description
> PR_DISPLAYNAME: tomcat-service
> PR_INSTALL: C:\apache-tomcat\bin\tomcat9.exe
> PR_SERVICEUSER: .\hidden
> PR_SERVICEPASSWORD: hidden
> PR_STARTUP: auto
> PR_CLASSPATH: 
> C:\apache-tomcat/bin/bootstrap.jar;C:\apache-tomcat/bin/tomcat-juli.jar;
> PR_JAVA_HOME: C:\jdk
> PR_JVM: C:\jdk\bin\server\jvm.dll
> PR_JVMMS: 256
> PR_JVMMX: 3072
> PR_JVMOPTIONS: 
> -Dcatalina.base=C:\tomcat-service;-Dcatalina.home=C:\apache-tomcat;-Djava.io.tmpdir=C:\tomcat-service/temp;
> PR_ENVIRONMENT: 
> PATH='C:\Windows;C:\Windows\System32;C:\Windows\System32\Wbem;C:\apache-tomcat\bin'
> PR_LIBRARYPATH: C:\apache-tomcat/bin
> PR_STARTCLASS: org.apache.catalina.startup.Bootstrap
> PR_STARTMETHOD: 
> PR_STARTPARAMS: start
> PR_STARTMODE: jvm
> PR_STARTPATH: C:\tomcat-service
> PR_STOPCLASS: org.apache.catalina.startup.Bootstrap
> PR_STOPMETHOD: 
> PR_STOPPARAMS: stop
> PR_STOPMODE: jvm
> PR_STOPPATH: C:\tomcat-service
> PR_STOPTIMEOUT: 60
> PR_LOGJNIMESSAGES: 0
> PR_LOGLEVEL: Info
> PR_LOGPATH: C:\Logs\tomcat-service
> PR_LOGPREFIX: tomcat-service
> PR_STDERROR: auto
> PR_STDOUTPUT: auto
> PR_DEPENDSON: postgresql-x64-11;
> {noformat}
> That means the environment variables PR_STOPMETHOD, PR_LIBRARYPATH, and 
> PR_STARTMETHOD are defined but empty.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (LANG-1685) [JDK17] ToStringBuilder.reflectionToString fails with InaccessibleObjectException on java.lang classes

2022-12-30 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/LANG-1685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17653187#comment-17653187
 ] 

Gary D. Gregory commented on LANG-1685:
---

[~terjestrand] 

No, it would not make sense: it wouldn't even compile. 

> [JDK17] ToStringBuilder.reflectionToString fails with 
> InaccessibleObjectException on java.lang classes
> --
>
> Key: LANG-1685
> URL: https://issues.apache.org/jira/browse/LANG-1685
> Project: Commons Lang
>  Issue Type: Bug
>  Components: lang.builder.*
>Affects Versions: 3.12.0
>Reporter: David Connard
>Priority: Major
>
> JDK17 prevents reflective access to java.lang classes by default.
> The following code fails on JDK17+
> {code:java}
> System.out.println("boom = " + 
> ToStringBuilder.reflectionToString(Set.of(123))); {code}
> I understand that we can "--add-opens" (eg. as you've done for hbase builds 
> in 
> [https://github.com/jojochuang/hbase/commit/b909db7ca7c221308ad5aba1ea58317c77358b94)]
>  ... but, ideally, that should not be a standard requirement to run an 
> application that uses {{ToStringBuilder.reflectionToString()}} on JDK17+
> The following sample code appears to work for our use-case, albeit with some 
> additional spurious output on the object.  It catches the exception and just 
> dumps a raw object toString() instead.  You probably want to improve on this.
> {code:java}
> ReflectionToStringBuilder jdk17SafeToStringBuilder = new ReflectionToStringBuilder(obj) {
>     protected void appendFieldsIn(final Class clazz) {
>         if (clazz.isArray()) {
>             this.reflectionAppendArray(this.getObject());
>             return;
>         }
>         // The elements in the returned array are not sorted and are not in any particular order.
>         final Field[] fields = clazz.getDeclaredFields();
>         Arrays.sort(fields, Comparator.comparing(Field::getName));
>         try {
>             // First, check that we can delve into the fields. With JDK 17+,
>             // we cannot do this by default on various JDK classes.
>             AccessibleObject.setAccessible(fields, true);
>         } catch (InaccessibleObjectException ioEx) {
>             // JDK 17 prevents access to the fields. Ignore this, assume they
>             // have a decent toString(), and do not reflect into them.
>             this.appendToString(Objects.toString(obj));
>             return;
>         }
>         for (final Field field : fields) {
>             final String fieldName = field.getName();
>             if (this.accept(field)) {
>                 try {
>                     // Warning: Field.get(Object) creates wrapper objects
>                     // for primitive types.
>                     final Object fieldValue = this.getValue(field);
>                     if (!isExcludeNullValues() || fieldValue != null) {
>                         this.append(fieldName, fieldValue,
>                                 !field.isAnnotationPresent(ToStringSummary.class));
>                     }
>                 } catch (final IllegalAccessException ex) {
>                     // This can't happen: we would get a SecurityException instead.
>                     // Throw a runtime exception in case the impossible happens.
>                     throw new InternalError("Unexpected IllegalAccessException: " + ex.getMessage());
>                 }
>             }
>         }
>     }
> };
>  {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (COMPRESS-613) Write ZIP extra time fields automatically

2023-01-01 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory updated COMPRESS-613:
-
External issue URL: https://github.com/apache/commons-compress/pull/345

> Write ZIP extra time fields automatically
> -
>
> Key: COMPRESS-613
> URL: https://issues.apache.org/jira/browse/COMPRESS-613
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Archivers
>Affects Versions: 1.21
>Reporter: Andre Brait
>Priority: Major
>  Labels: zip
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> When writing to a Zip file through ZipArchiveOutputStream, setting creation 
> and access times in a ZipArchiveEntry does not cause these to be reflected as 
> X5455 or X000A extra fields in the resulting zip file. This also happens for 
> modification times that do not fit into an MS-DOS time.
> As a consequence, the date range is reduced, as well as the granularity (from 
> 100ns intervals to seconds).
> ZipEntry and standard java.util.zip facilities do that automatically, but 
> that's missing here.
> My proposal is to use the same logic java.util.zip does and add those extra 
> fields automatically, if the situation requires them.
> See my existing logic for this here: 
> [https://github.com/andrebrait/DATROMTool/blob/86a4f4978bab250ca54d047c58b4f91e7dbbcc7f/core/src/main/java/io/github/datromtool/io/FileCopier.java#L1425]
> It's (almost) the same logic from java.util.zip, but adapted to be used with 
> ZipArchiveEntry.
> If you're ok with it, I can send a PR.
> Actual logic will be more like 
> {{{}java.util.zip.ZipOutputStream#writeLOC(XEntry){}}}, represented below:
> {code:java}
> int elenEXTT = 0; // info-zip extended timestamp
> int flagEXTT = 0;
> long umtime = -1;
> long uatime = -1;
> long uctime = -1;
> if (e.mtime != null) {
> elenEXTT += 4;
> flagEXTT |= EXTT_FLAG_LMT;
> umtime = fileTimeToUnixTime(e.mtime);
> }
> if (e.atime != null) {
> elenEXTT += 4;
> flagEXTT |= EXTT_FLAG_LAT;
> uatime = fileTimeToUnixTime(e.atime);
> }
> if (e.ctime != null) {
> elenEXTT += 4;
> flagEXTT |= EXTT_FLAT_CT;
> uctime = fileTimeToUnixTime(e.ctime);
> }
> if (flagEXTT != 0) {
> // to use ntfs time if any m/a/ctime is beyond unixtime upper bound
> if (umtime > UPPER_UNIXTIME_BOUND ||
> uatime > UPPER_UNIXTIME_BOUND ||
> uctime > UPPER_UNIXTIME_BOUND) {
> elen += 36;// NTFS time, total 36 bytes
> } else {
> elen += (elenEXTT + 5);// headid(2) + size(2) + flag(1) + data
> }
> }
> writeShort(elen);
> writeBytes(nameBytes, 0, nameBytes.length);
> if (hasZip64) {
> writeShort(ZIP64_EXTID);
> writeShort(16);
> writeLong(e.size);
> writeLong(e.csize);
> }
> if (flagEXTT != 0) {
> if (umtime > UPPER_UNIXTIME_BOUND ||
> uatime > UPPER_UNIXTIME_BOUND ||
> uctime > UPPER_UNIXTIME_BOUND) {
> writeShort(EXTID_NTFS);// id
> writeShort(32);// data size
> writeInt(0);   // reserved
> writeShort(0x0001);// NTFS attr tag
> writeShort(24);
> writeLong(e.mtime == null ? WINDOWS_TIME_NOT_AVAILABLE
>   : fileTimeToWinTime(e.mtime));
> writeLong(e.atime == null ? WINDOWS_TIME_NOT_AVAILABLE
>   : fileTimeToWinTime(e.atime));
> writeLong(e.ctime == null ? WINDOWS_TIME_NOT_AVAILABLE
>   : fileTimeToWinTime(e.ctime));
> } else {
> writeShort(EXTID_EXTT);
> writeShort(elenEXTT + 1);  // flag + data
> writeByte(flagEXTT);
> if (e.mtime != null)
> writeInt(umtime);
> if (e.atime != null)
> writeInt(uatime);
> if (e.ctime != null)
> writeInt(uctime);
> }
> }
> writeExtra(e.extra);
> locoff = written;
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (COMPRESS-613) Write ZIP extra time fields automatically

2023-01-01 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory resolved COMPRESS-613.
--
Fix Version/s: 1.23
   Resolution: Fixed

[~andrebrait] 

Thank you for your PR.

Please verify git master and close.

 

> Write ZIP extra time fields automatically
> -
>
> Key: COMPRESS-613
> URL: https://issues.apache.org/jira/browse/COMPRESS-613
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Archivers
>Affects Versions: 1.21
>Reporter: Andre Brait
>Priority: Major
>  Labels: zip
> Fix For: 1.23
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> When writing to a Zip file through ZipArchiveOutputStream, setting creation 
> and access times in a ZipArchiveEntry does not cause these to be reflected as 
> X5455 or X000A extra fields in the resulting zip file. This also happens for 
> modification times that do not fit into an MS-DOS time.
> As a consequence, the date range is reduced, as well as the granularity (from 
> 100ns intervals to seconds).
> ZipEntry and standard java.util.zip facilities do that automatically, but 
> that's missing here.
> My proposal is to use the same logic java.util.zip does and add those extra 
> fields automatically, if the situation requires them.
> See my existing logic for this here: 
> [https://github.com/andrebrait/DATROMTool/blob/86a4f4978bab250ca54d047c58b4f91e7dbbcc7f/core/src/main/java/io/github/datromtool/io/FileCopier.java#L1425]
> It's (almost) the same logic from java.util.zip, but adapted to be used with 
> ZipArchiveEntry.
> If you're ok with it, I can send a PR.
> Actual logic will be more like 
> {{{}java.util.zip.ZipOutputStream#writeLOC(XEntry){}}}, represented below:
> {code:java}
> int elenEXTT = 0; // info-zip extended timestamp
> int flagEXTT = 0;
> long umtime = -1;
> long uatime = -1;
> long uctime = -1;
> if (e.mtime != null) {
> elenEXTT += 4;
> flagEXTT |= EXTT_FLAG_LMT;
> umtime = fileTimeToUnixTime(e.mtime);
> }
> if (e.atime != null) {
> elenEXTT += 4;
> flagEXTT |= EXTT_FLAG_LAT;
> uatime = fileTimeToUnixTime(e.atime);
> }
> if (e.ctime != null) {
> elenEXTT += 4;
> flagEXTT |= EXTT_FLAT_CT;
> uctime = fileTimeToUnixTime(e.ctime);
> }
> if (flagEXTT != 0) {
> // to use ntfs time if any m/a/ctime is beyond unixtime upper bound
> if (umtime > UPPER_UNIXTIME_BOUND ||
> uatime > UPPER_UNIXTIME_BOUND ||
> uctime > UPPER_UNIXTIME_BOUND) {
> elen += 36;// NTFS time, total 36 bytes
> } else {
> elen += (elenEXTT + 5);// headid(2) + size(2) + flag(1) + data
> }
> }
> writeShort(elen);
> writeBytes(nameBytes, 0, nameBytes.length);
> if (hasZip64) {
> writeShort(ZIP64_EXTID);
> writeShort(16);
> writeLong(e.size);
> writeLong(e.csize);
> }
> if (flagEXTT != 0) {
> if (umtime > UPPER_UNIXTIME_BOUND ||
> uatime > UPPER_UNIXTIME_BOUND ||
> uctime > UPPER_UNIXTIME_BOUND) {
> writeShort(EXTID_NTFS);// id
> writeShort(32);// data size
> writeInt(0);   // reserved
> writeShort(0x0001);// NTFS attr tag
> writeShort(24);
> writeLong(e.mtime == null ? WINDOWS_TIME_NOT_AVAILABLE
>   : fileTimeToWinTime(e.mtime));
> writeLong(e.atime == null ? WINDOWS_TIME_NOT_AVAILABLE
>   : fileTimeToWinTime(e.atime));
> writeLong(e.ctime == null ? WINDOWS_TIME_NOT_AVAILABLE
>   : fileTimeToWinTime(e.ctime));
> } else {
> writeShort(EXTID_EXTT);
> writeShort(elenEXTT + 1);  // flag + data
> writeByte(flagEXTT);
> if (e.mtime != null)
> writeInt(umtime);
> if (e.atime != null)
> writeInt(uatime);
> if (e.ctime != null)
> writeInt(uctime);
> }
> }
> writeExtra(e.extra);
> locoff = written;
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (POOL-393) BaseGenericObjectPool.jmxRegister may cost too much time

2023-01-01 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/POOL-393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory updated POOL-393:
-
External issue URL: https://github.com/apache/commons-pool/pull/199

> BaseGenericObjectPool.jmxRegister may cost too much time
> 
>
> Key: POOL-393
> URL: https://issues.apache.org/jira/browse/POOL-393
> Project: Commons Pool
>  Issue Type: Improvement
>Affects Versions: 2.4.2
>Reporter: Shichao Yuan
>Priority: Major
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
>  
> When creating many pools, I find that it takes too much time to register JMX MBeans.
> In the code, the ObjectName's postfix always starts with 1, so many 
> InstanceAlreadyExistsExceptions may be thrown before registration succeeds.
> Maybe a random number would be a better choice, or an atomic long.
> {code:java}
> private ObjectName jmxRegister(BaseObjectPoolConfig config,
>         String jmxNameBase, String jmxNamePrefix) {
>     ObjectName objectName = null;
>     MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
>     int i = 1;
>     boolean registered = false;
>     String base = config.getJmxNameBase();
>     if (base == null) {
>         base = jmxNameBase;
>     }
>     while (!registered) {
>         try {
>             ObjectName objName;
>             // Skip the numeric suffix for the first pool in case there is
>             // only one so the names are cleaner.
>             if (i == 1) {
>                 objName = new ObjectName(base + jmxNamePrefix);
>             } else {
>                 objName = new ObjectName(base + jmxNamePrefix + i);
>             }
>             mbs.registerMBean(this, objName);
>             objectName = objName;
>             registered = true;
>         } catch (MalformedObjectNameException e) {
>             if (BaseObjectPoolConfig.DEFAULT_JMX_NAME_PREFIX.equals(
>                     jmxNamePrefix) && jmxNameBase.equals(base)) {
>                 // Shouldn't happen. Skip registration if it does.
>                 registered = true;
>             } else {
>                 // Must be an invalid name. Use the defaults instead.
>                 jmxNamePrefix = BaseObjectPoolConfig.DEFAULT_JMX_NAME_PREFIX;
>                 base = jmxNameBase;
>             }
>         } catch (InstanceAlreadyExistsException e) {
>             // Increment the index and try again
>             i++;
>         } catch (MBeanRegistrationException e) {
>             // Shouldn't happen. Skip registration if it does.
>             registered = true;
>         } catch (NotCompliantMBeanException e) {
>             // Shouldn't happen. Skip registration if it does.
>             registered = true;
>         }
>     }
>     return objectName;
> }
> {code}
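
A minimal, self-contained sketch of the reporter's suggestion (seed the name suffix from a shared counter rather than always retrying from 1); it is not asserted to be the change that was actually merged:

{code:java}
import java.lang.management.ManagementFactory;
import java.util.concurrent.atomic.AtomicLong;

import javax.management.InstanceAlreadyExistsException;
import javax.management.MBeanServer;
import javax.management.ObjectName;

final class JmxNames {

    // Shared counter so each pool starts from a fresh suffix instead of
    // retrying 1, 2, 3, ... against every already-registered name.
    private static final AtomicLong COUNTER = new AtomicLong();

    /** Registers {@code mbean} under base + prefix + suffix, retrying on name collisions. */
    static ObjectName register(Object mbean, String base, String prefix) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        while (true) {
            ObjectName name = new ObjectName(base + prefix + COUNTER.incrementAndGet());
            try {
                mbs.registerMBean(mbean, name);
                return name;
            } catch (InstanceAlreadyExistsException e) {
                // Another pool grabbed this suffix concurrently; loop and try the next one.
            }
        }
    }
}
{code}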



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (POOL-393) BaseGenericObjectPool.jmxRegister may cost too much time

2023-01-01 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/POOL-393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory resolved POOL-393.
--
Fix Version/s: 2.12.0
   Resolution: Fixed

In git master. Please verify and close.

> BaseGenericObjectPool.jmxRegister may cost too much time
> 
>
> Key: POOL-393
> URL: https://issues.apache.org/jira/browse/POOL-393
> Project: Commons Pool
>  Issue Type: Improvement
>Affects Versions: 2.4.2
>Reporter: Shichao Yuan
>Priority: Major
> Fix For: 2.12.0
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
>  
> When creating many pools, I find that it takes too much time to register JMX MBeans.
> In the code, the ObjectName's postfix always starts with 1, so many 
> InstanceAlreadyExistsExceptions may be thrown before registration succeeds.
> Maybe a random number would be a better choice, or an atomic long.
> {code:java}
> private ObjectName jmxRegister(BaseObjectPoolConfig config,
>         String jmxNameBase, String jmxNamePrefix) {
>     ObjectName objectName = null;
>     MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
>     int i = 1;
>     boolean registered = false;
>     String base = config.getJmxNameBase();
>     if (base == null) {
>         base = jmxNameBase;
>     }
>     while (!registered) {
>         try {
>             ObjectName objName;
>             // Skip the numeric suffix for the first pool in case there is
>             // only one so the names are cleaner.
>             if (i == 1) {
>                 objName = new ObjectName(base + jmxNamePrefix);
>             } else {
>                 objName = new ObjectName(base + jmxNamePrefix + i);
>             }
>             mbs.registerMBean(this, objName);
>             objectName = objName;
>             registered = true;
>         } catch (MalformedObjectNameException e) {
>             if (BaseObjectPoolConfig.DEFAULT_JMX_NAME_PREFIX.equals(
>                     jmxNamePrefix) && jmxNameBase.equals(base)) {
>                 // Shouldn't happen. Skip registration if it does.
>                 registered = true;
>             } else {
>                 // Must be an invalid name. Use the defaults instead.
>                 jmxNamePrefix = BaseObjectPoolConfig.DEFAULT_JMX_NAME_PREFIX;
>                 base = jmxNameBase;
>             }
>         } catch (InstanceAlreadyExistsException e) {
>             // Increment the index and try again
>             i++;
>         } catch (MBeanRegistrationException e) {
>             // Shouldn't happen. Skip registration if it does.
>             registered = true;
>         } catch (NotCompliantMBeanException e) {
>             // Shouldn't happen. Skip registration if it does.
>             registered = true;
>         }
>     }
>     return objectName;
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (POOL-264) NullPointerException in GKOP.borrowObject()

2023-01-02 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/POOL-264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory closed POOL-264.

Resolution: Won't Fix

Please port to 2.x.

> NullPointerException in GKOP.borrowObject()
> ---
>
> Key: POOL-264
> URL: https://issues.apache.org/jira/browse/POOL-264
> Project: Commons Pool
>  Issue Type: Bug
>Affects Versions: 1.5.6, 1.5.7, 1.6
>Reporter: Leonid Meyerguz
>Priority: Major
>
> While I cannot pin down a consistent repro, I occasionally observe a 
> NullPointerException at 
> org.apache.commons.pool.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:1126)
> The pool is configured as follows:
> maxActive = -1
> maxIdle = 32
> maxTotal = 32
> whenExhaustedAction = WHEN_EXHAUSTED_GROW
> timeBetweenEvictionRunsMillis = 2
> minEvictableIdleTimeMillis = 6
> numTestsPerEvictionRun = -1
> The NullPointerException is thrown in the WHEN_EXHAUSTED_GROW branch of the 
> code.  Specifically it appears that latch.getPool() returns null.
> Any suggestions for a work-around would be appreciated.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (CSV-141) Handle malformed CSV files

2023-01-04 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/CSV-141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory updated CSV-141:

Attachment: image-2023-01-04-14-00-57-158.png

> Handle malformed CSV files
> --
>
> Key: CSV-141
> URL: https://issues.apache.org/jira/browse/CSV-141
> Project: Commons CSV
>  Issue Type: Wish
>  Components: Parser
>Affects Versions: 1.0
>Reporter: Nguyen Minh
>Priority: Minor
> Fix For: 1.x
>
> Attachments: image-2023-01-04-14-00-57-158.png
>
>
> My Java application has to handle thousands of CSV files uploaded by 
> client phones every day. Some of these CSV files have the wrong format, and 
> I'm not sure why.
> Here is my sample CSV. Microsoft Excel parses it correctly, but neither Commons 
> CSV nor OpenCSV can parse it. OpenCSV can't parse line 2 (due to the '\' 
> character) and Commons CSV will crash on lines 3 and 4:
> "1414770317901","android.widget.EditText","pass sem1 _84*|*","0","pass sem1 
> _8"
> "1414770318470","android.widget.EditText","pass sem1 _84:*|*","0","pass sem1 
> _84:\"
> "1414770318327","android.widget.EditText","pass sem1 
> "1414770318628","android.widget.EditText","pass sem1 _84*|*","0","pass sem1
> Line 3: java.io.IOException: (line 5) invalid char between encapsulated token 
> and delimiter
>   at org.apache.commons.csv.CSVParser$1.getNextRecord(CSVParser.java:398)
>   at org.apache.commons.csv.CSVParser$1.hasNext(CSVParser.java:407)
> Line 4: java.io.IOException: (startline 5) EOF reached before encapsulated 
> token finished
>   at org.apache.commons.csv.CSVParser$1.getNextRecord(CSVParser.java:398)
>   at org.apache.commons.csv.CSVParser$1.hasNext(CSVParser.java:407)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (CSV-141) Handle malformed CSV files

2023-01-04 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/CSV-141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17654602#comment-17654602
 ] 

Gary D. Gregory commented on CSV-141:
-

Note that if you use Excel to import the example in the description, you are 
asking for trouble, per Excel's warning:

!image-2023-01-04-14-00-57-158.png!

> Handle malformed CSV files
> --
>
> Key: CSV-141
> URL: https://issues.apache.org/jira/browse/CSV-141
> Project: Commons CSV
>  Issue Type: Wish
>  Components: Parser
>Affects Versions: 1.0
>Reporter: Nguyen Minh
>Priority: Minor
> Fix For: 1.x
>
> Attachments: image-2023-01-04-14-00-57-158.png
>
>
> My Java application has to handle thousands of CSV files uploaded by 
> client phones every day. Some of these CSV files have the wrong format, and 
> I'm not sure why.
> Here is my sample CSV. Microsoft Excel parses it correctly, but neither Commons 
> CSV nor OpenCSV can parse it. OpenCSV can't parse line 2 (due to the '\' 
> character) and Commons CSV will crash on lines 3 and 4:
> "1414770317901","android.widget.EditText","pass sem1 _84*|*","0","pass sem1 
> _8"
> "1414770318470","android.widget.EditText","pass sem1 _84:*|*","0","pass sem1 
> _84:\"
> "1414770318327","android.widget.EditText","pass sem1 
> "1414770318628","android.widget.EditText","pass sem1 _84*|*","0","pass sem1
> Line 3: java.io.IOException: (line 5) invalid char between encapsulated token 
> and delimiter
>   at org.apache.commons.csv.CSVParser$1.getNextRecord(CSVParser.java:398)
>   at org.apache.commons.csv.CSVParser$1.hasNext(CSVParser.java:407)
> Line 4: java.io.IOException: (startline 5) EOF reached before encapsulated 
> token finished
>   at org.apache.commons.csv.CSVParser$1.getNextRecord(CSVParser.java:398)
>   at org.apache.commons.csv.CSVParser$1.hasNext(CSVParser.java:407)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (CSV-141) Handle malformed CSV files

2023-01-04 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/CSV-141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory updated CSV-141:

External issue URL: https://github.com/apache/commons-csv/pull/295

> Handle malformed CSV files
> --
>
> Key: CSV-141
> URL: https://issues.apache.org/jira/browse/CSV-141
> Project: Commons CSV
>  Issue Type: Wish
>  Components: Parser
>Affects Versions: 1.0
>Reporter: Nguyen Minh
>Priority: Minor
> Fix For: 1.x
>
> Attachments: image-2023-01-04-14-00-57-158.png
>
>
> My Java application has to handle thousands of CSV files uploaded by 
> client phones every day. Some of these CSV files have the wrong format, and 
> I'm not sure why.
> Here is my sample CSV. Microsoft Excel parses it correctly, but neither Commons 
> CSV nor OpenCSV can parse it. OpenCSV can't parse line 2 (due to the '\' 
> character) and Commons CSV will crash on lines 3 and 4:
> "1414770317901","android.widget.EditText","pass sem1 _84*|*","0","pass sem1 
> _8"
> "1414770318470","android.widget.EditText","pass sem1 _84:*|*","0","pass sem1 
> _84:\"
> "1414770318327","android.widget.EditText","pass sem1 
> "1414770318628","android.widget.EditText","pass sem1 _84*|*","0","pass sem1
> Line 3: java.io.IOException: (line 5) invalid char between encapsulated token 
> and delimiter
>   at org.apache.commons.csv.CSVParser$1.getNextRecord(CSVParser.java:398)
>   at org.apache.commons.csv.CSVParser$1.hasNext(CSVParser.java:407)
> Line 4: java.io.IOException: (startline 5) EOF reached before encapsulated 
> token finished
>   at org.apache.commons.csv.CSVParser$1.getNextRecord(CSVParser.java:398)
>   at org.apache.commons.csv.CSVParser$1.hasNext(CSVParser.java:407)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (COMPRESS-637) changes-report.html does not reflect release 1.22

2023-01-06 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17655431#comment-17655431
 ] 

Gary D. Gregory commented on COMPRESS-637:
--

[~mattsicker] ?

> changes-report.html does not reflect release 1.22
> -
>
> Key: COMPRESS-637
> URL: https://issues.apache.org/jira/browse/COMPRESS-637
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.22
>Reporter: MrBump
>Priority: Trivial
> Fix For: 1.22
>
>
> Webpage about releases still does not reflect that Compress 1.22 has been 
> released:
> https://commons.apache.org/proper/commons-compress/changes-report.html
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (NET-650) IMAPClient over proxy doesn't properly resolve DNS

2023-01-09 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/NET-650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory resolved NET-650.
-
Fix Version/s: 3.10.0
   Resolution: Fixed

> IMAPClient over proxy doesn't properly resolve DNS
> --
>
> Key: NET-650
> URL: https://issues.apache.org/jira/browse/NET-650
> Project: Commons Net
>  Issue Type: Bug
>  Components: IMAP
>Affects Versions: 3.6
>Reporter: Matthew McGillis
>Priority: Major
> Fix For: 3.10.0
>
> Attachments: imapproxy.java, imapproxy2.java, socketproxy.java
>
>
> IMAPClient when configured to use a socks proxy is not able to resolve DNS 
> names through the proxy.
> See attached sample code, if I use it with:
> {noformat}
> $ java -DsocksProxyHost=localhost -DsocksProxyPort=16003 -cp 
> .:./commons-net-3.6.jar imapproxy imap.server.test.com user1 userpass
> connect error: java.net.UnknownHostException: imap.server.test.com: unknown 
> error
> {noformat}
> vs if I use it with the appropriate IP:
> {noformat}
> $ java -DsocksProxyHost=localhost -DsocksProxyPort=16003 -cp 
> .:./commons-net-3.6.jar imapproxy 10.250.3.127 user1 userpass
> * OK IMAP4rev1 proxy server ready
> IMAP: 10.250.3.127 143
>  LOGIN ***
>  OK [CAPABILITY IMAP4rev1 ACL BINARY CATENATE CHILDREN CONDSTORE ENABLE 
> ESEARCH ESORT I18NLEVEL=1 ID IDLE LIST-EXTENDED LIST-STATUS LITERAL+ 
> LOGIN-REFERRALS MULTIAPPEND NAMESPACE QRESYNC QUOTA RIGHTS=ektx SASL-IR 
> SEARCHRES SORT THREAD=ORDEREDSUBJECT UIDPLUS UNSELECT WITHIN XLIST] LOGIN 
> completed
> AAAB LOGOUT
> * BYE 10.250.3.127 Zimbra IMAP4rev1 server closing connection
> AAAB OK LOGOUT completed
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (COMPRESS-637) changes-report.html does not reflect release 1.22

2023-01-09 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656267#comment-17656267
 ] 

Gary D. Gregory commented on COMPRESS-637:
--

Running 'mvn clean site' generates the whole site.

> changes-report.html does not reflect release 1.22
> -
>
> Key: COMPRESS-637
> URL: https://issues.apache.org/jira/browse/COMPRESS-637
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.22
>Reporter: MrBump
>Priority: Trivial
> Fix For: 1.22
>
>
> Webpage about releases still does not reflect that Compress 1.22 has been 
> released:
> https://commons.apache.org/proper/commons-compress/changes-report.html
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (COMPRESS-638) The GzipCompressorOutputStream#writeHeader method uses the ISO_8859_1 to write the file name. If the file name contains non-ISO_8859_1 characters, some unknown chara

2023-01-09 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656372#comment-17656372
 ] 

Gary D. Gregory commented on COMPRESS-638:
--

You should likely check the GZip specification to see how this data is allowed 
to be encoded.

> The GzipCompressorOutputStream#writeHeader method uses the ISO_8859_1 to 
> write the file name.  If the file name contains non-ISO_8859_1 characters, 
> some unknown characters are displayed after decompression.
> --
>
> Key: COMPRESS-638
> URL: https://issues.apache.org/jira/browse/COMPRESS-638
> Project: Commons Compress
>  Issue Type: Bug
>Reporter: Radar wen
>Priority: Major
> Attachments: 0110.png
>
>
> The GzipCompressorOutputStream#writeHeader method uses the ISO_8859_1 to 
> write the file name. 
> If the file name contains non-ISO_8859_1 characters, some unknown characters 
> are displayed after decompression. !0110.png!
>  Can we change the ISO_8859_1 to UTF-8? 
>         if (filename != null) {
>             out.write(filename.getBytes(ISO_8859_1));
>             out.write(0);
>         }
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (COMPRESS-638) The GzipCompressorOutputStream#writeHeader method uses the ISO_8859_1 to write the file name. If the file name contains non-ISO_8859_1 characters, some unknown chara

2023-01-09 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656390#comment-17656390
 ] 

Gary D. Gregory commented on COMPRESS-638:
--

Looks like no: 

 

"If FNAME is set, an original file name is present, terminated by a zero byte. 
The name must consist of ISO 8859-1 (LATIN-1) characters; on operating systems 
using EBCDIC or any other character set for file names, the name must be 
translated to the ISO LATIN-1 character set."

From [https://www.ietf.org/rfc/rfc1952.txt]

Do you have other sources that indicate otherwise? 
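
If a caller needs to stay within the RFC while still storing a name, one option is to map the name onto ISO-8859-1 before handing it to the stream's parameters. A hedged sketch; the '?' replacement comes from String#getBytes(Charset), and the example name is made up:

{code:java}
import java.nio.charset.StandardCharsets;

import org.apache.commons.compress.compressors.gzip.GzipParameters;

public final class GzipNameSketch {

    /**
     * Round-trips the name through ISO-8859-1; characters the charset cannot
     * represent are replaced with '?' by String#getBytes(Charset).
     */
    static String toLatin1(String name) {
        return new String(name.getBytes(StandardCharsets.ISO_8859_1), StandardCharsets.ISO_8859_1);
    }

    public static void main(String[] args) {
        GzipParameters parameters = new GzipParameters();
        parameters.setFilename(toLatin1("r\u00e9sum\u00e9-\u6587\u4ef6.txt"));
        System.out.println(parameters.getFilename()); // prints "résumé-??.txt"
    }
}
{code}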

 

> The GzipCompressorOutputStream#writeHeader method uses the ISO_8859_1 to 
> write the file name.  If the file name contains non-ISO_8859_1 characters, 
> some unknown characters are displayed after decompression.
> --
>
> Key: COMPRESS-638
> URL: https://issues.apache.org/jira/browse/COMPRESS-638
> Project: Commons Compress
>  Issue Type: Bug
>Reporter: Radar wen
>Priority: Major
> Attachments: 0110.png
>
>
> The GzipCompressorOutputStream#writeHeader method uses the ISO_8859_1 to 
> write the file name. 
> If the file name contains non-ISO_8859_1 characters, some unknown characters 
> are displayed after decompression. !0110.png!
>  Can we change the ISO_8859_1 to UTF-8? 
>         if (filename != null) {
>             out.write(filename.getBytes(ISO_8859_1));
>             out.write(0);
>         }
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (COMPRESS-638) The GzipCompressorOutputStream#writeHeader method uses the ISO_8859_1 to write the file name. If the file name contains non-ISO_8859_1 characters, some unknown

2023-01-09 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656390#comment-17656390
 ] 

Gary D. Gregory edited comment on COMPRESS-638 at 1/10/23 3:03 AM:
---

Looks like no: 

"If FNAME is set, an original file name is present, terminated by a zero byte. 
The name must consist of ISO 8859-1 (LATIN-1) characters; on operating systems 
using EBCDIC or any other character set for file names, the name must be 
translated to the ISO LATIN-1 character set."

From [https://www.ietf.org/rfc/rfc1952.txt]

Do you have other sources that indicate otherwise? 

 


was (Author: garydgregory):
Looks like no: 

 

"If FNAME is set, an original file name is present, terminated by a zero byte. 
The name must consist of ISO 8859-1 (LATIN-1) characters; on operating systems 
using EBCDIC or any other character set for file names, the name must be 
translated to the ISO LATIN-1 character set."

From [https://www.ietf.org/rfc/rfc1952.txt]

Do you have other sources that indicate otherwise? 

 

> The GzipCompressorOutputStream#writeHeader method uses the ISO_8859_1 to 
> write the file name.  If the file name contains non-ISO_8859_1 characters, 
> some unknown characters are displayed after decompression.
> --
>
> Key: COMPRESS-638
> URL: https://issues.apache.org/jira/browse/COMPRESS-638
> Project: Commons Compress
>  Issue Type: Bug
>Reporter: Radar wen
>Priority: Major
> Attachments: 0110.png
>
>
> The GzipCompressorOutputStream#writeHeader method uses the ISO_8859_1 to 
> write the file name. 
> If the file name contains non-ISO_8859_1 characters, some unknown characters 
> are displayed after decompression. !0110.png!
>  Can we change the ISO_8859_1 to UTF-8? 
>         if (filename != null) {
>             out.write(filename.getBytes(ISO_8859_1));
>             out.write(0);
>         }
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (COMPRESS-637) changes-report.html does not reflect release 1.22

2023-01-10 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17657002#comment-17657002
 ] 

Gary D. Gregory commented on COMPRESS-637:
--

Well, I dunno how you did it ;) the page

https://commons.apache.org/proper/commons-compress/changes-report.html

says "not released, yet (Java 8)"

> changes-report.html does not reflect release 1.22
> -
>
> Key: COMPRESS-637
> URL: https://issues.apache.org/jira/browse/COMPRESS-637
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.22
>Reporter: MrBump
>Priority: Trivial
> Fix For: 1.22
>
>
> Webpage about releases still does not reflect that Compress 1.22 has been 
> released:
> https://commons.apache.org/proper/commons-compress/changes-report.html
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IO-784) Add support for Appendable to HexDump util

2023-01-10 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/IO-784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory updated IO-784:
---
External issue URL: https://github.com/apache/commons-io/pull/418

> Add support for Appendable to HexDump util
> --
>
> Key: IO-784
> URL: https://issues.apache.org/jira/browse/IO-784
> Project: Commons IO
>  Issue Type: Improvement
>  Components: Utilities
>Affects Versions: 2.11.0
>Reporter: Fredrik Kjellberg
>Priority: Major
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> The HexDump utility currently only supports sending the output of the hex 
> dump to an OutputStream. This makes it cumbersome if you want to, e.g., output 
> the dump to System.out.
> The HexDump utility should support sending the output to an `Appendable`, 
> making it possible to use e.g. `System.out` or `StringBuilder` as output 
> targets.
> Created PR with a proposed fix: 
> [https://github.com/apache/commons-io/pull/418]
>  
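
For reference, a small example of the existing OutputStream-based signature, HexDump.dump(byte[], long, OutputStream, int), that the request wants to generalize to Appendable:

{code:java}
import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.apache.commons.io.HexDump;

public class HexDumpSketch {
    public static void main(String[] args) throws IOException {
        byte[] data = "Hello, hex dump".getBytes(StandardCharsets.UTF_8);
        // Today the dump has to go to an OutputStream; System.out works because
        // it is a PrintStream, but a StringBuilder (an Appendable) does not.
        HexDump.dump(data, 0, System.out, 0);
    }
}
{code}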



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (COMPRESS-638) The GzipCompressorOutputStream#writeHeader method uses the ISO_8859_1 to write the file name. If the file name contains non-ISO_8859_1 characters, some unknown chara

2023-01-10 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17657058#comment-17657058
 ] 

Gary D. Gregory commented on COMPRESS-638:
--

See my previous comment.

> The GzipCompressorOutputStream#writeHeader method uses the ISO_8859_1 to 
> write the file name.  If the file name contains non-ISO_8859_1 characters, 
> some unknown characters are displayed after decompression.
> --
>
> Key: COMPRESS-638
> URL: https://issues.apache.org/jira/browse/COMPRESS-638
> Project: Commons Compress
>  Issue Type: Bug
>Reporter: Radar wen
>Priority: Major
> Attachments: 0110.png
>
>
> The GzipCompressorOutputStream#writeHeader method uses the ISO_8859_1 to 
> write the file name. 
> If the file name contains non-ISO_8859_1 characters, some unknown characters 
> are displayed after decompression. !0110.png!
>  Can we change the ISO_8859_1 to UTF-8? 
>         if (filename != null) {
>             out.write(filename.getBytes(ISO_8859_1));
>             out.write(0);
>         }
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (COMPRESS-638) The GzipCompressorOutputStream#writeHeader method uses the ISO_8859_1 to write the file name. If the file name contains non-ISO_8859_1 characters, some unknown chara

2023-01-10 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17657068#comment-17657068
 ] 

Gary D. Gregory commented on COMPRESS-638:
--

I guess, but it would break adherence to the RFC. It would have to be pluggable 
too. Feel free to provide a PR.

> The GzipCompressorOutputStream#writeHeader method uses the ISO_8859_1 to 
> write the file name.  If the file name contains non-ISO_8859_1 characters, 
> some unknown characters are displayed after decompression.
> --
>
> Key: COMPRESS-638
> URL: https://issues.apache.org/jira/browse/COMPRESS-638
> Project: Commons Compress
>  Issue Type: Bug
>Reporter: Radar wen
>Priority: Major
> Attachments: 0110.png
>
>
> The GzipCompressorOutputStream#writeHeader method uses the ISO_8859_1 to 
> write the file name. 
> If the file name contains non-ISO_8859_1 characters, some unknown characters 
> are displayed after decompression. !0110.png!
>  Can change the ISO_8859_1 to UTF-8? 
>         if (filename != null) {
>             out.write(filename.getBytes(ISO_8859_1));
>             out.write(0);
>         }
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (COMPRESS-638) The GzipCompressorOutputStream#writeHeader method uses the ISO_8859_1 to write the file name. If the file name contains non-ISO_8859_1 characters, some unknown

2023-01-10 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17657068#comment-17657068
 ] 

Gary D. Gregory edited comment on COMPRESS-638 at 1/11/23 3:46 AM:
---

I guess, but it would break adherence to the RFC. It would have to be pluggable 
as well. Feel free to provide a PR.


was (Author: garydgregory):
I guess but it would break adherence to the RFC. It would have to be pluggable 
to. Feel free to provide a PR.

> The GzipCompressorOutputStream#writeHeader method uses the ISO_8859_1 to 
> write the file name.  If the file name contains non-ISO_8859_1 characters, 
> some unknown characters are displayed after decompression.
> --
>
> Key: COMPRESS-638
> URL: https://issues.apache.org/jira/browse/COMPRESS-638
> Project: Commons Compress
>  Issue Type: Bug
>Reporter: Radar wen
>Priority: Major
> Attachments: 0110.png
>
>
> The GzipCompressorOutputStream#writeHeader method uses the ISO_8859_1 to 
> write the file name. 
> If the file name contains non-ISO_8859_1 characters, some unknown characters 
> are displayed after decompression. !0110.png!
>  Can change the ISO_8859_1 to UTF-8? 
>         if (filename != null) {
>             out.write(filename.getBytes(ISO_8859_1));
>             out.write(0);
>         }
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (VFS-829) Fix for SFTP memory leak committed on 2022-07-12 missing from 2.9.0 release on 2022-07-16

2023-01-13 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/VFS-829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory updated VFS-829:

Fix Version/s: (was: 2.10.0)

> Fix for SFTP memory leak committed on 2022-07-12 missing from 2.9.0 release 
> on 2022-07-16
> -
>
> Key: VFS-829
> URL: https://issues.apache.org/jira/browse/VFS-829
> Project: Commons VFS
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Tobias Gierke
>Priority: Major
>
> While debugging some JVM heap dump file I noticed that it was caused by the 
> same problem I reported & provided a fix for in 
> [https://github.com/apache/commons-vfs/pull/272.]
> Those changes have already been merged into the 'master' branch on 2022-07-12
> {code:java}
> d745442a 2022-07-12 12:07:24 -0400 Gary Gregory SFTP: Memory leak because 
> AbstractFileProvider#findFileSystem.
> bd75c2cd 2022-07-12 12:05:01 -0400 Tobias Gierke SFTP: Memory leak because 
> AbstractFileProvider#findFileSystem fails to detect equality of SFTP 
> FileSystemOptions #272  
> 0737dc9b 2022-07-12 12:03:02 -0400 Gary Gregory SFTP: Memory leak because 
> AbstractFileProvider#findFileSystem fails to detect equality of SFTP 
> FileSystemOptions #272 {code}
> but for some reason never made it into the 2.9.0 release on 2022-07-16
> Are there any plans to release a 2.10.0 that includes these fixes ?
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (VFS-829) Fix for SFTP memory leak committed on 2022-07-12 missing from 2.9.0 release on 2022-07-16

2023-01-13 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/VFS-829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17676627#comment-17676627
 ] 

Gary D. Gregory commented on VFS-829:
-

Would you test/check git master? Maybe the change made it in after the release 
was cut. 

> Fix for SFTP memory leak committed on 2022-07-12 missing from 2.9.0 release 
> on 2022-07-16
> -
>
> Key: VFS-829
> URL: https://issues.apache.org/jira/browse/VFS-829
> Project: Commons VFS
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Tobias Gierke
>Priority: Major
>
> While debugging some JVM heap dump file I noticed that it was caused by the 
> same problem I reported & provided a fix for in 
> [https://github.com/apache/commons-vfs/pull/272.]
> Those changes have already been merged into the 'master' branch on 2022-07-12
> {code:java}
> d745442a 2022-07-12 12:07:24 -0400 Gary Gregory SFTP: Memory leak because 
> AbstractFileProvider#findFileSystem.
> bd75c2cd 2022-07-12 12:05:01 -0400 Tobias Gierke SFTP: Memory leak because 
> AbstractFileProvider#findFileSystem fails to detect equality of SFTP 
> FileSystemOptions #272  
> 0737dc9b 2022-07-12 12:03:02 -0400 Gary Gregory SFTP: Memory leak because 
> AbstractFileProvider#findFileSystem fails to detect equality of SFTP 
> FileSystemOptions #272 {code}
> but for some reason never made it into the 2.9.0 release on 2022-07-16
> Are there any plans to release a 2.10.0 that includes these fixes ?
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (VFS-829) Fix for SFTP memory leak committed on 2022-07-12 missing from 2.9.0 release on 2022-07-16

2023-01-13 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/VFS-829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17676654#comment-17676654
 ] 

Gary D. Gregory commented on VFS-829:
-

So it will be in the next release. 

> Fix for SFTP memory leak committed on 2022-07-12 missing from 2.9.0 release 
> on 2022-07-16
> -
>
> Key: VFS-829
> URL: https://issues.apache.org/jira/browse/VFS-829
> Project: Commons VFS
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Tobias Gierke
>Priority: Major
>
> While debugging some JVM heap dump file I noticed that it was caused by the 
> same problem I reported & provided a fix for in 
> [https://github.com/apache/commons-vfs/pull/272.]
> Those changes have already been merged into the 'master' branch on 2022-07-12
> {code:java}
> d745442a 2022-07-12 12:07:24 -0400 Gary Gregory SFTP: Memory leak because 
> AbstractFileProvider#findFileSystem.
> bd75c2cd 2022-07-12 12:05:01 -0400 Tobias Gierke SFTP: Memory leak because 
> AbstractFileProvider#findFileSystem fails to detect equality of SFTP 
> FileSystemOptions #272  
> 0737dc9b 2022-07-12 12:03:02 -0400 Gary Gregory SFTP: Memory leak because 
> AbstractFileProvider#findFileSystem fails to detect equality of SFTP 
> FileSystemOptions #272 {code}
> but for some reason never made it into the 2.9.0 release on 2022-07-16
> Are there any plans to release a 2.10.0 that includes these fixes ?
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (VFS-829) Fix for SFTP memory leak committed on 2022-07-12 missing from 2.9.0 release on 2022-07-16

2023-01-13 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/VFS-829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory resolved VFS-829.
-
Fix Version/s: 2.10.0
   Resolution: Fixed

> Fix for SFTP memory leak committed on 2022-07-12 missing from 2.9.0 release 
> on 2022-07-16
> -
>
> Key: VFS-829
> URL: https://issues.apache.org/jira/browse/VFS-829
> Project: Commons VFS
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Tobias Gierke
>Priority: Major
> Fix For: 2.10.0
>
>
> While debugging some JVM heap dump file I noticed that it was caused by the 
> same problem I reported & provided a fix for in 
> [https://github.com/apache/commons-vfs/pull/272.]
> Those changes have already been merged into the 'master' branch on 2022-07-12
> {code:java}
> d745442a 2022-07-12 12:07:24 -0400 Gary Gregory SFTP: Memory leak because 
> AbstractFileProvider#findFileSystem.
> bd75c2cd 2022-07-12 12:05:01 -0400 Tobias Gierke SFTP: Memory leak because 
> AbstractFileProvider#findFileSystem fails to detect equality of SFTP 
> FileSystemOptions #272  
> 0737dc9b 2022-07-12 12:03:02 -0400 Gary Gregory SFTP: Memory leak because 
> AbstractFileProvider#findFileSystem fails to detect equality of SFTP 
> FileSystemOptions #272 {code}
> but for some reason never made it into the 2.9.0 release on 2022-07-16
> Are there any plans to release a 2.10.0 that includes these fixes ?
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (VFS-683) Thread safety issue in VFSClassLoader - NullPointerException thrown

2023-01-13 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/VFS-683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17676792#comment-17676792
 ] 

Gary D. Gregory commented on VFS-683:
-

Except that there are failures in builds that seem random but must be related.
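
For context, the scenario in the report reduces to roughly the sketch below 
(jar path and class names are placeholders): two class loaders wrap the same 
compressed FileObject and load classes from separate threads.
{code:java}
// Rough reproduction sketch of the reported scenario; the jar path and class
// names are placeholders.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.commons.vfs2.FileObject;
import org.apache.commons.vfs2.FileSystemManager;
import org.apache.commons.vfs2.VFS;
import org.apache.commons.vfs2.impl.VFSClassLoader;

public class VfsClassLoaderRaceSketch {

    public static void main(final String[] args) throws Exception {
        final FileSystemManager manager = VFS.getManager();
        // Both class loaders are built on the same compressed resource.
        final FileObject jar = manager.resolveFile("jar:file:///tmp/example.jar");
        final VFSClassLoader first = new VFSClassLoader(jar, manager);
        final VFSClassLoader second = new VFSClassLoader(jar, manager);

        final ExecutorService pool = Executors.newFixedThreadPool(2);
        // Loading classes concurrently from both loaders is what intermittently
        // triggers "NullPointerException: Inflater has been closed" in the report.
        pool.submit(() -> first.loadClass("com.example.SomeClass"));
        pool.submit(() -> second.loadClass("com.example.OtherClass"));
        pool.shutdown();
    }
}
{code}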

> Thread safety issue in VFSClassLoader - NullPointerException thrown
> ---
>
> Key: VFS-683
> URL: https://issues.apache.org/jira/browse/VFS-683
> Project: Commons VFS
>  Issue Type: Bug
>Affects Versions: 2.2
>Reporter: Daryl Odnert
>Assignee: Gary D. Gregory
>Priority: Major
> Attachments: Main.java
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> In my application, I have two instances of the {{VFSClassLoader}}, each of 
> which is being used in a distinct thread. Both {{VFSClassLoader}} instances 
> refer to the same compressed file resource described by a {{FileObject}} that 
> is passed to the class loader's constructor. Intermittently, the application 
> throws an exception with the stack trace shown below. So, there seems to be 
> either a race condition in the code or an undocumented assumption here. If it 
> is unsupported for two {{VFSClassLoader}} instances to refer to the same 
> resource (file), then that assumption should be documented. But if that is 
> not the case, then there is a race condition bug in the implementation.
> {noformat}
> 43789 WARN  {} c.a.e.u.PreferredPathClassLoader - While loading class 
> org.apache.hive.jdbc.HiveDatabaseMetaData, rethrowing unexpected 
> java.lang.NullPointerException: Inflater has been closed
> java.lang.NullPointerException: Inflater has been closed
>   at java.util.zip.Inflater.ensureOpen(Inflater.java:389)
>   at java.util.zip.Inflater.inflate(Inflater.java:257)
>   at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:152)
>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:284)
>   at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
>   at 
> org.apache.commons.vfs2.util.MonitorInputStream.read(MonitorInputStream.java:91)
>   at org.apache.commons.vfs2.FileUtil.getContent(FileUtil.java:47)
>   at org.apache.commons.vfs2.impl.Resource.getBytes(Resource.java:102)
>   at 
> org.apache.commons.vfs2.impl.VFSClassLoader.defineClass(VFSClassLoader.java:179)
>   at 
> org.apache.commons.vfs2.impl.VFSClassLoader.findClass(VFSClassLoader.java:150)
> at 
> com.atscale.engine.utils.PreferredPathClassLoader.findClass(PreferredPathClassLoader.scala:54)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (VALIDATOR-486) the IBAN validator does not support all the IBAN supported countries

2023-01-16 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/VALIDATOR-486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17677349#comment-17677349
 ] 

Gary D. Gregory commented on VALIDATOR-486:
---

PR 67 has been merged. There is still no support for DJ and RU.
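
For reference, a minimal sketch of checking support from client code; the IBAN 
strings below are illustrative examples, not taken from this ticket.
{code:java}
// Minimal sketch; the IBAN strings are illustrative examples only.
import org.apache.commons.validator.routines.IBANValidator;

public class IbanSupportCheck {

    public static void main(final String[] args) {
        final IBANValidator validator = IBANValidator.getInstance();

        // A commonly used example IBAN for a supported country (GB).
        System.out.println(validator.isValid("GB82WEST12345698765432")); // true

        // Countries without a registered format, such as DJ and RU at the moment,
        // simply validate as false; no exception is thrown.
        System.out.println(validator.isValid("DJ2110002010010409943020008")); // false
    }
}
{code}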

> the IBAN validator does not support all the IBAN supported countries
> 
>
> Key: VALIDATOR-486
> URL: https://issues.apache.org/jira/browse/VALIDATOR-486
> Project: Commons Validator
>  Issue Type: Bug
>Affects Versions: 1.7
>Reporter: Nicola Gioia
>Priority: Major
>
> the list of supported countries for iban miss the following countries:
> |Burundi|BI|
> |Djibouti|DJ|
> |Libya|LY|
> |Russia|RU|
> |Sudan|SD|



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (COMPRESS-638) The GzipCompressorOutputStream#writeHeader method uses the ISO_8859_1 to write the file name. If the file name contains non-ISO_8859_1 characters, some unknown chara

2023-01-16 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17677398#comment-17677398
 ] 

Gary D. Gregory commented on COMPRESS-638:
--

Note: I added 
{{org.apache.commons.compress.compressors.gzip.GzipCompressorOutputStreamTest}}.

I don't think we can do anything that will be interoperable with all of the 
gzip utilities out in the world.

Even if we convert non-ISO-8859-1-encodable file names to _something_ that _we_ 
can convert back ourselves, that will look odd elsewhere.

For example, we could encode "测试中文名称.xml"/"Test Chinese name.xml" to 
"\u6D4B\u8BD5\u4E2D\u6587\u540D\u79F0.xml"

> The GzipCompressorOutputStream#writeHeader method uses the ISO_8859_1 to 
> write the file name.  If the file name contains non-ISO_8859_1 characters, 
> some unknown characters are displayed after decompression.
> --
>
> Key: COMPRESS-638
> URL: https://issues.apache.org/jira/browse/COMPRESS-638
> Project: Commons Compress
>  Issue Type: Bug
>Reporter: Radar wen
>Priority: Major
> Attachments: 0110.png
>
>
> The GzipCompressorOutputStream#writeHeader method uses the ISO_8859_1 to 
> write the file name. 
> If the file name contains non-ISO_8859_1 characters, some unknown characters 
> are displayed after decompression. !0110.png!
>  Can change the ISO_8859_1 to UTF-8? 
>         if (filename != null) {
>             out.write(filename.getBytes(ISO_8859_1));
>             out.write(0);
>         }
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (VALIDATOR-486) the IBAN validator does not support all the IBAN supported countries

2023-01-16 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/VALIDATOR-486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17677451#comment-17677451
 ] 

Gary D. Gregory commented on VALIDATOR-486:
---

bq. By the way: what about the pull requests 
https://github.com/apache/commons-validator/pull/60 and 
https://github.com/apache/commons-validator/pull/61 ?

This ticket is about IBAN; let's not mix up topics.

bq. Not for RU because of the sanctions!

You must be confused. Are you referring to a law that prevents this component 
from implementing ISO 13616?


> the IBAN validator does not support all the IBAN supported countries
> 
>
> Key: VALIDATOR-486
> URL: https://issues.apache.org/jira/browse/VALIDATOR-486
> Project: Commons Validator
>  Issue Type: Bug
>Affects Versions: 1.7
>Reporter: Nicola Gioia
>Priority: Major
>
> the list of supported countries for iban miss the following countries:
> |Burundi|BI|
> |Djibouti|DJ|
> |Libya|LY|
> |Russia|RU|
> |Sudan|SD|



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (VALIDATOR-486) the IBAN validator does not support all the IBAN supported countries

2023-01-16 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/VALIDATOR-486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory resolved VALIDATOR-486.
---
Fix Version/s: 1.8
   Resolution: Fixed

> the IBAN validator does not support all the IBAN supported countries
> 
>
> Key: VALIDATOR-486
> URL: https://issues.apache.org/jira/browse/VALIDATOR-486
> Project: Commons Validator
>  Issue Type: Bug
>Affects Versions: 1.7
>Reporter: Nicola Gioia
>Priority: Major
> Fix For: 1.8
>
>
> the list of supported countries for iban miss the following countries:
> |Burundi|BI|
> |Djibouti|DJ|
> |Libya|LY|
> |Russia|RU|
> |Sudan|SD|



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (IMAGING-343) Apache Commons Imaging 0.97 - CVE-2018-17202

2023-01-16 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/IMAGING-343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17677506#comment-17677506
 ] 

Gary D. Gregory edited comment on IMAGING-343 at 1/16/23 8:33 PM:
--

0.97-incubator users should upgrade to commons-imaging-1.0-alpha1 or later.



was (Author: garydgregory):
0.97-incubator users should upgrade to commons-imaging-1.0-alpha1


> Apache Commons Imaging 0.97 - CVE-2018-17202
> 
>
> Key: IMAGING-343
> URL: https://issues.apache.org/jira/browse/IMAGING-343
> Project: Commons Imaging
>  Issue Type: Bug
>Affects Versions: 0.97
>Reporter: Nikhil
>Priority: Major
>
> Certain input files could make the code to enter into an infinite loop when 
> Apache Sanselan 0.97-incubator was used to parse them, which could be used in 
> a DoS attack. Note that Apache Sanselan (incubating) was renamed to Apache 
> Commons Imaging.
>  
> See [https://nvd.nist.gov/vuln/detail/CVE-2018-17202] for more details.
>  
> There is Apache Commons Imaging 1.0-{*}alpha3{*} version available.. but we 
> are trying to understand if a new *GA* will be made available and also to see 
> if this specific CVE is addressed in the latest versions ?
>  
> Please help



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (IMAGING-343) Apache Commons Imaging 0.97 - CVE-2018-17202

2023-01-16 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/IMAGING-343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory resolved IMAGING-343.
-
Fix Version/s: 1.0-alpha1
   Resolution: Fixed

> Apache Commons Imaging 0.97 - CVE-2018-17202
> 
>
> Key: IMAGING-343
> URL: https://issues.apache.org/jira/browse/IMAGING-343
> Project: Commons Imaging
>  Issue Type: Bug
>Affects Versions: 0.97
>Reporter: Nikhil
>Priority: Major
> Fix For: 1.0-alpha1
>
>
> Certain input files could make the code to enter into an infinite loop when 
> Apache Sanselan 0.97-incubator was used to parse them, which could be used in 
> a DoS attack. Note that Apache Sanselan (incubating) was renamed to Apache 
> Commons Imaging.
>  
> See [https://nvd.nist.gov/vuln/detail/CVE-2018-17202] for more details.
>  
> There is Apache Commons Imaging 1.0-{*}alpha3{*} version available.. but we 
> are trying to understand if a new *GA* will be made available and also to see 
> if this specific CVE is addressed in the latest versions ?
>  
> Please help



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IMAGING-343) Apache Commons Imaging 0.97 - CVE-2018-17202

2023-01-16 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/IMAGING-343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17677506#comment-17677506
 ] 

Gary D. Gregory commented on IMAGING-343:
-

0.97-incubator users should upgrade to commons-imaging-1.0-alpha1


> Apache Commons Imaging 0.97 - CVE-2018-17202
> 
>
> Key: IMAGING-343
> URL: https://issues.apache.org/jira/browse/IMAGING-343
> Project: Commons Imaging
>  Issue Type: Bug
>Affects Versions: 0.97
>Reporter: Nikhil
>Priority: Major
>
> Certain input files could make the code to enter into an infinite loop when 
> Apache Sanselan 0.97-incubator was used to parse them, which could be used in 
> a DoS attack. Note that Apache Sanselan (incubating) was renamed to Apache 
> Commons Imaging.
>  
> See [https://nvd.nist.gov/vuln/detail/CVE-2018-17202] for more details.
>  
> There is Apache Commons Imaging 1.0-{*}alpha3{*} version available.. but we 
> are trying to understand if a new *GA* will be made available and also to see 
> if this specific CVE is addressed in the latest versions ?
>  
> Please help



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (VFS-683) Thread safety issue in VFSClassLoader - NullPointerException thrown

2023-01-18 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/VFS-683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory resolved VFS-683.
-
Fix Version/s: 2.10.0
   Resolution: Fixed

> Thread safety issue in VFSClassLoader - NullPointerException thrown
> ---
>
> Key: VFS-683
> URL: https://issues.apache.org/jira/browse/VFS-683
> Project: Commons VFS
>  Issue Type: Bug
>Affects Versions: 2.2
>Reporter: Daryl Odnert
>Assignee: Gary D. Gregory
>Priority: Major
> Fix For: 2.10.0
>
> Attachments: Main.java
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> In my application, I have two instances of the {{VFSClassLoader}}, each of 
> which is being used in a distinct thread. Both {{VFSClassLoader}} instances 
> refer to the same compressed file resource described by a {{FileObject}} that 
> is passed to the class loader's constructor. Intermittently, the application 
> throws an exception with the stack trace shown below. So, there seems to be 
> either a race condition in the code or an undocumented assumption here. If it 
> is unsupported for two {{VFSClassLoader}} instances to refer to the same 
> resource (file), then that assumption should be documented. But if that is 
> not the case, then there is a race condition bug in the implementation.
> {noformat}
> 43789 WARN  {} c.a.e.u.PreferredPathClassLoader - While loading class 
> org.apache.hive.jdbc.HiveDatabaseMetaData, rethrowing unexpected 
> java.lang.NullPointerException: Inflater has been closed
> java.lang.NullPointerException: Inflater has been closed
>   at java.util.zip.Inflater.ensureOpen(Inflater.java:389)
>   at java.util.zip.Inflater.inflate(Inflater.java:257)
>   at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:152)
>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:284)
>   at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
>   at 
> org.apache.commons.vfs2.util.MonitorInputStream.read(MonitorInputStream.java:91)
>   at org.apache.commons.vfs2.FileUtil.getContent(FileUtil.java:47)
>   at org.apache.commons.vfs2.impl.Resource.getBytes(Resource.java:102)
>   at 
> org.apache.commons.vfs2.impl.VFSClassLoader.defineClass(VFSClassLoader.java:179)
>   at 
> org.apache.commons.vfs2.impl.VFSClassLoader.findClass(VFSClassLoader.java:150)
> at 
> com.atscale.engine.utils.PreferredPathClassLoader.findClass(PreferredPathClassLoader.scala:54)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IO-785) FileUtils.deleteDirectory fails to delete directory on Azure AKS

2023-01-18 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/IO-785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17678361#comment-17678361
 ] 

Gary D. Gregory commented on IO-785:


Hello [~iloncar] 
Please update to 2.11.0.

> FileUtils.deleteDirectory fails to delete directory on Azure AKS 
> -
>
> Key: IO-785
> URL: https://issues.apache.org/jira/browse/IO-785
> Project: Commons IO
>  Issue Type: Bug
>  Components: Utilities
>Affects Versions: 2.9.0
> Environment: Azure Files Container Storage Interface (CSI) driver in 
> Azure Kubernetes Service (AKS)
> apiVersion: storage.k8s.io/v1
> kind: StorageClass
> metadata:
>   annotations:
> kubectl.kubernetes.io/last-applied-configuration: |
>   
> \{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"azure-aks-test-cluster-file-storage-class"},"mountOptions":["dir_mode=0777","file_mode=0777","uid=0","gid=0","mfsymlinks","cache=strict","actimeo=30"],"provisioner":"kubernetes.io/azure-file"}
> storageclass.kubernetes.io/is-default-class: "false"
>   creationTimestamp: "2022-01-01T0-00:00:00Z"
>   name: azure-aks-test-cluster-file-storage-class
>   resourceVersion: "12768518"
>   uid: bc6-invalid-8c
> mountOptions:
> - dir_mode=0777
> - file_mode=0777
> - uid=0
> - gid=0
> - mfsymlinks
> - cache=strict
> - actimeo=30
> provisioner: kubernetes.io/azure-file
> reclaimPolicy: Delete
> volumeBindingMode: Immediate
>Reporter: Ivica Loncar
>Priority: Major
>
> On Azure AKS file persistent volume 
> (https://learn.microsoft.com/en-us/azure/aks/azure-files-csi) we've got 
> following exception:
>  
> org.apache.commons.io.IOExceptionList: 
> work/bef4a1a575c54ac099816b6babf4bde9/job-3418
>   at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:330)
>   at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1191)
>   at 
> com.xebialabs.xlrelease.remote.executor.k8s.KubeService.cleanWorkDir(KubeService.scala:107)
>   at 
> com.xebialabs.xlrelease.remote.executor.k8s.KubeJobExecutorService.cleanup(KubeJobExecutorService.scala:27)
>   at 
> com.xebialabs.xlrelease.remote.runner.JobRunnerActor.$anonfun$handleEvent$4(JobRunnerActor.scala:219)
>   at 
> com.xebialabs.xlrelease.remote.runner.JobRunnerActor.$anonfun$handleEvent$4$adapted(JobRunnerActor.scala:218)
>   at scala.Option.foreach(Option.scala:437)
>   at 
> com.xebialabs.xlrelease.remote.runner.JobRunnerActor.com$xebialabs$xlrelease$remote$runner$JobRunnerActor$$handleEvent(JobRunnerActor.scala:218)
>   at 
> com.xebialabs.xlrelease.remote.runner.JobRunnerActor$$anonfun$receiveRecover$1.$anonfun$applyOrElse$2(JobRunnerActor.scala:45)
>   at 
> com.xebialabs.xlrelease.remote.runner.JobRunnerActor$$anonfun$receiveRecover$1.$anonfun$applyOrElse$2$adapted(JobRunnerActor.scala:45)
>   at scala.Option.foreach(Option.scala:437)
>   at 
> com.xebialabs.xlrelease.remote.runner.JobRunnerActor$$anonfun$receiveRecover$1.applyOrElse(JobRunnerActor.scala:45)
>   at 
> scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:35)
>   at akka.event.LoggingReceive.apply(LoggingReceive.scala:96)
>   at akka.event.LoggingReceive.apply(LoggingReceive.scala:70)
>   at 
> akka.persistence.Eventsourced$$anon$2$$anonfun$1.applyOrElse(Eventsourced.scala:643)
>   at akka.actor.Actor.aroundReceive(Actor.scala:537)
>   at akka.actor.Actor.aroundReceive$(Actor.scala:535)
>   at 
> com.xebialabs.xlrelease.remote.runner.JobRunnerActor.akka$persistence$Eventsourced$$super$aroundReceive(JobRunnerActor.scala:22)
>   at 
> akka.persistence.Eventsourced$$anon$3.stateReceive(Eventsourced.scala:771)
>   at akka.persistence.Eventsourced.aroundReceive(Eventsourced.scala:245)
>   at akka.persistence.Eventsourced.aroundReceive$(Eventsourced.scala:244)
>   at 
> com.xebialabs.xlrelease.remote.runner.JobRunnerActor.aroundReceive(JobRunnerActor.scala:22)
>   at akka.actor.ActorCell.receiveMessage(ActorCell.scala:579)
>   at akka.actor.ActorCell.invoke(ActorCell.scala:547)
>   at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270)
>   at akka.dispatch.Mailbox.run(Mailbox.scala:231)
>   at akka.dispatch.Mailbox.exec(Mailbox.scala:243)
>   at java.base/java.util.concurrent.ForkJoinTask.doExec(Unknown Source)
>   at 
> java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(Unknown 
> Source)
>   at java.base/java.util.concurrent.ForkJoinPool.scan(Unknown Source)
>   at java.base/java.util.concurrent.ForkJoinPool.runWorker(Unknown Source)
>   at java.base/java.util.concurrent.ForkJoinWorkerThread.run(Unknown 
> Source)
> Caused by: java.io.IOException: Cannot delete file: 
> work/bef4a1a575c54ac099816b6babf4bd

[jira] [Resolved] (IO-784) Add support for Appendable to HexDump util

2023-01-18 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/IO-784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory resolved IO-784.

Fix Version/s: 2.12.0
   Resolution: Fixed

> Add support for Appendable to HexDump util
> --
>
> Key: IO-784
> URL: https://issues.apache.org/jira/browse/IO-784
> Project: Commons IO
>  Issue Type: Improvement
>  Components: Utilities
>Affects Versions: 2.11.0
>Reporter: Fredrik Kjellberg
>Priority: Major
> Fix For: 2.12.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> The HexDump utility currently only supports to send the output of the hex 
> dump to an OutputStream. This makes it cumbersome if you want to e.g. output 
> the dump to System.out.
> The HexDump utility should support to send the output to an `Appendable` 
> making it possible to use e.g. `System.out` or `StringBuilder` as output 
> targets.
> Created PR with a proposed fix: 
> [https://github.com/apache/commons-io/pull/418]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (VFS-830) SFTP - moveto() throws FileSystemException: Could not set the last modified timestamp

2023-01-19 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/VFS-830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17678639#comment-17678639
 ] 

Gary D. Gregory commented on VFS-830:
-

Hi [~salexander]
The best path forward would be to add a failing unit test to our SFTP tests in 
a PR on GitHub.

> SFTP - moveto() throws FileSystemException: Could not set the last modified 
> timestamp
> -
>
> Key: VFS-830
> URL: https://issues.apache.org/jira/browse/VFS-830
> Project: Commons VFS
>  Issue Type: Bug
>Affects Versions: 2.9.0
> Environment: RHEL Linux server connecting to AWS Transfer SFTP Server
>Reporter: Simon Alexander
>Priority: Minor
>
> I am uploading a file via a temp file, by using the following code:
>  
> {code:java}
> FileSystemOptions opts = createDefaultOptions();
> BytesIdentityInfo identityInfo = new BytesIdentityInfo(sshKey.getBytes(), 
> null);
> SftpFileSystemConfigBuilder.getInstance().setIdentityProvider(opts, 
> identityInfo);
> SftpFileSystemConfigBuilder.getInstance().setUserDirIsRoot(opts, false);
> SftpFileSystemConfigBuilder.getInstance().setSessionTimeout(opts, 
> Duration.ofMillis(1)); 
> SftpFileSystemConfigBuilder.getInstance().setDisableDetectExecChannel(opts, 
> true);
> // Create temp remote file object
> String tempFilePath = remoteFolder + FilePathSeparator + tempFileName;
> tempFileObject = remoteManager.resolveFile(new 
> URI("sftp",server.getServerInfo(),server.HostName,server.Port,tempFilePath,null,null).toString(),
>  opts);
> tempFileObject.copyFrom(sourceFileObject, Selectors.SELECT_SELF);
> // rename to the correct name 
> tempFileObject.moveTo(remoteFileObject);} {code}
> In this code, `sourceFileObject` is on a remote linux server; and 
> `tempFileObject` and `remoteFileObject` are on the AWS SFTP Transfer server.
> The exec channel is disabled on the server, so I've disabled its use here.
> When I run this code, the creation of the temp file runs successfully (using 
> `copyFrom()`), but then the `moveTo()` call fails with the following 
> exception:
> *java.io.IOException: copyFileBetweenServersUsingTempFile() - Could not set 
> the last modified timestamp of "testRemoteFileName.txt"*
>  
> I was trying to understand why the moveTo() call would fail in this way, so I 
> started digging into the Apache code. As far as I can see, the call to 
> `setLastModifiedTime()` only happens if the code thinks that the source and 
> target filesystems are different - see [commons-vfs/AbstractFileObject.java 
> at 83514069293cbf80644f1d47dd3eceaaf4e6954b · apache/commons-vfs · 
> GitHub|https://github.com/apache/commons-vfs/blob/83514069293cbf80644f1d47dd3eceaaf4e6954b/commons-vfs2/src/main/java/org/apache/commons/vfs2/provider/AbstractFileObject.java#L1726]
> {code:java}
> if (fileSystem == newfile.getFileSystem()) // canRenameTo()
> {
>  ...
> }
> else
> {
>  ...
> destFile.getContent().setLastModifiedTime(this.getContent().getLastModifiedTime());
> } {code}
> The issue, I think, is the `==` in the canRenameTo() method - because I am 
> actually moving from the temp file to the final file on the same file system, 
> which means this should be returning true not false, right? presumably we 
> should be using `.equals()` here, and overriding equals in the appropriate 
> type of `FileSystem` object?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (VFS-830) SFTP - moveto() throws FileSystemException: Could not set the last modified timestamp

2023-01-19 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/VFS-830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17678704#comment-17678704
 ] 

Gary D. Gregory commented on VFS-830:
-

The best thing to do IMO is to create a new test class specific to your use 
case. What you found is older code that, IIRC, was based on JUnit 3. You'll 
want your new test to use JUnit 5. Unfortunately, the tests have not all been 
updated to JUnit 5, so they can be hard to follow.
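
As a starting point, a bare-bones JUnit 5 skeleton could look like the sketch 
below; class and method names are placeholders, and the local file system stands 
in for SFTP, so a real test would plug into the project's SFTP test fixtures.
{code:java}
// Bare-bones JUnit 5 skeleton; names are placeholders and the local file system
// stands in for SFTP, a real test would use the project's SFTP test fixtures.
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

import org.apache.commons.vfs2.FileObject;
import org.apache.commons.vfs2.FileSystemManager;
import org.apache.commons.vfs2.VFS;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.io.TempDir;

public class MoveToSameFileSystemTest {

    @Test
    void moveToOnSameFileSystemShouldNotFail(@TempDir final Path tempDir) throws Exception {
        final FileSystemManager manager = VFS.getManager();
        final Path temp = Files.write(tempDir.resolve("upload.tmp"), "data".getBytes(StandardCharsets.UTF_8));

        final FileObject source = manager.toFileObject(temp.toFile());
        final FileObject target = manager.toFileObject(tempDir.resolve("final.txt").toFile());

        // moveTo() between two FileObjects on the same file system should take the
        // rename path rather than copy-and-set-last-modified.
        source.moveTo(target);

        assertTrue(target.exists());
    }
}
{code}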

> SFTP - moveto() throws FileSystemException: Could not set the last modified 
> timestamp
> -
>
> Key: VFS-830
> URL: https://issues.apache.org/jira/browse/VFS-830
> Project: Commons VFS
>  Issue Type: Bug
>Affects Versions: 2.9.0
> Environment: RHEL Linux server connecting to AWS Transfer SFTP Server
>Reporter: Simon Alexander
>Priority: Minor
>
> I am uploading a file via a temp file, by using the following code:
>  
> {code:java}
> FileSystemOptions opts = createDefaultOptions();
> BytesIdentityInfo identityInfo = new BytesIdentityInfo(sshKey.getBytes(), 
> null);
> SftpFileSystemConfigBuilder.getInstance().setIdentityProvider(opts, 
> identityInfo);
> SftpFileSystemConfigBuilder.getInstance().setUserDirIsRoot(opts, false);
> SftpFileSystemConfigBuilder.getInstance().setSessionTimeout(opts, 
> Duration.ofMillis(1)); 
> SftpFileSystemConfigBuilder.getInstance().setDisableDetectExecChannel(opts, 
> true);
> // Create temp remote file object
> String tempFilePath = remoteFolder + FilePathSeparator + tempFileName;
> tempFileObject = remoteManager.resolveFile(new 
> URI("sftp",server.getServerInfo(),server.HostName,server.Port,tempFilePath,null,null).toString(),
>  opts);
> tempFileObject.copyFrom(sourceFileObject, Selectors.SELECT_SELF);
> // rename to the correct name 
> tempFileObject.moveTo(remoteFileObject);} {code}
> In this code, `sourceFileObject` is on a remote linux server; and 
> `tempFileObject` and `remoteFileObject` are on the AWS SFTP Transfer server.
> The exec channel is disabled on the server, so I've disabled its use here.
> When I run this code, the creation of the temp file runs successfully (using 
> `copyFrom()`), but then the `moveTo()` call fails with the following 
> exception:
> *java.io.IOException: copyFileBetweenServersUsingTempFile() - Could not set 
> the last modified timestamp of "testRemoteFileName.txt"*
>  
> I was trying to understand why the moveTo() call would fail in this way, so I 
> started digging into the Apache code. As far as I can see, the call to 
> `setLastModifiedTime()` only happens if the code thinks that the source and 
> target filesystems are different - see [commons-vfs/AbstractFileObject.java 
> at 83514069293cbf80644f1d47dd3eceaaf4e6954b · apache/commons-vfs · 
> GitHub|https://github.com/apache/commons-vfs/blob/83514069293cbf80644f1d47dd3eceaaf4e6954b/commons-vfs2/src/main/java/org/apache/commons/vfs2/provider/AbstractFileObject.java#L1726]
> {code:java}
> if (fileSystem == newfile.getFileSystem()) // canRenameTo()
> {
>  ...
> }
> else
> {
>  ...
> destFile.getContent().setLastModifiedTime(this.getContent().getLastModifiedTime());
> } {code}
> The issue, I think, is the `==` in the canRenameTo() method - because I am 
> actually moving from the temp file to the final file on the same file system, 
> which means this should be returning true not false, right? presumably we 
> should be using `.equals()` here, and overriding equals in the appropriate 
> type of `FileSystem` object?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IO-552) FilenameUtils.concat fails if second argument (fullFilenameToAdd) starts with '~' (tilde)

2023-01-19 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/IO-552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17678714#comment-17678714
 ] 

Gary D. Gregory commented on IO-552:


Note that ~ is a legal file name character on Windows.
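
For reference, the call from the report is easy to replay; the result noted in 
the comment below is the one stated in the ticket, not re-verified here.
{code:java}
// Replays the call from the report; the result noted below is the one stated
// in the ticket, not re-verified here.
import org.apache.commons.io.FilenameUtils;

public class ConcatTildeCheck {

    public static void main(final String[] args) {
        // Reported to return "~abc.txt/" instead of "c:/temp/~abc.txt", because
        // FilenameUtils treats a leading '~' as a home-directory prefix and so
        // considers the second argument absolute.
        System.out.println(FilenameUtils.concat("c:/temp", "~abc.txt"));
    }
}
{code}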

> FilenameUtils.concat fails if second argument (fullFilenameToAdd) starts with 
> '~' (tilde)
> -
>
> Key: IO-552
> URL: https://issues.apache.org/jira/browse/IO-552
> Project: Commons IO
>  Issue Type: Bug
>  Components: Utilities
>Affects Versions: 2.2, 2.5
> Environment: Windows 7 64bit, JavaVM 1.8 32bit
>Reporter: Jochen Tümmers
>Priority: Critical
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> {{FilenameUtils.concat("c:/temp", "~abc.txt") returns "~abc.txt/" instead of 
> "c:/temp/~abc.txt".}}
> As a result, the file would be created in the user's home directory instead 
> of c:/temp.
> (Note: I Had to replace all instances of double backslashes that would 
> normally appear in the java code with forward slashes as the editor cannot 
> handle backslashes properly.)
> commons io 2.2. and 2.5 behave the same. 2.3 and 2.4 not tested.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (COMPRESS-638) The GzipCompressorOutputStream#writeHeader method uses the ISO_8859_1 to write the file name. If the file name contains non-ISO_8859_1 characters, some unknown chara

2023-01-20 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17679127#comment-17679127
 ] 

Gary D. Gregory commented on COMPRESS-638:
--

No further comments?
I might implement _something_, or not, unless someone pipes up, so that at 
least we get legal characters instead of junk.

> The GzipCompressorOutputStream#writeHeader method uses the ISO_8859_1 to 
> write the file name.  If the file name contains non-ISO_8859_1 characters, 
> some unknown characters are displayed after decompression.
> --
>
> Key: COMPRESS-638
> URL: https://issues.apache.org/jira/browse/COMPRESS-638
> Project: Commons Compress
>  Issue Type: Bug
>Reporter: Radar wen
>Priority: Major
> Attachments: 0110.png
>
>
> The GzipCompressorOutputStream#writeHeader method uses the ISO_8859_1 to 
> write the file name. 
> If the file name contains non-ISO_8859_1 characters, some unknown characters 
> are displayed after decompression. !0110.png!
>  Can change the ISO_8859_1 to UTF-8? 
>         if (filename != null) {
>             out.write(filename.getBytes(ISO_8859_1));
>             out.write(0);
>         }
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (DBCP-572) timed out connections remain active in the pool

2023-01-20 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/DBCP-572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory resolved DBCP-572.
--
Resolution: Information Provided

> timed out connections remain active in the pool
> ---
>
> Key: DBCP-572
> URL: https://issues.apache.org/jira/browse/DBCP-572
> Project: Commons DBCP
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Erol Guven
>Priority: Major
>
> when the database does not respond within a time-out period, the connection 
> pool seems to still keep the connection in its pool and consider it active. 
> These active connections are never closed and seems to be kept in the pool 
> forever.
> To reproduce:
>  * create a BasicDataSource
>  * suspend the database process using {{kill -STOP}} signal
>  * get a connection from multiple threads simultaneously which will timeout
>  * inspect org.apache.commons.pool2.impl.GenericObjectPool.getNumActive()
> The active count is equivalent to the number of timed out connections.
> The active count never goes down.
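
The reproduction steps amount to roughly the following sketch; the JDBC URL and 
credentials are placeholders.
{code:java}
// Reproduction sketch of the steps above; JDBC URL and credentials are placeholders.
import java.sql.Connection;

import org.apache.commons.dbcp2.BasicDataSource;

public class TimedOutConnectionsCheck {

    public static void main(final String[] args) throws Exception {
        final BasicDataSource ds = new BasicDataSource();
        ds.setUrl("jdbc:postgresql://localhost:5432/test"); // placeholder
        ds.setUsername("user");                             // placeholder
        ds.setPassword("secret");                           // placeholder
        ds.setMaxWaitMillis(2_000); // limit how long a borrow waits for a connection

        // Suspend the database process (kill -STOP), then borrow from several
        // threads so that each attempt times out.
        for (int i = 0; i < 4; i++) {
            new Thread(() -> {
                try (Connection ignored = ds.getConnection()) {
                    // not reached while the database is suspended
                } catch (final Exception expectedTimeout) {
                    // the timed-out borrow is the interesting part
                }
            }).start();
        }

        Thread.sleep(5_000);
        // Per the report, this count matches the timed-out borrows and never drops.
        System.out.println("active: " + ds.getNumActive());
    }
}
{code}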



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (DBCP-590) BasicDataSource#setAbandonedUsageTracking has no effect

2023-01-20 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/DBCP-590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17679153#comment-17679153
 ] 

Gary D. Gregory commented on DBCP-590:
--

Would reimplementing the method as below make sense (all tests pass), or is 
there a cleaner implementation you can think of?
{code:java}
protected DataSource createDataSourceInstance() throws SQLException {
    if (!getAbandonedUsageTracking()) {
        final PoolingDataSource pds = new PoolingDataSource<>(connectionPool);
        pds.setAccessToUnderlyingConnectionAllowed(isAccessToUnderlyingConnectionAllowed());
        return pds;
    }
    // Workaround for https://issues.apache.org/jira/browse/DBCP-590
    final GenericObjectPool connectionPool = getConnectionPool();
    final JdkProxySource jdkProxySource = new JdkProxySource<>(getClass().getClassLoader(), new Class[] { Connection.class });
    final ProxiedObjectPool proxiedConnectionPool = new ProxiedObjectPool<>(connectionPool, jdkProxySource);
    final PoolingDataSource poolingDataSource = new PoolingDataSource<>(proxiedConnectionPool);
    poolingDataSource.setAccessToUnderlyingConnectionAllowed(isAccessToUnderlyingConnectionAllowed());
    return poolingDataSource;
}
{code}

> BasicDataSource#setAbandonedUsageTracking has no effect
> ---
>
> Key: DBCP-590
> URL: https://issues.apache.org/jira/browse/DBCP-590
> Project: Commons DBCP
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Réda Housni Alaoui
>Priority: Major
>
> Passing {{true}} to {{BasicDataSource#setAbandonedUsageTracking(boolean 
> usageTracking)}} has no effect because {{UsageTracking#use}} is never called.
> From what I found, {{usageTracking}} can only work if the object pool is of 
> type {{ProxiedObjectPool}} . Alas, BasicDataSource enforces 
> {{GenericObjectPool}} concrete type preventing us from overriding 
> {{BasicDataSource#createObjectPool}} to return a {{ProxiedObjectPool}} .
> Is there something I missed or a workaround?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (DBCP-590) BasicDataSource#setAbandonedUsageTracking has no effect

2023-01-20 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/DBCP-590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17679153#comment-17679153
 ] 

Gary D. Gregory edited comment on DBCP-590 at 1/20/23 1:54 PM:
---

[~psteitz] 

Would reimplementing the method as below make sense (all tests pass), or is 
there a cleaner implementation you can think of?
{code:java}
protected DataSource createDataSourceInstance() throws SQLException {
    if (!getAbandonedUsageTracking()) {
        final PoolingDataSource pds = new PoolingDataSource<>(connectionPool);
        pds.setAccessToUnderlyingConnectionAllowed(isAccessToUnderlyingConnectionAllowed());
        return pds;
    }
    // Workaround for https://issues.apache.org/jira/browse/DBCP-590
    final GenericObjectPool connectionPool = getConnectionPool();
    final JdkProxySource jdkProxySource = new JdkProxySource<>(getClass().getClassLoader(), new Class[] { Connection.class });
    final ProxiedObjectPool proxiedConnectionPool = new ProxiedObjectPool<>(connectionPool, jdkProxySource);
    final PoolingDataSource poolingDataSource = new PoolingDataSource<>(proxiedConnectionPool);
    poolingDataSource.setAccessToUnderlyingConnectionAllowed(isAccessToUnderlyingConnectionAllowed());
    return poolingDataSource;
}
{code}


was (Author: garydgregory):
Would reimplementing the method as below make sense (all tests pass), or, is 
there an implementation that would be cleaner that you can think of?
{code:java}
protected DataSource createDataSourceInstance() throws SQLException {
if (!getAbandonedUsageTracking()) {
final PoolingDataSource pds = new 
PoolingDataSource<>(connectionPool);

pds.setAccessToUnderlyingConnectionAllowed(isAccessToUnderlyingConnectionAllowed());
return pds;
}
// Workaround for https://issues.apache.org/jira/browse/DBCP-590
final GenericObjectPool connectionPool = 
getConnectionPool();
final JdkProxySource jdkProxySource = new 
JdkProxySource<>(getClass().getClassLoader(), new Class[] { Connection.class });
final ProxiedObjectPool proxiedConnectionPool = new 
ProxiedObjectPool<>(connectionPool, jdkProxySource);
final PoolingDataSource poolingDataSource = new 
PoolingDataSource<>(proxiedConnectionPool);

poolingDataSource.setAccessToUnderlyingConnectionAllowed(isAccessToUnderlyingConnectionAllowed());
return poolingDataSource;
}
{code}

> BasicDataSource#setAbandonedUsageTracking has no effect
> ---
>
> Key: DBCP-590
> URL: https://issues.apache.org/jira/browse/DBCP-590
> Project: Commons DBCP
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Réda Housni Alaoui
>Priority: Major
>
> Passing {{true}} to {{BasicDataSource#setAbandonedUsageTracking(boolean 
> usageTracking)}} has no effect because {{UsageTracking#use}} is never called.
> From what I found, {{usageTracking}} can only work if the object pool is of 
> type {{ProxiedObjectPool}} . Alas, BasicDataSource enforces 
> {{GenericObjectPool}} concrete type preventing us from overriding 
> {{BasicDataSource#createObjectPool}} to return a {{ProxiedObjectPool}} .
> Is there something I missed or a workaround?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (DBCP-590) BasicDataSource#setAbandonedUsageTracking has no effect

2023-01-20 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/DBCP-590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17679153#comment-17679153
 ] 

Gary D. Gregory edited comment on DBCP-590 at 1/20/23 1:55 PM:
---

[~psteitz] 

Would reimplementing the BasicDataSource method as below make sense (all tests 
pass), or is there a cleaner implementation you can think of?
{code:java}
protected DataSource createDataSourceInstance() throws SQLException {
    if (!getAbandonedUsageTracking()) {
        final PoolingDataSource pds = new PoolingDataSource<>(connectionPool);
        pds.setAccessToUnderlyingConnectionAllowed(isAccessToUnderlyingConnectionAllowed());
        return pds;
    }
    // Workaround for https://issues.apache.org/jira/browse/DBCP-590
    final GenericObjectPool connectionPool = getConnectionPool();
    final JdkProxySource jdkProxySource = new JdkProxySource<>(getClass().getClassLoader(), new Class[] { Connection.class });
    final ProxiedObjectPool proxiedConnectionPool = new ProxiedObjectPool<>(connectionPool, jdkProxySource);
    final PoolingDataSource poolingDataSource = new PoolingDataSource<>(proxiedConnectionPool);
    poolingDataSource.setAccessToUnderlyingConnectionAllowed(isAccessToUnderlyingConnectionAllowed());
    return poolingDataSource;
}
{code}


was (Author: garydgregory):
[~psteitz] 

Would reimplementing the method as below make sense (all tests pass), or, is 
there an implementation that would be cleaner that you can think of?
{code:java}
protected DataSource createDataSourceInstance() throws SQLException {
if (!getAbandonedUsageTracking()) {
final PoolingDataSource pds = new 
PoolingDataSource<>(connectionPool);

pds.setAccessToUnderlyingConnectionAllowed(isAccessToUnderlyingConnectionAllowed());
return pds;
}
// Workaround for https://issues.apache.org/jira/browse/DBCP-590
final GenericObjectPool connectionPool = 
getConnectionPool();
final JdkProxySource jdkProxySource = new 
JdkProxySource<>(getClass().getClassLoader(), new Class[] { Connection.class });
final ProxiedObjectPool proxiedConnectionPool = new 
ProxiedObjectPool<>(connectionPool, jdkProxySource);
final PoolingDataSource poolingDataSource = new 
PoolingDataSource<>(proxiedConnectionPool);

poolingDataSource.setAccessToUnderlyingConnectionAllowed(isAccessToUnderlyingConnectionAllowed());
return poolingDataSource;
}
{code}

> BasicDataSource#setAbandonedUsageTracking has no effect
> ---
>
> Key: DBCP-590
> URL: https://issues.apache.org/jira/browse/DBCP-590
> Project: Commons DBCP
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Réda Housni Alaoui
>Priority: Major
>
> Passing {{true}} to {{BasicDataSource#setAbandonedUsageTracking(boolean 
> usageTracking)}} has no effect because {{UsageTracking#use}} is never called.
> From what I found, {{usageTracking}} can only work if the object pool is of 
> type {{ProxiedObjectPool}} . Alas, BasicDataSource enforces 
> {{GenericObjectPool}} concrete type preventing us from overriding 
> {{BasicDataSource#createObjectPool}} to return a {{ProxiedObjectPool}} .
> Is there something I missed or a workaround?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (DBCP-570) Oracle transaction issue

2023-01-20 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/DBCP-570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory closed DBCP-570.

Resolution: Information Provided

> Oracle transaction issue
> 
>
> Key: DBCP-570
> URL: https://issues.apache.org/jira/browse/DBCP-570
> Project: Commons DBCP
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Aaron Ogburn
>Priority: Major
> Attachments: test.zip
>
>
> A failure can be seen when using two connections from different DBCP pools to 
> access Oracle in a single transaction. The pools must access the same 
> database server/SID but the users must be different. In such cases, Oracle 
> has issues that result in a failure at the point of connection enlistment:
> {code:java}
>  ... WARN [main] sun.reflect.NativeMethodAccessorImpl.invoke0 ARJUNA016089: 
> TransactionImple.enlistResource - xa_start - caught: XAException.XAER_RMERR 
> for < formatId=131077, gtrid_length=29, bqual_length=36, 
> tx_uid=0:0a000264:a0d3:5fdbca1d:a, node_name=1, 
> branch_uid=0:0a000264:a0d3:5fdbca1d:c, subordinatenodename=null, 
> eis_name=0 >
>  oracle.jdbc.xa.OracleXAException: XAErr (-3): A resource manager error has 
> occured in the transaction branch. ORA-24774 SQLErr (0)
>  at oracle.jdbc.xa.OracleXAResource.checkError(OracleXAResource.java:1092)
>  at oracle.jdbc.xa.client.OracleXAResource.start(OracleXAResource.java:272)
>  at 
> com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionImple.enlistResource(TransactionImple.java:741)
>  at 
> com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionImple.enlistResource(TransactionImple.java:423)
>  at 
> org.apache.tomcat.dbcp.dbcp2.managed.TransactionContext.setSharedConnection(TransactionContext.java:109)
>  at 
> org.apache.tomcat.dbcp.dbcp2.managed.ManagedConnection.updateTransactionStatus(ManagedConnection.java:157)
>  at 
> org.apache.tomcat.dbcp.dbcp2.managed.ManagedConnection.(ManagedConnection.java:75)
>  at 
> org.apache.tomcat.dbcp.dbcp2.managed.ManagedDataSource.getConnection(ManagedDataSource.java:80)
>  at 
> org.apache.tomcat.dbcp.dbcp2.BasicDataSource.getConnection(BasicDataSource.java:1563)
>  at 
> support.jboss.web.TransactionTest.executeSQL(TransactionTest.java:84){code}
>  We also verified that the explanation/presumed cause is correct by using a 
> Byteman based workaround (which is probably *too* aggressive for some cases 
> but does work around the problem scenario) that simulates what is done by 
> IronJacamar and what is suggested/discussed in [1] - i.e. force the 
> OracleXAResource "true" return for isSameRM to be "false" (instead) via a 
> proxy for the OracleXAResource implementation.
> {code:java}
> RULE oracle.jdbc.xa.OracleXAResource.isSameRM.FALSE
> CLASS oracle.jdbc.xa.OracleXAResource
> METHOD isSameRM
> AT ENTRY
> IF true
> DO System.err.println("[BMAN] Overriding Oracle isSameRM ...");
>  return false;
> ENDRULE{code}
> Is it possible for DBCP to better handle this Oracle specific scenario?
> [1] 
> [http://www.tomee-openejb.979440.n4.nabble.com/Oracle-XA-with-different-credentials-fails-td4680456.html]
> [2] 
> [https://community.oracle.com/tech/developers/discussion/1058320/ora-24774-why-does-xa-start-fails-for-2-txn-branches-on-same-instance]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (VFS-830) SFTP - moveto() throws FileSystemException: Could not set the last modified timestamp

2023-01-20 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/VFS-830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17679305#comment-17679305
 ] 

Gary D. Gregory commented on VFS-830:
-

If you can provide a PR, I can review it over the weekend; otherwise, it will 
be on my back-back-burner.

> SFTP - moveto() throws FileSystemException: Could not set the last modified 
> timestamp
> -
>
> Key: VFS-830
> URL: https://issues.apache.org/jira/browse/VFS-830
> Project: Commons VFS
>  Issue Type: Bug
>Affects Versions: 2.9.0
> Environment: RHEL Linux server connecting to AWS Transfer SFTP Server
>Reporter: Simon Alexander
>Priority: Minor
>
> I am uploading a file via a temp file, by using the following code:
>  
> {code:java}
> FileSystemOptions opts = createDefaultOptions();
> BytesIdentityInfo identityInfo = new BytesIdentityInfo(sshKey.getBytes(), 
> null);
> SftpFileSystemConfigBuilder.getInstance().setIdentityProvider(opts, 
> identityInfo);
> SftpFileSystemConfigBuilder.getInstance().setUserDirIsRoot(opts, false);
> SftpFileSystemConfigBuilder.getInstance().setSessionTimeout(opts, 
> Duration.ofMillis(1)); 
> SftpFileSystemConfigBuilder.getInstance().setDisableDetectExecChannel(opts, 
> true);
> // Create temp remote file object
> String tempFilePath = remoteFolder + FilePathSeparator + tempFileName;
> tempFileObject = remoteManager.resolveFile(new 
> URI("sftp",server.getServerInfo(),server.HostName,server.Port,tempFilePath,null,null).toString(),
>  opts);
> tempFileObject.copyFrom(sourceFileObject, Selectors.SELECT_SELF);
> // rename to the correct name 
> tempFileObject.moveTo(remoteFileObject);} {code}
> In this code, `sourceFileObject` is on a remote linux server; and 
> `tempFileObject` and `remoteFileObject` are on the AWS SFTP Transfer server.
> The exec channel is disabled on the server, so I've disabled its use here.
> When I run this code, the creation of the temp file runs successfully (using 
> `copyFrom()`), but then the `moveTo()` call fails with the following 
> exception:
> *java.io.IOException: copyFileBetweenServersUsingTempFile() - Could not set 
> the last modified timestamp of "testRemoteFileName.txt"*
>  
> I was trying to understand why the moveTo() call would fail in this way, so I 
> started digging into the Apache code. As far as I can see, the call to 
> `setLastModifiedTime()` only happens if the code thinks that the source and 
> target filesystems are different - see [commons-vfs/AbstractFileObject.java 
> at 83514069293cbf80644f1d47dd3eceaaf4e6954b · apache/commons-vfs · 
> GitHub|https://github.com/apache/commons-vfs/blob/83514069293cbf80644f1d47dd3eceaaf4e6954b/commons-vfs2/src/main/java/org/apache/commons/vfs2/provider/AbstractFileObject.java#L1726]
> {code:java}
> if (fileSystem == newfile.getFileSystem()) // canRenameTo()
> {
>  ...
> }
> else
> {
>  ...
> destFile.getContent().setLastModifiedTime(this.getContent().getLastModifiedTime());
> } {code}
> The issue, I think, is the `==` in the canRenameTo() method - because I am 
> actually moving from the temp file to the final file on the same file system, 
> which means this should be returning true not false, right? presumably we 
> should be using `.equals()` here, and overriding equals in the appropriate 
> type of `FileSystem` object?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (COMPRESS-638) The GzipCompressorOutputStream#writeHeader method uses the ISO_8859_1 to write the file name. If the file name contains non-ISO_8859_1 characters, some unknown chara

2023-01-20 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17679352#comment-17679352
 ] 

Gary D. Gregory commented on COMPRESS-638:
--

Sure, I'll take a look this weekend, but where is your proposal? A file name 
can't have percent signs in it on Windows, IIRC. Regarding logging, I do not think 
lower-level components like Commons should log, so I'd rather not add more logging. 

> The GzipCompressorOutputStream#writeHeader method uses the ISO_8859_1 to 
> write the file name.  If the file name contains non-ISO_8859_1 characters, 
> some unknown characters are displayed after decompression.
> --
>
> Key: COMPRESS-638
> URL: https://issues.apache.org/jira/browse/COMPRESS-638
> Project: Commons Compress
>  Issue Type: Bug
>Reporter: Radar wen
>Priority: Major
> Attachments: 0110.png
>
>
> The GzipCompressorOutputStream#writeHeader method uses the ISO_8859_1 to 
> write the file name. 
> If the file name contains non-ISO_8859_1 characters, some unknown characters 
> are displayed after decompression. !0110.png!
>  Can change the ISO_8859_1 to UTF-8? 
>         if (filename != null) {
>             out.write(filename.getBytes(ISO_8859_1));
>             out.write(0);
>         }
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (COMPRESS-638) The GzipCompressorOutputStream#writeHeader() uses ISO_8859_1 to write the file name and comment. If the strings contains non-ISO_8859_1 characters, unknown characters

2023-01-21 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory updated COMPRESS-638:
-
Summary: The GzipCompressorOutputStream#writeHeader() uses ISO_8859_1 to 
write the file name and comment.  If the strings contains non-ISO_8859_1 
characters, unknown characters are displayed after decompression. Use percent 
encoding for non ISO_8859_1 characters.  (was: The 
GzipCompressorOutputStream#writeHeader method uses the ISO_8859_1 to write the 
file name.  If the file name contains non-ISO_8859_1 characters, some unknown 
characters are displayed after decompression.)

> The GzipCompressorOutputStream#writeHeader() uses ISO_8859_1 to write the 
> file name and comment.  If the strings contains non-ISO_8859_1 characters, 
> unknown characters are displayed after decompression. Use percent encoding 
> for non ISO_8859_1 characters.
> --
>
> Key: COMPRESS-638
> URL: https://issues.apache.org/jira/browse/COMPRESS-638
> Project: Commons Compress
>  Issue Type: Bug
>Reporter: Radar wen
>Priority: Major
> Attachments: 0110.png
>
>
> The GzipCompressorOutputStream#writeHeader method uses the ISO_8859_1 to 
> write the file name. 
> If the file name contains non-ISO_8859_1 characters, some unknown characters 
> are displayed after decompression. !0110.png!
>  Can change the ISO_8859_1 to UTF-8? 
>         if (filename != null) {
>             out.write(filename.getBytes(ISO_8859_1));
>             out.write(0);
>         }
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (COMPRESS-638) The GzipCompressorOutputStream#writeHeader() uses ISO_8859_1 to write the file name and comment. If the strings contains non-ISO_8859_1 characters, unknown characters

2023-01-21 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory resolved COMPRESS-638.
--
Fix Version/s: 1.23
   Resolution: Fixed

> The GzipCompressorOutputStream#writeHeader() uses ISO_8859_1 to write the 
> file name and comment.  If the strings contains non-ISO_8859_1 characters, 
> unknown characters are displayed after decompression. Use percent encoding 
> for non ISO_8859_1 characters.
> --
>
> Key: COMPRESS-638
> URL: https://issues.apache.org/jira/browse/COMPRESS-638
> Project: Commons Compress
>  Issue Type: Bug
>Reporter: Radar wen
>Priority: Major
> Fix For: 1.23
>
> Attachments: 0110.png
>
>
> The GzipCompressorOutputStream#writeHeader method uses the ISO_8859_1 to 
> write the file name. 
> If the file name contains non-ISO_8859_1 characters, some unknown characters 
> are displayed after decompression. !0110.png!
>  Can change the ISO_8859_1 to UTF-8? 
>         if (filename != null) {
>             out.write(filename.getBytes(ISO_8859_1));
>             out.write(0);
>         }
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (COMPRESS-638) The GzipCompressorOutputStream#writeHeader() uses ISO_8859_1 to write the file name and comment. If the strings contains non-ISO_8859_1 characters, unknown character

2023-01-21 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17679451#comment-17679451
 ] 

Gary D. Gregory commented on COMPRESS-638:
--

Percent-encoding is now used for non-ISO_8859_1 characters.

Please see git master or a snapshot build here: 
https://repository.apache.org/content/repositories/snapshots/org/apache/commons/commons-compress/1.23-SNAPSHOT/

This should be considered a workaround, IMO, and we could change the 
percent-encoding format to something else in the future if needed.

There is no round trip back to the non-ISO_8859_1 characters when reading a gzip 
file, since we cannot tell what the intent of the file name bytes really is, 
unless we used some special marker, which is possible in the future, I suppose.
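
For context, a rough sketch of the idea (an assumption for illustration, not the code shipped in Commons Compress): characters that ISO-8859-1 can represent pass through unchanged, and anything else is replaced by the percent-encoded bytes of its UTF-8 form, so the gzip FNAME field stays within a single-byte charset.

{code:java}
import java.nio.charset.StandardCharsets;

public final class GzipNameEncoding {

    // Hypothetical sketch of percent encoding for a gzip file name: keep characters
    // representable in ISO-8859-1, escape the UTF-8 bytes of everything else as %XX.
    public static String percentEncodeNonLatin1(final String name) {
        final StringBuilder sb = new StringBuilder(name.length());
        int i = 0;
        while (i < name.length()) {
            final int cp = name.codePointAt(i);
            if (cp <= 0xFF) {
                sb.append((char) cp); // fits in ISO-8859-1, keep as-is
            } else {
                for (final byte b : new String(Character.toChars(cp)).getBytes(StandardCharsets.UTF_8)) {
                    sb.append('%').append(String.format("%02X", b & 0xFF));
                }
            }
            i += Character.charCount(cp);
        }
        return sb.toString();
    }
}
{code}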


> The GzipCompressorOutputStream#writeHeader() uses ISO_8859_1 to write the 
> file name and comment.  If the strings contains non-ISO_8859_1 characters, 
> unknown characters are displayed after decompression. Use percent encoding 
> for non ISO_8859_1 characters.
> --
>
> Key: COMPRESS-638
> URL: https://issues.apache.org/jira/browse/COMPRESS-638
> Project: Commons Compress
>  Issue Type: Bug
>Reporter: Radar wen
>Priority: Major
> Fix For: 1.23
>
> Attachments: 0110.png
>
>
> The GzipCompressorOutputStream#writeHeader method uses the ISO_8859_1 to 
> write the file name. 
> If the file name contains non-ISO_8859_1 characters, some unknown characters 
> are displayed after decompression. !0110.png!
>  Can change the ISO_8859_1 to UTF-8? 
>         if (filename != null) {
>             out.write(filename.getBytes(ISO_8859_1));
>             out.write(0);
>         }
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (COMPRESS-638) The GzipCompressorOutputStream#writeHeader() uses ISO_8859_1 to write the file name and comment. If the strings contains non-ISO_8859_1 characters, unknown char

2023-01-21 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17679451#comment-17679451
 ] 

Gary D. Gregory edited comment on COMPRESS-638 at 1/21/23 2:29 PM:
---

Percent-encoding is now used for non-ISO_8859_1 characters.

Please see git master or a snapshot build here: 
https://repository.apache.org/content/repositories/snapshots/org/apache/commons/commons-compress/1.23-SNAPSHOT/

This should be considered a workaround IMO and we could change the 
percent-encoding format to something else in the future if needed.

There is no roundtrip back to the non-ISO_8859_1 characters when reading a GZip 
since we cannot tell what the intent of the file name bytes really are unless 
we used some special marker, which is possible in the future in suppose.



was (Author: garydgregory):
Percent-encoding is now used for non-ISO_8859_1 characters.

Please see git master or a snapshot build here: 
https://repository.apache.org/content/repositories/snapshots/org/apache/commons/commons-compress/1.23-SNAPSHOT/

This should be considered a workaround IMO and we could change the 
percent-encoding format to something else in the future if needed.

There is no roundtrip back to the non-ISO_8859_1 characters when reading a GZip 
since we cannot tell what the intent of the file name bytes really is, unless 
we used some special marker, which is possible in the future in suppose.


> The GzipCompressorOutputStream#writeHeader() uses ISO_8859_1 to write the 
> file name and comment.  If the strings contains non-ISO_8859_1 characters, 
> unknown characters are displayed after decompression. Use percent encoding 
> for non ISO_8859_1 characters.
> --
>
> Key: COMPRESS-638
> URL: https://issues.apache.org/jira/browse/COMPRESS-638
> Project: Commons Compress
>  Issue Type: Bug
>Reporter: Radar wen
>Priority: Major
> Fix For: 1.23
>
> Attachments: 0110.png
>
>
> The GzipCompressorOutputStream#writeHeader method uses the ISO_8859_1 to 
> write the file name. 
> If the file name contains non-ISO_8859_1 characters, some unknown characters 
> are displayed after decompression. !0110.png!
>  Can change the ISO_8859_1 to UTF-8? 
>         if (filename != null) {
>             out.write(filename.getBytes(ISO_8859_1));
>             out.write(0);
>         }
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (COMPRESS-638) The GzipCompressorOutputStream#writeHeader() uses ISO_8859_1 to write the file name and comment. If the strings contains non-ISO_8859_1 characters, unknown char

2023-01-21 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17679451#comment-17679451
 ] 

Gary D. Gregory edited comment on COMPRESS-638 at 1/21/23 2:29 PM:
---

Percent-encoding is now used for non-ISO_8859_1 characters.

Please see git master or a snapshot build here: 
https://repository.apache.org/content/repositories/snapshots/org/apache/commons/commons-compress/1.23-SNAPSHOT/

This should be considered a workaround IMO and we could change the 
percent-encoding format to something else in the future if needed.

There is no roundtrip back to the non-ISO_8859_1 characters when reading a GZip 
since we cannot tell what the intent of the file name bytes really is unless we 
used some special marker, which is possible in the future in suppose.



was (Author: garydgregory):
Percent-encoding is now used for non-ISO_8859_1 characters.

Please see git master or a snapshot build here: 
https://repository.apache.org/content/repositories/snapshots/org/apache/commons/commons-compress/1.23-SNAPSHOT/

This should be considered a workaround IMO and we could change the 
percent-encoding format to something else in the future if needed.

There is no roundtrip back to the non-ISO_8859_1 characters when reading a GZip 
since we cannot tell what the intent of the file name bytes really are unless 
we used some special marker, which is possible in the future in suppose.


> The GzipCompressorOutputStream#writeHeader() uses ISO_8859_1 to write the 
> file name and comment.  If the strings contains non-ISO_8859_1 characters, 
> unknown characters are displayed after decompression. Use percent encoding 
> for non ISO_8859_1 characters.
> --
>
> Key: COMPRESS-638
> URL: https://issues.apache.org/jira/browse/COMPRESS-638
> Project: Commons Compress
>  Issue Type: Bug
>Reporter: Radar wen
>Priority: Major
> Fix For: 1.23
>
> Attachments: 0110.png
>
>
> The GzipCompressorOutputStream#writeHeader method uses the ISO_8859_1 to 
> write the file name. 
> If the file name contains non-ISO_8859_1 characters, some unknown characters 
> are displayed after decompression. !0110.png!
>  Can change the ISO_8859_1 to UTF-8? 
>         if (filename != null) {
>             out.write(filename.getBytes(ISO_8859_1));
>             out.write(0);
>         }
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (CSV-141) Handle malformed CSV files

2023-01-22 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/CSV-141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17679556#comment-17679556
 ] 

Gary D. Gregory commented on CSV-141:
-

Please see 
https://github.com/apache/commons-csv/pull/295#issuecomment-1399319422

> Handle malformed CSV files
> --
>
> Key: CSV-141
> URL: https://issues.apache.org/jira/browse/CSV-141
> Project: Commons CSV
>  Issue Type: Wish
>  Components: Parser
>Affects Versions: 1.0
>Reporter: Nguyen Minh
>Priority: Minor
> Fix For: 1.x
>
> Attachments: image-2023-01-04-14-00-57-158.png
>
>
> My Java application has to handle thousands of CSV files uploaded by the 
> client phones every day. Some of these CSV files have the wrong format, and 
> I'm not sure why.
> Here is my sample CSV. Microsoft Excel parses it correctly, but neither Commons 
> CSV nor OpenCSV can parse it. OpenCSV can't parse line 2 (due to the '\' 
> character) and Commons CSV will crash on lines 3 and 4:
> "1414770317901","android.widget.EditText","pass sem1 _84*|*","0","pass sem1 
> _8"
> "1414770318470","android.widget.EditText","pass sem1 _84:*|*","0","pass sem1 
> _84:\"
> "1414770318327","android.widget.EditText","pass sem1 
> "1414770318628","android.widget.EditText","pass sem1 _84*|*","0","pass sem1
> Line 3: java.io.IOException: (line 5) invalid char between encapsulated token 
> and delimiter
>   at org.apache.commons.csv.CSVParser$1.getNextRecord(CSVParser.java:398)
>   at org.apache.commons.csv.CSVParser$1.hasNext(CSVParser.java:407)
> Line 4: java.io.IOException: (startline 5) EOF reached before encapsulated 
> token finished
>   at org.apache.commons.csv.CSVParser$1.getNextRecord(CSVParser.java:398)
>   at org.apache.commons.csv.CSVParser$1.hasNext(CSVParser.java:407)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (COMPRESS-639) Crash when adding multiple files with the same path and when Zip64Mode is in use.

2023-01-23 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory updated COMPRESS-639:
-
Affects Version/s: (was: 1.23)

> Crash when adding multiple files with the same path and when Zip64Mode is in 
> use.
> -
>
> Key: COMPRESS-639
> URL: https://issues.apache.org/jira/browse/COMPRESS-639
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Archivers, Compressors
>Affects Versions: 1.21, 1.22
> Environment: Tested on MacBook Pro 2019 (2.6 GHz 6-Core Intel Core 
> i7, 32GB DDR4) 
> MacOS 13.1
> JDK 11.0.13
> Tested with commons-compress 1.21, 1.22 and 1.23-SNAPSHOT
>Reporter: Andrew Gawron
>Priority: Major
>
> Crash when adding 2 zip entries to a large archive. The entries had the same 
> name.
> After the investigation we found out that ZipArchiveOutputStream has a race 
> condition. When adding two entries with the same entry name an entry is being 
> added to _entries_ LinkedList and then again it is being added to _metaData_ 
> HashMap. If the modification time ({_}race condition{_} here), name and other 
> params are the same then the metaData is not being updated for the second 
> entry. Then when createCentralFileHeader iterates over _entries_ the first 
> entry is being found in metaData keyset. It gets modified later by adding 
> extras. Then second entry tries to find its metadata but it fails because 
> metaData key has been changed.
> Potential solution: container keys should be immutable and they should not be 
> modified after being added to the container.
> Sample code that triggers exception:
> {code:java}
> @Test
>public void shouldThrowDueToRaceConditionInZipArchiveOutputStream() throws 
> IOException, ExecutionException, InterruptedException {
>var testOutputStream = new ByteArrayOutputStream();
>String fileContent = "A";
>final int NUM_OF_FILES = 100;
>var inputStreams = new LinkedList();
>for (int i = 0; i < NUM_OF_FILES; i++) {
>inputStreams.add(new 
> ByteArrayInputStream(fileContent.getBytes(StandardCharsets.UTF_8)));
>}
>var zipCreator = new ParallelScatterZipCreator();
>var zipArchiveOutputStream = new 
> ZipArchiveOutputStream(testOutputStream);
>zipArchiveOutputStream.setUseZip64(Zip64Mode.Always);
>for (int i = 0; i < inputStreams.size(); i++) {
>ZipArchiveEntry zipArchiveEntry = new 
> ZipArchiveEntry("./dir/myfile.txt");
>zipArchiveEntry.setMethod(ZipEntry.DEFLATED);
>final var inputStream = inputStreams.get(i);
>zipCreator.addArchiveEntry(zipArchiveEntry, () -> inputStream);
>}
>zipCreator.writeTo(zipArchiveOutputStream);
>zipArchiveOutputStream.close(); // it will throw NullPointerException 
> here
>}  {code}
> Exception:
> {code:java}
> /* java.lang.NullPointerException at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream$EntryMetaData.access$800(ZipArchiveOutputStream.java:1998)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.createCentralFileHeader(ZipArchiveOutputStream.java:1356)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.writeCentralDirectoryInChunks(ZipArchiveOutputStream.java:580)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.finish(ZipArchiveOutputStream.java:546)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.close(ZipArchiveOutputStream.java:1090)
>  at 
> com.xxx.yyy.impl.backuprestore.backup.container.StreamZipWriterTest.shouldThrowDueToRaceConditionInZipArchiveOutputStream(StreamZipWriterTest.java:130)
>  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.base/java.lang.reflect.Method.invoke(Method.java:566) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.mockito.internal.runners.DefaultInternalRunner$1$1.evaluate(DefaultInternalRunner.java:55)
>  at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRun

[jira] [Updated] (COMPRESS-639) NullPointerException when adding multiple files with the same path with Zip64Mode

2023-01-23 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory updated COMPRESS-639:
-
Summary: NullPointerException when adding multiple files with the same path 
with Zip64Mode  (was: NullPointerException when adding multiple files with the 
same path and when Zip64Mode is in use.)

> NullPointerException when adding multiple files with the same path with 
> Zip64Mode
> -
>
> Key: COMPRESS-639
> URL: https://issues.apache.org/jira/browse/COMPRESS-639
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Archivers, Compressors
>Affects Versions: 1.21, 1.22
> Environment: Tested on MacBook Pro 2019 (2.6 GHz 6-Core Intel Core 
> i7, 32GB DDR4) 
> MacOS 13.1
> JDK 11.0.13
> Tested with commons-compress 1.21, 1.22 and 1.23-SNAPSHOT
>Reporter: Andrew Gawron
>Priority: Major
>
> Crash when adding 2 zip entries to a large archive. The entries had the same 
> name.
> After the investigation we found out that ZipArchiveOutputStream has a race 
> condition. When adding two entries with the same entry name an entry is being 
> added to _entries_ LinkedList and then again it is being added to _metaData_ 
> HashMap. If the modification time ({_}race condition{_} here), name and other 
> params are the same then the metaData is not being updated for the second 
> entry. Then when createCentralFileHeader iterates over _entries_ the first 
> entry is being found in metaData keyset. It gets modified later by adding 
> extras. Then second entry tries to find its metadata but it fails because 
> metaData key has been changed.
> Potential solution: container keys should be immutable and they should not be 
> modified after being added to the container.
> Sample code that triggers exception:
> {code:java}
> @Test
>public void shouldThrowDueToRaceConditionInZipArchiveOutputStream() throws 
> IOException, ExecutionException, InterruptedException {
>var testOutputStream = new ByteArrayOutputStream();
>String fileContent = "A";
>final int NUM_OF_FILES = 100;
>var inputStreams = new LinkedList();
>for (int i = 0; i < NUM_OF_FILES; i++) {
>inputStreams.add(new 
> ByteArrayInputStream(fileContent.getBytes(StandardCharsets.UTF_8)));
>}
>var zipCreator = new ParallelScatterZipCreator();
>var zipArchiveOutputStream = new 
> ZipArchiveOutputStream(testOutputStream);
>zipArchiveOutputStream.setUseZip64(Zip64Mode.Always);
>for (int i = 0; i < inputStreams.size(); i++) {
>ZipArchiveEntry zipArchiveEntry = new 
> ZipArchiveEntry("./dir/myfile.txt");
>zipArchiveEntry.setMethod(ZipEntry.DEFLATED);
>final var inputStream = inputStreams.get(i);
>zipCreator.addArchiveEntry(zipArchiveEntry, () -> inputStream);
>}
>zipCreator.writeTo(zipArchiveOutputStream);
>zipArchiveOutputStream.close(); // it will throw NullPointerException 
> here
>}  {code}
> Exception:
> {code:java}
> /* java.lang.NullPointerException at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream$EntryMetaData.access$800(ZipArchiveOutputStream.java:1998)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.createCentralFileHeader(ZipArchiveOutputStream.java:1356)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.writeCentralDirectoryInChunks(ZipArchiveOutputStream.java:580)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.finish(ZipArchiveOutputStream.java:546)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.close(ZipArchiveOutputStream.java:1090)
>  at 
> com.xxx.yyy.impl.backuprestore.backup.container.StreamZipWriterTest.shouldThrowDueToRaceConditionInZipArchiveOutputStream(StreamZipWriterTest.java:130)
>  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.base/java.lang.reflect.Method.invoke(Method.java:566) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.mockito.internal.runners.DefaultInternalRunner$1$1.evaluate(DefaultInternalR

[jira] [Updated] (COMPRESS-639) NullPointerException when adding multiple files with the same path and when Zip64Mode is in use.

2023-01-23 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory updated COMPRESS-639:
-
Summary: NullPointerException when adding multiple files with the same path 
and when Zip64Mode is in use.  (was: Crash when adding multiple files with the 
same path and when Zip64Mode is in use.)

> NullPointerException when adding multiple files with the same path and when 
> Zip64Mode is in use.
> 
>
> Key: COMPRESS-639
> URL: https://issues.apache.org/jira/browse/COMPRESS-639
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Archivers, Compressors
>Affects Versions: 1.21, 1.22
> Environment: Tested on MacBook Pro 2019 (2.6 GHz 6-Core Intel Core 
> i7, 32GB DDR4) 
> MacOS 13.1
> JDK 11.0.13
> Tested with commons-compress 1.21, 1.22 and 1.23-SNAPSHOT
>Reporter: Andrew Gawron
>Priority: Major
>
> Crash when adding 2 zip entries to a large archive. The entries had the same 
> name.
> After the investigation we found out that ZipArchiveOutputStream has a race 
> condition. When adding two entries with the same entry name an entry is being 
> added to _entries_ LinkedList and then again it is being added to _metaData_ 
> HashMap. If the modification time ({_}race condition{_} here), name and other 
> params are the same then the metaData is not being updated for the second 
> entry. Then when createCentralFileHeader iterates over _entries_ the first 
> entry is being found in metaData keyset. It gets modified later by adding 
> extras. Then second entry tries to find its metadata but it fails because 
> metaData key has been changed.
> Potential solution: container keys should be immutable and they should not be 
> modified after being added to the container.
> Sample code that triggers exception:
> {code:java}
> @Test
>public void shouldThrowDueToRaceConditionInZipArchiveOutputStream() throws 
> IOException, ExecutionException, InterruptedException {
>var testOutputStream = new ByteArrayOutputStream();
>String fileContent = "A";
>final int NUM_OF_FILES = 100;
>var inputStreams = new LinkedList();
>for (int i = 0; i < NUM_OF_FILES; i++) {
>inputStreams.add(new 
> ByteArrayInputStream(fileContent.getBytes(StandardCharsets.UTF_8)));
>}
>var zipCreator = new ParallelScatterZipCreator();
>var zipArchiveOutputStream = new 
> ZipArchiveOutputStream(testOutputStream);
>zipArchiveOutputStream.setUseZip64(Zip64Mode.Always);
>for (int i = 0; i < inputStreams.size(); i++) {
>ZipArchiveEntry zipArchiveEntry = new 
> ZipArchiveEntry("./dir/myfile.txt");
>zipArchiveEntry.setMethod(ZipEntry.DEFLATED);
>final var inputStream = inputStreams.get(i);
>zipCreator.addArchiveEntry(zipArchiveEntry, () -> inputStream);
>}
>zipCreator.writeTo(zipArchiveOutputStream);
>zipArchiveOutputStream.close(); // it will throw NullPointerException 
> here
>}  {code}
> Exception:
> {code:java}
> /* java.lang.NullPointerException at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream$EntryMetaData.access$800(ZipArchiveOutputStream.java:1998)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.createCentralFileHeader(ZipArchiveOutputStream.java:1356)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.writeCentralDirectoryInChunks(ZipArchiveOutputStream.java:580)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.finish(ZipArchiveOutputStream.java:546)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.close(ZipArchiveOutputStream.java:1090)
>  at 
> com.xxx.yyy.impl.backuprestore.backup.container.StreamZipWriterTest.shouldThrowDueToRaceConditionInZipArchiveOutputStream(StreamZipWriterTest.java:130)
>  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.base/java.lang.reflect.Method.invoke(Method.java:566) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.mockito.internal.runners.DefaultInternalRunner

[jira] [Commented] (COMPRESS-639) NullPointerException when adding multiple files with the same path with Zip64Mode

2023-01-23 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17679769#comment-17679769
 ] 

Gary D. Gregory commented on COMPRESS-639:
--

[~agawron]

The Javadoc for {{ZipArchiveOutputStream}} documents the class as 
{{@NotThreadSafe}}.

I added and disabled the new test: 
{{org.apache.commons.compress.archivers.zip.ParallelScatterZipCreatorTest.sameZipArchiveEntryNotThreadSafe()}}.

Feel free to provide a PR on GitHub if you want to implement the feature.
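
As a practical workaround in the meantime (my suggestion for illustration, not a documented Commons Compress recipe), giving each entry a unique path avoids two entries colliding on the same name. The sketch below adapts the loop from the sample code in the issue description; the file names are made up:

{code:java}
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ExecutionException;
import java.util.zip.ZipEntry;

import org.apache.commons.compress.archivers.zip.ParallelScatterZipCreator;
import org.apache.commons.compress.archivers.zip.Zip64Mode;
import org.apache.commons.compress.archivers.zip.ZipArchiveEntry;
import org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream;

public final class UniqueEntryNames {

    // Adds numOfFiles small entries, each under a unique (made-up) path, so that no two
    // entries share the same name inside ZipArchiveOutputStream.
    public static void writeZip(final OutputStream out, final int numOfFiles)
            throws IOException, ExecutionException, InterruptedException {
        final ParallelScatterZipCreator zipCreator = new ParallelScatterZipCreator();
        try (ZipArchiveOutputStream zipOut = new ZipArchiveOutputStream(out)) {
            zipOut.setUseZip64(Zip64Mode.Always);
            for (int i = 0; i < numOfFiles; i++) {
                final ZipArchiveEntry entry = new ZipArchiveEntry("./dir/myfile-" + i + ".txt");
                entry.setMethod(ZipEntry.DEFLATED);
                zipCreator.addArchiveEntry(entry,
                        () -> new ByteArrayInputStream("A".getBytes(StandardCharsets.UTF_8)));
            }
            zipCreator.writeTo(zipOut);
        }
    }
}
{code}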


> NullPointerException when adding multiple files with the same path with 
> Zip64Mode
> -
>
> Key: COMPRESS-639
> URL: https://issues.apache.org/jira/browse/COMPRESS-639
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Archivers, Compressors
>Affects Versions: 1.21, 1.22
> Environment: Tested on MacBook Pro 2019 (2.6 GHz 6-Core Intel Core 
> i7, 32GB DDR4) 
> MacOS 13.1
> JDK 11.0.13
> Tested with commons-compress 1.21, 1.22 and 1.23-SNAPSHOT
>Reporter: Andrew Gawron
>Priority: Major
>
> Crash when adding 2 zip entries to a large archive. The entries had the same 
> name.
> After the investigation we found out that ZipArchiveOutputStream has a race 
> condition. When adding two entries with the same entry name an entry is being 
> added to _entries_ LinkedList and then again it is being added to _metaData_ 
> HashMap. If the modification time ({_}race condition{_} here), name and other 
> params are the same then the metaData is not being updated for the second 
> entry. Then when createCentralFileHeader iterates over _entries_ the first 
> entry is being found in metaData keyset. It gets modified later by adding 
> extras. Then second entry tries to find its metadata but it fails because 
> metaData key has been changed.
> Potential solution: container keys should be immutable and they should not be 
> modified after being added to the container.
> Sample code that triggers exception:
> {code:java}
> @Test
>public void shouldThrowDueToRaceConditionInZipArchiveOutputStream() throws 
> IOException, ExecutionException, InterruptedException {
>var testOutputStream = new ByteArrayOutputStream();
>String fileContent = "A";
>final int NUM_OF_FILES = 100;
>var inputStreams = new LinkedList();
>for (int i = 0; i < NUM_OF_FILES; i++) {
>inputStreams.add(new 
> ByteArrayInputStream(fileContent.getBytes(StandardCharsets.UTF_8)));
>}
>var zipCreator = new ParallelScatterZipCreator();
>var zipArchiveOutputStream = new 
> ZipArchiveOutputStream(testOutputStream);
>zipArchiveOutputStream.setUseZip64(Zip64Mode.Always);
>for (int i = 0; i < inputStreams.size(); i++) {
>ZipArchiveEntry zipArchiveEntry = new 
> ZipArchiveEntry("./dir/myfile.txt");
>zipArchiveEntry.setMethod(ZipEntry.DEFLATED);
>final var inputStream = inputStreams.get(i);
>zipCreator.addArchiveEntry(zipArchiveEntry, () -> inputStream);
>}
>zipCreator.writeTo(zipArchiveOutputStream);
>zipArchiveOutputStream.close(); // it will throw NullPointerException 
> here
>}  {code}
> Exception:
> {code:java}
> /* java.lang.NullPointerException at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream$EntryMetaData.access$800(ZipArchiveOutputStream.java:1998)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.createCentralFileHeader(ZipArchiveOutputStream.java:1356)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.writeCentralDirectoryInChunks(ZipArchiveOutputStream.java:580)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.finish(ZipArchiveOutputStream.java:546)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.close(ZipArchiveOutputStream.java:1090)
>  at 
> com.xxx.yyy.impl.backuprestore.backup.container.StreamZipWriterTest.shouldThrowDueToRaceConditionInZipArchiveOutputStream(StreamZipWriterTest.java:130)
>  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.base/java.lang.reflect.Method.invoke(Method.java:566) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:

[jira] [Updated] (COMPRESS-639) NullPointerException when adding multiple files with the same path with Zip64Mode

2023-01-23 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory updated COMPRESS-639:
-
Issue Type: New Feature  (was: Bug)

> NullPointerException when adding multiple files with the same path with 
> Zip64Mode
> -
>
> Key: COMPRESS-639
> URL: https://issues.apache.org/jira/browse/COMPRESS-639
> Project: Commons Compress
>  Issue Type: New Feature
>  Components: Archivers, Compressors
>Affects Versions: 1.21, 1.22
> Environment: Tested on MacBook Pro 2019 (2.6 GHz 6-Core Intel Core 
> i7, 32GB DDR4) 
> MacOS 13.1
> JDK 11.0.13
> Tested with commons-compress 1.21, 1.22 and 1.23-SNAPSHOT
>Reporter: Andrew Gawron
>Priority: Major
>
> Crash when adding 2 zip entries to a large archive. The entries had the same 
> name.
> After the investigation we found out that ZipArchiveOutputStream has a race 
> condition. When adding two entries with the same entry name an entry is being 
> added to _entries_ LinkedList and then again it is being added to _metaData_ 
> HashMap. If the modification time ({_}race condition{_} here), name and other 
> params are the same then the metaData is not being updated for the second 
> entry. Then when createCentralFileHeader iterates over _entries_ the first 
> entry is being found in metaData keyset. It gets modified later by adding 
> extras. Then second entry tries to find its metadata but it fails because 
> metaData key has been changed.
> Potential solution: container keys should be immutable and they should not be 
> modified after being added to the container.
> Sample code that triggers exception:
> {code:java}
> @Test
>public void shouldThrowDueToRaceConditionInZipArchiveOutputStream() throws 
> IOException, ExecutionException, InterruptedException {
>var testOutputStream = new ByteArrayOutputStream();
>String fileContent = "A";
>final int NUM_OF_FILES = 100;
>var inputStreams = new LinkedList();
>for (int i = 0; i < NUM_OF_FILES; i++) {
>inputStreams.add(new 
> ByteArrayInputStream(fileContent.getBytes(StandardCharsets.UTF_8)));
>}
>var zipCreator = new ParallelScatterZipCreator();
>var zipArchiveOutputStream = new 
> ZipArchiveOutputStream(testOutputStream);
>zipArchiveOutputStream.setUseZip64(Zip64Mode.Always);
>for (int i = 0; i < inputStreams.size(); i++) {
>ZipArchiveEntry zipArchiveEntry = new 
> ZipArchiveEntry("./dir/myfile.txt");
>zipArchiveEntry.setMethod(ZipEntry.DEFLATED);
>final var inputStream = inputStreams.get(i);
>zipCreator.addArchiveEntry(zipArchiveEntry, () -> inputStream);
>}
>zipCreator.writeTo(zipArchiveOutputStream);
>zipArchiveOutputStream.close(); // it will throw NullPointerException 
> here
>}  {code}
> Exception:
> {code:java}
> /* java.lang.NullPointerException at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream$EntryMetaData.access$800(ZipArchiveOutputStream.java:1998)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.createCentralFileHeader(ZipArchiveOutputStream.java:1356)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.writeCentralDirectoryInChunks(ZipArchiveOutputStream.java:580)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.finish(ZipArchiveOutputStream.java:546)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.close(ZipArchiveOutputStream.java:1090)
>  at 
> com.xxx.yyy.impl.backuprestore.backup.container.StreamZipWriterTest.shouldThrowDueToRaceConditionInZipArchiveOutputStream(StreamZipWriterTest.java:130)
>  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.base/java.lang.reflect.Method.invoke(Method.java:566) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.mockito.internal.runners.DefaultInternalRunner$1$1.evaluate(DefaultInternalRunner.java:55)
>  at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit

[jira] [Commented] (CONFIGURATION-827) Enable to set custom AutoSaveListener or register FileHandlerListenerAdapter

2023-01-23 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/CONFIGURATION-827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17679774#comment-17679774
 ] 

Gary D. Gregory commented on CONFIGURATION-827:
---

Hello [~AllaG] 

Thank you for your report.

Please feel free to create a PR on GitHub so we can see what your exact 
proposal is.

TY!
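
In the meantime, one workaround that stays on the public API (my reading of the use case, not a Commons Configuration feature) is to register an ordinary ConfigurationEvent listener on the built configuration and trigger the save yourself, instead of hooking into the package-private AutoSaveListener:

{code:java}
import org.apache.commons.configuration2.PropertiesConfiguration;
import org.apache.commons.configuration2.builder.FileBasedConfigurationBuilder;
import org.apache.commons.configuration2.builder.fluent.Configurations;
import org.apache.commons.configuration2.event.ConfigurationEvent;
import org.apache.commons.configuration2.ex.ConfigurationException;

public final class ManualAutoSave {

    // Sketch: save through the builder after every completed update, mimicking
    // auto-save behavior without extending the package-private AutoSaveListener.
    // A real implementation should also guard against re-entrant saves.
    public static PropertiesConfiguration load(final String fileName) throws ConfigurationException {
        final FileBasedConfigurationBuilder<PropertiesConfiguration> builder =
                new Configurations().propertiesBuilder(fileName);
        final PropertiesConfiguration config = builder.getConfiguration();
        config.addEventListener(ConfigurationEvent.ANY, event -> {
            if (!event.isBeforeUpdate()) {
                try {
                    builder.save();
                } catch (final ConfigurationException e) {
                    throw new IllegalStateException("Auto-save failed", e);
                }
            }
        });
        return config;
    }
}
{code}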

> Enable to set custom AutoSaveListener or register FileHandlerListenerAdapter
> 
>
> Key: CONFIGURATION-827
> URL: https://issues.apache.org/jira/browse/CONFIGURATION-827
> Project: Commons Configuration
>  Issue Type: Bug
>  Components: File reloading
>Affects Versions: 2.8.0
>Reporter: Alla Gofman
>Priority: Major
>
> I would like to extend AutoSaveListener (which is package-private) in order to:
> 1) override the onEvent(final ConfigurationEvent event) behavior and set the 
> custom listener in FileBasedConfigurationBuilder, or
> 2) register a custom AutoSaveListener for the saving() event.
> Neither is currently possible.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (CRYPTO-156) Common Class Padding, Transform and AlgorithmMode

2023-01-23 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/CRYPTO-156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory updated CRYPTO-156:
---
Fix Version/s: 1.2.1
   (was: 1.2.0)

> Common Class Padding, Transform and AlgorithmMode
> -
>
> Key: CRYPTO-156
> URL: https://issues.apache.org/jira/browse/CRYPTO-156
> Project: Commons Crypto
>  Issue Type: Improvement
>  Components: Cipher
>Affects Versions: 1.1.0
>Reporter: Arturo Bernal
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.2.1
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> In order to avoid duplicate code and to unify the handling of the 
> transformation string, I think it's necessary to create the following 
> classes/enums:
>  * {{OpenSslTransform}} --> Utility code for dealing with different algorithm 
> types
>  * {{OpenSslPadding}} --> Enumeration of cipher algorithm paddings
>  * {{OpenSslAlgorithmMode}} --> Enumeration of algorithm modes.
> [https://github.com/apache/commons-crypto/blob/master/src/main/java/org/apache/commons/crypto/cipher/OpenSsl.java#L208]
>  
> [https://github.com/apache/commons-crypto/blob/master/src/main/java/org/apache/commons/crypto/jna/OpenSslJnaCipher.java#L422]
>  
> [https://github.com/apache/commons-crypto/blob/master/src/main/java/org/apache/commons/crypto/cipher/OpenSsl.java#L47]
>  
> [https://github.com/apache/commons-crypto/blob/master/src/main/java/org/apache/commons/crypto/jna/OpenSslJnaCipher.java#L399]
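
For illustration, a minimal sketch of the kind of enum proposed above; the constant set and the parsing helper are assumptions for this example, not code from Commons Crypto:

{code:java}
// Hypothetical shape for the proposed OpenSslPadding enum; the padding names match
// those accepted in transformation strings such as "AES/CBC/PKCS5Padding".
public enum OpenSslPadding {
    NoPadding,
    PKCS5Padding;

    /** Parses the padding part of a transformation string, case-insensitively. */
    public static OpenSslPadding fromName(final String name) {
        for (final OpenSslPadding padding : values()) {
            if (padding.name().equalsIgnoreCase(name)) {
                return padding;
            }
        }
        throw new IllegalArgumentException("Unsupported padding: " + name);
    }
}
{code}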



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (COMPRESS-639) NullPointerException when adding multiple files with the same path with Zip64Mode

2023-01-23 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17680061#comment-17680061
 ] 

Gary D. Gregory commented on COMPRESS-639:
--

Ah, ok, then it's not a typical race condition. I'll take a look.

> NullPointerException when adding multiple files with the same path with 
> Zip64Mode
> -
>
> Key: COMPRESS-639
> URL: https://issues.apache.org/jira/browse/COMPRESS-639
> Project: Commons Compress
>  Issue Type: New Feature
>  Components: Archivers, Compressors
>Affects Versions: 1.21, 1.22
> Environment: Tested on MacBook Pro 2019 (2.6 GHz 6-Core Intel Core 
> i7, 32GB DDR4) 
> MacOS 13.1
> JDK 11.0.13
> Tested with commons-compress 1.21, 1.22 and 1.23-SNAPSHOT
>Reporter: Andrew Gawron
>Priority: Major
>
> Crash when adding 2 zip entries to a large archive. The entries had the same 
> name.
> After the investigation we found out that ZipArchiveOutputStream has a race 
> condition. When adding two entries with the same entry name an entry is being 
> added to _entries_ LinkedList and then again it is being added to _metaData_ 
> HashMap. If the modification time ({_}race condition{_} here), name and other 
> params are the same then the metaData is not being updated for the second 
> entry. Then when createCentralFileHeader iterates over _entries_ the first 
> entry is being found in metaData keyset. It gets modified later by adding 
> extras. Then second entry tries to find its metadata but it fails because 
> metaData key has been changed.
> Potential solution: container keys should be immutable and they should not be 
> modified after being added to the container.
> Sample code that triggers exception:
> {code:java}
> @Test
>public void shouldThrowDueToRaceConditionInZipArchiveOutputStream() throws 
> IOException, ExecutionException, InterruptedException {
>var testOutputStream = new ByteArrayOutputStream();
>String fileContent = "A";
>final int NUM_OF_FILES = 100;
>var inputStreams = new LinkedList();
>for (int i = 0; i < NUM_OF_FILES; i++) {
>inputStreams.add(new 
> ByteArrayInputStream(fileContent.getBytes(StandardCharsets.UTF_8)));
>}
>var zipCreator = new ParallelScatterZipCreator();
>var zipArchiveOutputStream = new 
> ZipArchiveOutputStream(testOutputStream);
>zipArchiveOutputStream.setUseZip64(Zip64Mode.Always);
>for (int i = 0; i < inputStreams.size(); i++) {
>ZipArchiveEntry zipArchiveEntry = new 
> ZipArchiveEntry("./dir/myfile.txt");
>zipArchiveEntry.setMethod(ZipEntry.DEFLATED);
>final var inputStream = inputStreams.get(i);
>zipCreator.addArchiveEntry(zipArchiveEntry, () -> inputStream);
>}
>zipCreator.writeTo(zipArchiveOutputStream);
>zipArchiveOutputStream.close(); // it will throw NullPointerException 
> here
>}  {code}
> Exception:
> {code:java}
> /* java.lang.NullPointerException at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream$EntryMetaData.access$800(ZipArchiveOutputStream.java:1998)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.createCentralFileHeader(ZipArchiveOutputStream.java:1356)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.writeCentralDirectoryInChunks(ZipArchiveOutputStream.java:580)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.finish(ZipArchiveOutputStream.java:546)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.close(ZipArchiveOutputStream.java:1090)
>  at 
> com.xxx.yyy.impl.backuprestore.backup.container.StreamZipWriterTest.shouldThrowDueToRaceConditionInZipArchiveOutputStream(StreamZipWriterTest.java:130)
>  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.base/java.lang.reflect.Method.invoke(Method.java:566) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.mockito.internal.runners.DefaultInternalRunner$1$1.evaluate(DefaultInternalRunner.java:55)
>  at org.junit.runners.ParentRunner$3.evaluate(ParentR

[jira] [Commented] (COMPRESS-639) NullPointerException when adding multiple files with the same path with Zip64Mode

2023-01-23 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17680072#comment-17680072
 ] 

Gary D. Gregory commented on COMPRESS-639:
--

What looks worrisome but is not directly related is that 
{{org.apache.commons.compress.archivers.zip.ZipArchiveEntry}} breaks the 
{{hashCode()}} \ {{equals()}} contract.
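
For readers unfamiliar with the contract in question: hashCode() and equals() must be derived from the same state, so that objects that compare equal always produce the same hash code. A generic illustration follows; the field names are placeholders, not the actual ZipArchiveEntry fields:

{code:java}
import java.util.Objects;

// Illustrative only: both methods use the same fields, so equal keys always hash alike.
final class EntryKey {

    private final String name;
    private final long time;

    EntryKey(final String name, final long time) {
        this.name = name;
        this.time = time;
    }

    @Override
    public boolean equals(final Object obj) {
        if (this == obj) {
            return true;
        }
        if (!(obj instanceof EntryKey)) {
            return false;
        }
        final EntryKey other = (EntryKey) obj;
        return time == other.time && Objects.equals(name, other.name);
    }

    @Override
    public int hashCode() {
        return Objects.hash(name, time);
    }
}
{code}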

> NullPointerException when adding multiple files with the same path with 
> Zip64Mode
> -
>
> Key: COMPRESS-639
> URL: https://issues.apache.org/jira/browse/COMPRESS-639
> Project: Commons Compress
>  Issue Type: New Feature
>  Components: Archivers, Compressors
>Affects Versions: 1.21, 1.22
> Environment: Tested on MacBook Pro 2019 (2.6 GHz 6-Core Intel Core 
> i7, 32GB DDR4) 
> MacOS 13.1
> JDK 11.0.13
> Tested with commons-compress 1.21, 1.22 and 1.23-SNAPSHOT
>Reporter: Andrew Gawron
>Priority: Major
>
> Crash when adding 2 zip entries to a large archive. The entries had the same 
> name.
> After the investigation we found out that ZipArchiveOutputStream has a race 
> condition. When adding two entries with the same entry name an entry is being 
> added to _entries_ LinkedList and then again it is being added to _metaData_ 
> HashMap. If the modification time ({_}race condition{_} here), name and other 
> params are the same then the metaData is not being updated for the second 
> entry. Then when createCentralFileHeader iterates over _entries_ the first 
> entry is being found in metaData keyset. It gets modified later by adding 
> extras. Then second entry tries to find its metadata but it fails because 
> metaData key has been changed.
> Potential solution: container keys should be immutable and they should not be 
> modified after being added to the container.
> Sample code that triggers exception:
> {code:java}
> @Test
>public void shouldThrowDueToRaceConditionInZipArchiveOutputStream() throws 
> IOException, ExecutionException, InterruptedException {
>var testOutputStream = new ByteArrayOutputStream();
>String fileContent = "A";
>final int NUM_OF_FILES = 100;
>var inputStreams = new LinkedList();
>for (int i = 0; i < NUM_OF_FILES; i++) {
>inputStreams.add(new 
> ByteArrayInputStream(fileContent.getBytes(StandardCharsets.UTF_8)));
>}
>var zipCreator = new ParallelScatterZipCreator();
>var zipArchiveOutputStream = new 
> ZipArchiveOutputStream(testOutputStream);
>zipArchiveOutputStream.setUseZip64(Zip64Mode.Always);
>for (int i = 0; i < inputStreams.size(); i++) {
>ZipArchiveEntry zipArchiveEntry = new 
> ZipArchiveEntry("./dir/myfile.txt");
>zipArchiveEntry.setMethod(ZipEntry.DEFLATED);
>final var inputStream = inputStreams.get(i);
>zipCreator.addArchiveEntry(zipArchiveEntry, () -> inputStream);
>}
>zipCreator.writeTo(zipArchiveOutputStream);
>zipArchiveOutputStream.close(); // it will throw NullPointerException 
> here
>}  {code}
> Exception:
> {code:java}
> /* java.lang.NullPointerException at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream$EntryMetaData.access$800(ZipArchiveOutputStream.java:1998)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.createCentralFileHeader(ZipArchiveOutputStream.java:1356)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.writeCentralDirectoryInChunks(ZipArchiveOutputStream.java:580)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.finish(ZipArchiveOutputStream.java:546)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.close(ZipArchiveOutputStream.java:1090)
>  at 
> com.xxx.yyy.impl.backuprestore.backup.container.StreamZipWriterTest.shouldThrowDueToRaceConditionInZipArchiveOutputStream(StreamZipWriterTest.java:130)
>  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.base/java.lang.reflect.Method.invoke(Method.java:566) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.mockito.internal.runners.DefaultInternal

[jira] [Comment Edited] (COMPRESS-639) NullPointerException when adding multiple files with the same path with Zip64Mode

2023-01-23 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17680072#comment-17680072
 ] 

Gary D. Gregory edited comment on COMPRESS-639 at 1/24/23 1:53 AM:
---

What looks worrisome is that 
{{org.apache.commons.compress.archivers.zip.ZipArchiveEntry}} breaks the 
{{hashCode()}} \ {{equals()}} contract.

If I implement the contract in the standard style (using all fields for both 
methods), the test passes. Needs more study...


was (Author: garydgregory):
What looks worrisome but is not directly related is that 
{{org.apache.commons.compress.archivers.zip.ZipArchiveEntry}} breaks the 
{{hashCode()}} \ {{equals()}} contract.

> NullPointerException when adding multiple files with the same path with 
> Zip64Mode
> -
>
> Key: COMPRESS-639
> URL: https://issues.apache.org/jira/browse/COMPRESS-639
> Project: Commons Compress
>  Issue Type: New Feature
>  Components: Archivers, Compressors
>Affects Versions: 1.21, 1.22
> Environment: Tested on MacBook Pro 2019 (2.6 GHz 6-Core Intel Core 
> i7, 32GB DDR4) 
> MacOS 13.1
> JDK 11.0.13
> Tested with commons-compress 1.21, 1.22 and 1.23-SNAPSHOT
>Reporter: Andrew Gawron
>Priority: Major
>
> Crash when adding 2 zip entries to a large archive. The entries had the same 
> name.
> After the investigation we found out that ZipArchiveOutputStream has a race 
> condition. When adding two entries with the same entry name an entry is being 
> added to _entries_ LinkedList and then again it is being added to _metaData_ 
> HashMap. If the modification time ({_}race condition{_} here), name and other 
> params are the same then the metaData is not being updated for the second 
> entry. Then when createCentralFileHeader iterates over _entries_ the first 
> entry is being found in metaData keyset. It gets modified later by adding 
> extras. Then second entry tries to find its metadata but it fails because 
> metaData key has been changed.
> Potential solution: container keys should be immutable and they should not be 
> modified after being added to the container.
> Sample code that triggers exception:
> {code:java}
> @Test
>public void shouldThrowDueToRaceConditionInZipArchiveOutputStream() throws 
> IOException, ExecutionException, InterruptedException {
>var testOutputStream = new ByteArrayOutputStream();
>String fileContent = "A";
>final int NUM_OF_FILES = 100;
>var inputStreams = new LinkedList();
>for (int i = 0; i < NUM_OF_FILES; i++) {
>inputStreams.add(new 
> ByteArrayInputStream(fileContent.getBytes(StandardCharsets.UTF_8)));
>}
>var zipCreator = new ParallelScatterZipCreator();
>var zipArchiveOutputStream = new 
> ZipArchiveOutputStream(testOutputStream);
>zipArchiveOutputStream.setUseZip64(Zip64Mode.Always);
>for (int i = 0; i < inputStreams.size(); i++) {
>ZipArchiveEntry zipArchiveEntry = new 
> ZipArchiveEntry("./dir/myfile.txt");
>zipArchiveEntry.setMethod(ZipEntry.DEFLATED);
>final var inputStream = inputStreams.get(i);
>zipCreator.addArchiveEntry(zipArchiveEntry, () -> inputStream);
>}
>zipCreator.writeTo(zipArchiveOutputStream);
>zipArchiveOutputStream.close(); // it will throw NullPointerException 
> here
>}  {code}
> Exception:
> {code:java}
> /* java.lang.NullPointerException at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream$EntryMetaData.access$800(ZipArchiveOutputStream.java:1998)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.createCentralFileHeader(ZipArchiveOutputStream.java:1356)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.writeCentralDirectoryInChunks(ZipArchiveOutputStream.java:580)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.finish(ZipArchiveOutputStream.java:546)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.close(ZipArchiveOutputStream.java:1090)
>  at 
> com.xxx.yyy.impl.backuprestore.backup.container.StreamZipWriterTest.shouldThrowDueToRaceConditionInZipArchiveOutputStream(StreamZipWriterTest.java:130)
>  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.base/java.lang.reflect.Method.invoke(Method.java:566) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(Refl

[jira] [Comment Edited] (COMPRESS-639) NullPointerException when adding multiple files with the same path with Zip64Mode

2023-01-23 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17680072#comment-17680072
 ] 

Gary D. Gregory edited comment on COMPRESS-639 at 1/24/23 1:54 AM:
---

What looks worrisome is that 
{{org.apache.commons.compress.archivers.zip.ZipArchiveEntry}} breaks the 
{{hashCode()}} \ {{equals()}} contract.

If I implement the contract in the standard style (using all fields for both 
methods), the test passes sometimes. Needs more study...


was (Author: garydgregory):
What looks worrisome is that 
{{org.apache.commons.compress.archivers.zip.ZipArchiveEntry}} breaks the 
{{hashCode()}} \ {{equals()}} contract.

If Implement the contract in the standard style (using all fields for both 
methods), the test passes. Needs more study...

> NullPointerException when adding multiple files with the same path with 
> Zip64Mode
> -
>
> Key: COMPRESS-639
> URL: https://issues.apache.org/jira/browse/COMPRESS-639
> Project: Commons Compress
>  Issue Type: New Feature
>  Components: Archivers, Compressors
>Affects Versions: 1.21, 1.22
> Environment: Tested on MacBook Pro 2019 (2.6 GHz 6-Core Intel Core 
> i7, 32GB DDR4) 
> MacOS 13.1
> JDK 11.0.13
> Tested with commons-compress 1.21, 1.22 and 1.23-SNAPSHOT
>Reporter: Andrew Gawron
>Priority: Major
>
> Crash when adding 2 zip entries to a large archive. The entries had the same 
> name.
> After the investigation we found out that ZipArchiveOutputStream has a race 
> condition. When adding two entries with the same entry name an entry is being 
> added to _entries_ LinkedList and then again it is being added to _metaData_ 
> HashMap. If the modification time ({_}race condition{_} here), name and other 
> params are the same then the metaData is not being updated for the second 
> entry. Then when createCentralFileHeader iterates over _entries_ the first 
> entry is being found in metaData keyset. It gets modified later by adding 
> extras. Then second entry tries to find its metadata but it fails because 
> metaData key has been changed.
> Potential solution: container keys should be immutable and they should not be 
> modified after being added to the container.
> Sample code that triggers exception:
> {code:java}
> @Test
>public void shouldThrowDueToRaceConditionInZipArchiveOutputStream() throws 
> IOException, ExecutionException, InterruptedException {
>var testOutputStream = new ByteArrayOutputStream();
>String fileContent = "A";
>final int NUM_OF_FILES = 100;
>var inputStreams = new LinkedList();
>for (int i = 0; i < NUM_OF_FILES; i++) {
>inputStreams.add(new 
> ByteArrayInputStream(fileContent.getBytes(StandardCharsets.UTF_8)));
>}
>var zipCreator = new ParallelScatterZipCreator();
>var zipArchiveOutputStream = new 
> ZipArchiveOutputStream(testOutputStream);
>zipArchiveOutputStream.setUseZip64(Zip64Mode.Always);
>for (int i = 0; i < inputStreams.size(); i++) {
>ZipArchiveEntry zipArchiveEntry = new 
> ZipArchiveEntry("./dir/myfile.txt");
>zipArchiveEntry.setMethod(ZipEntry.DEFLATED);
>final var inputStream = inputStreams.get(i);
>zipCreator.addArchiveEntry(zipArchiveEntry, () -> inputStream);
>}
>zipCreator.writeTo(zipArchiveOutputStream);
>zipArchiveOutputStream.close(); // it will throw NullPointerException 
> here
>}  {code}
> Exception:
> {code:java}
> /* java.lang.NullPointerException at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream$EntryMetaData.access$800(ZipArchiveOutputStream.java:1998)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.createCentralFileHeader(ZipArchiveOutputStream.java:1356)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.writeCentralDirectoryInChunks(ZipArchiveOutputStream.java:580)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.finish(ZipArchiveOutputStream.java:546)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.close(ZipArchiveOutputStream.java:1090)
>  at 
> com.xxx.yyy.impl.backuprestore.backup.container.StreamZipWriterTest.shouldThrowDueToRaceConditionInZipArchiveOutputStream(StreamZipWriterTest.java:130)
>  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.base/java.lang.reflect.Method.invoke(Method.java:566) at 
> org.junit.runners.model.FrameworkMethod$1.runRef

[jira] [Comment Edited] (COMPRESS-639) NullPointerException when adding multiple files with the same path with Zip64Mode

2023-01-23 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17680072#comment-17680072
 ] 

Gary D. Gregory edited comment on COMPRESS-639 at 1/24/23 1:56 AM:
---

What looks worrisome is that 
{{org.apache.commons.compress.archivers.zip.ZipArchiveEntry}} breaks the 
{{hashCode()}} / {{equals()}} contract.

If I implement the contract in the standard style (using all fields for both 
methods), the test passes but others fail. Needs more study...
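
For reference, a minimal sketch of what "the standard style (using all fields 
for both methods)" means here; the class and fields below are hypothetical and 
are not the actual ZipArchiveEntry implementation:
{code:java}
import java.util.Objects;

// Hypothetical entry-like class: equals() and hashCode() use the same fields,
// so equal objects always have equal hash codes (the hashCode/equals contract).
final class EntryKey {
    private final String name;
    private final long size;
    private final long time;

    EntryKey(final String name, final long size, final long time) {
        this.name = name;
        this.size = size;
        this.time = time;
    }

    @Override
    public boolean equals(final Object obj) {
        if (this == obj) {
            return true;
        }
        if (!(obj instanceof EntryKey)) {
            return false;
        }
        final EntryKey other = (EntryKey) obj;
        return size == other.size && time == other.time && Objects.equals(name, other.name);
    }

    @Override
    public int hashCode() {
        return Objects.hash(name, size, time);
    }
}
{code}
Note that if such an object is used as a HashMap key and any of these fields is 
mutated afterwards, lookups will miss, which matches the symptom described in 
the report.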


was (Author: garydgregory):
What looks worrisome is that 
{{org.apache.commons.compress.archivers.zip.ZipArchiveEntry}} breaks the 
{{hashCode()}} / {{equals()}} contract.

If I implement the contract in the standard style (using all fields for both 
methods), the test passes sometimes. Needs more study...

> NullPointerException when adding multiple files with the same path with 
> Zip64Mode
> -
>
> Key: COMPRESS-639
> URL: https://issues.apache.org/jira/browse/COMPRESS-639
> Project: Commons Compress
>  Issue Type: New Feature
>  Components: Archivers, Compressors
>Affects Versions: 1.21, 1.22
> Environment: Tested on MacBook Pro 2019 (2.6 GHz 6-Core Intel Core 
> i7, 32GB DDR4) 
> MacOS 13.1
> JDK 11.0.13
> Tested with commons-compress 1.21, 1.22 and 1.23-SNAPSHOT
>Reporter: Andrew Gawron
>Priority: Major
>
> Crash when adding 2 zip entries to a large archive. The entries had the same 
> name.
> After investigating, we found that ZipArchiveOutputStream has a race 
> condition. When adding two entries with the same entry name, each entry is 
> added to the _entries_ LinkedList and then to the _metaData_ HashMap. If the 
> modification time ({_}race condition{_} here), name, and other params are the 
> same, then the metaData is not updated for the second entry. When 
> createCentralFileHeader iterates over _entries_, the first entry is found in 
> the metaData key set and later gets modified by adding extras. The second 
> entry then fails to find its metadata because the metaData key has been 
> changed.
> Potential solution: container keys should be immutable and they should not be 
> modified after being added to the container.
> Sample code that triggers exception:
> {code:java}
> @Test
>public void shouldThrowDueToRaceConditionInZipArchiveOutputStream() throws 
> IOException, ExecutionException, InterruptedException {
>var testOutputStream = new ByteArrayOutputStream();
>String fileContent = "A";
>final int NUM_OF_FILES = 100;
>var inputStreams = new LinkedList();
>for (int i = 0; i < NUM_OF_FILES; i++) {
>inputStreams.add(new 
> ByteArrayInputStream(fileContent.getBytes(StandardCharsets.UTF_8)));
>}
>var zipCreator = new ParallelScatterZipCreator();
>var zipArchiveOutputStream = new 
> ZipArchiveOutputStream(testOutputStream);
>zipArchiveOutputStream.setUseZip64(Zip64Mode.Always);
>for (int i = 0; i < inputStreams.size(); i++) {
>ZipArchiveEntry zipArchiveEntry = new 
> ZipArchiveEntry("./dir/myfile.txt");
>zipArchiveEntry.setMethod(ZipEntry.DEFLATED);
>final var inputStream = inputStreams.get(i);
>zipCreator.addArchiveEntry(zipArchiveEntry, () -> inputStream);
>}
>zipCreator.writeTo(zipArchiveOutputStream);
>zipArchiveOutputStream.close(); // it will throw NullPointerException 
> here
>}  {code}
> Exception:
> {code:java}
> /* java.lang.NullPointerException at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream$EntryMetaData.access$800(ZipArchiveOutputStream.java:1998)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.createCentralFileHeader(ZipArchiveOutputStream.java:1356)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.writeCentralDirectoryInChunks(ZipArchiveOutputStream.java:580)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.finish(ZipArchiveOutputStream.java:546)
>  at 
> org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream.close(ZipArchiveOutputStream.java:1090)
>  at 
> com.xxx.yyy.impl.backuprestore.backup.container.StreamZipWriterTest.shouldThrowDueToRaceConditionInZipArchiveOutputStream(StreamZipWriterTest.java:130)
>  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.base/java.lang.reflect.Method.invoke(Method.java:566) at 
> org.junit.runners.model.Framewor

[jira] [Commented] (CONFIGURATION-828) Configuration file truncated then there isn't enough disk space for writing

2023-01-24 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/CONFIGURATION-828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17680190#comment-17680190
 ] 

Gary D. Gregory commented on CONFIGURATION-828:
---

I agree that this feels out of scope.
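
For anyone needing the workaround mentioned in the report, a rough sketch of 
extending DefaultFileSystem might look like the following; the free-space 
threshold, the class name, and the exception choice are assumptions, not part 
of Commons Configuration:
{code:java}
import java.io.File;
import java.io.OutputStream;

import org.apache.commons.configuration2.ex.ConfigurationException;
import org.apache.commons.configuration2.io.DefaultFileSystem;

// Sketch: refuse to open (and thus truncate) the target file when the volume
// looks too full. The 1 MiB threshold is an arbitrary assumption.
public class SpaceCheckingFileSystem extends DefaultFileSystem {

    private static final long MIN_FREE_BYTES = 1024 * 1024;

    @Override
    public OutputStream getOutputStream(final File file) throws ConfigurationException {
        final File dir = file.getAbsoluteFile().getParentFile();
        if (dir != null && dir.getUsableSpace() < MIN_FREE_BYTES) {
            throw new ConfigurationException("Not enough disk space to save " + file);
        }
        return super.getOutputStream(file);
    }
}
{code}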

> Configuration file truncated then there isn't enough disk space for writing
> ---
>
> Key: CONFIGURATION-828
> URL: https://issues.apache.org/jira/browse/CONFIGURATION-828
> Project: Commons Configuration
>  Issue Type: Bug
>  Components: File reloading
>Affects Versions: 2.8.0
>Reporter: Alla Gofman
>Priority: Major
>
> Please consider:
> 1) Not creating the output stream to the file when there is not enough disk 
> space for writing its content on save.
> 2) Enabling registration of a FileHandlerListener that is notified of such an 
> issue and can react by throwing an Exception. 
> The FileHandlerListener.saving(FileHandler handler) method is not suitable, 
> because it is called after the file has already been truncated.
> * Meanwhile, the workaround is to extend the 
> DefaultFileSystem.getOutputStream(File file) method.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (COMPRESS-636) Add Builder Class for TarEntry

2023-01-24 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17680202#comment-17680202
 ] 

Gary D. Gregory commented on COMPRESS-636:
--

[~belugabehr] 

Feel free to provide a PR on GitHub, with tests of course ;)

> Add Builder Class for TarEntry
> --
>
> Key: COMPRESS-636
> URL: https://issues.apache.org/jira/browse/COMPRESS-636
> Project: Commons Compress
>  Issue Type: New Feature
>Reporter: David Mollitor
>Priority: Major
>
> When looking at the TAR documentation, the suggested usage is:
>  
> {code:java}
> TarArchiveEntry entry = new TarArchiveEntry(name);
> entry.setSize(size);
> tarOutput.putArchiveEntry(entry);
> tarOutput.write(contentOfEntry);
> tarOutput.closeArchiveEntry();{code}
> [https://commons.apache.org/proper/commons-compress/examples.html]
>  
> In this use case, it would be nice if the TarArchiveEntry class took a _size_ 
> parameter as part of the constructor, but even better would be a Builder 
> class for general ArchiveEntry creation. 
> For example:
> {code:java}
> ArchiveEntryBuilder.newTarArchiveEntry().setName(name).setSize(size).build() 
> {code}
> One could imagine a different ArchiveEntryBuilder.newXxxArchiveEntry() for 
> each type available in the commons compress package.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (COMPRESS-636) Add Builder Class for TarEntry

2023-01-24 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17680202#comment-17680202
 ] 

Gary D. Gregory edited comment on COMPRESS-636 at 1/24/23 12:51 PM:


[~belugabehr] 

Feel free to provide a PR on GitHub, with tests of course ;)

I've seen this pattern plenty:
TarArchiveEntry entry = 
TarArchiveEntry.builder().setName(name).setSize(size).build()


was (Author: garydgregory):
[~belugabehr] 

Feel free to provide a PR on GitHub, with tests of course ;)

> Add Builder Class for TarEntry
> --
>
> Key: COMPRESS-636
> URL: https://issues.apache.org/jira/browse/COMPRESS-636
> Project: Commons Compress
>  Issue Type: New Feature
>Reporter: David Mollitor
>Priority: Major
>
> When looking at the TAR documentation, the suggested usage is:
>  
> {code:java}
> TarArchiveEntry entry = new TarArchiveEntry(name);
> entry.setSize(size);
> tarOutput.putArchiveEntry(entry);
> tarOutput.write(contentOfEntry);
> tarOutput.closeArchiveEntry();{code}
> [https://commons.apache.org/proper/commons-compress/examples.html]
>  
> In this use case, it would be nice if the TarArchiveEntry class took a _size_ 
> parameter as part of the constructor, but even better would be a Builder 
> class for general ArchiveEntry creation. 
> For example:
> {code:java}
> ArchiveEntryBuilder.newTarArchiveEntry().setName(name).setSize(size).build() 
> {code}
> One could imagine a different ArchiveEntryBuilder.newXxxArchiveEntry() for 
> each type available in the commons compress package.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (COMPRESS-636) Add Builder Class for TarEntry

2023-01-24 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17680202#comment-17680202
 ] 

Gary D. Gregory edited comment on COMPRESS-636 at 1/24/23 12:51 PM:


[~belugabehr] 

Feel free to provide a PR on GitHub, with tests of course ;)

I've seen this pattern plenty:
{code:java}
TarArchiveEntry entry = 
TarArchiveEntry.builder().setName(name).setSize(size).build();
{code}
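
For illustration, a rough sketch of what such a fluent builder could look like; 
{{TarArchiveEntry}} has no such builder today, so the builder class below is 
purely hypothetical:
{code:java}
import org.apache.commons.compress.archivers.tar.TarArchiveEntry;

// Hypothetical builder wrapper; not part of Commons Compress.
public final class TarArchiveEntryBuilder {

    private String name;
    private long size;

    public static TarArchiveEntryBuilder builder() {
        return new TarArchiveEntryBuilder();
    }

    public TarArchiveEntryBuilder setName(final String name) {
        this.name = name;
        return this;
    }

    public TarArchiveEntryBuilder setSize(final long size) {
        this.size = size;
        return this;
    }

    public TarArchiveEntry build() {
        final TarArchiveEntry entry = new TarArchiveEntry(name);
        entry.setSize(size); // setSize is an existing TarArchiveEntry method
        return entry;
    }
}
{code}
Usage would then read 
{{TarArchiveEntryBuilder.builder().setName(name).setSize(size).build()}}.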


was (Author: garydgregory):
[~belugabehr] 

Feel free to provide a PR on GitHub, with tests of course ;)

I've seen this pattern plenty:
TarArchiveEntry entry = 
TarArchiveEntry.builder().setName(name).setSize(size).build()

> Add Builder Class for TarEntry
> --
>
> Key: COMPRESS-636
> URL: https://issues.apache.org/jira/browse/COMPRESS-636
> Project: Commons Compress
>  Issue Type: New Feature
>Reporter: David Mollitor
>Priority: Major
>
> When looking at the TAR documentation, the suggested usage is:
>  
> {code:java}
> TarArchiveEntry entry = new TarArchiveEntry(name);
> entry.setSize(size);
> tarOutput.putArchiveEntry(entry);
> tarOutput.write(contentOfEntry);
> tarOutput.closeArchiveEntry();{code}
> [https://commons.apache.org/proper/commons-compress/examples.html]
>  
> In this use case, it would be nice if the TarArchiveEntry class took a _size_ 
> parameter as part of the constructor, but even better would be a Builder 
> class for general ArchiveEntry creation. 
> For example:
> {code:java}
> ArchiveEntryBuilder.newTarArchiveEntry().setName(name).setSize(size).build() 
> {code}
> One could imagine a different ArchiveEntryBuilder.newXxxArchiveEntry() for 
> each type available in the commons compress package.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (COMPRESS-634) Pack200 Fails to Pack Jars with Module Declarations

2023-01-24 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17680210#comment-17680210
 ] 

Gary D. Gregory commented on COMPRESS-634:
--

[~bmarcaur] 

I am guessing there is more to do than updating test fixtures, probably adding 
support so that when the module info is read, it can be written out again.

Feel free to create a PR on GitHub. This is likely to be non-trivial.
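
For context, a minimal sketch of the ASM mechanism involved (not the Commons 
Compress fix): a visitor constructed with an API level of at least 
{{Opcodes.ASM6}} can receive and forward module-info, whereas an older API 
level throws the UnsupportedOperationException seen in the report.
{code:java}
import org.objectweb.asm.ClassReader;
import org.objectweb.asm.ClassVisitor;
import org.objectweb.asm.ClassWriter;
import org.objectweb.asm.ModuleVisitor;
import org.objectweb.asm.Opcodes;

// Copies a module-info.class unchanged. This only works because the visitor is
// built with Opcodes.ASM6 or higher; with ASM5 the visitModule() callback throws.
public final class ModuleInfoCopySketch {

    public static byte[] copy(final byte[] moduleInfoClass) {
        final ClassWriter writer = new ClassWriter(0);
        final ClassVisitor visitor = new ClassVisitor(Opcodes.ASM6, writer) {
            @Override
            public ModuleVisitor visitModule(final String name, final int access, final String version) {
                // Forward the module declaration so it can be written out again.
                return super.visitModule(name, access, version);
            }
        };
        new ClassReader(moduleInfoClass).accept(visitor, 0);
        return writer.toByteArray();
    }
}
{code}
The actual change inside the pack200 code would of course look different; the 
point here is only the API level.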

 

> Pack200 Fails to Pack Jars with Module Declarations
> ---
>
> Key: COMPRESS-634
> URL: https://issues.apache.org/jira/browse/COMPRESS-634
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Compressors
>Affects Versions: 1.21
>Reporter: Brandon Marc-Aurele
>Priority: Major
>
> Using commons-compress version 1.21+ (move to native pack algorithm) fails to 
> pack jars with module declarations. It fails with the following 
> error/stacktrace:
> {code:java}
> java.lang.UnsupportedOperationException: Module requires ASM6
>     at org.objectweb.asm.ClassVisitor.visitModule(ClassVisitor.java:153)
>     at 
> org.objectweb.asm.ClassReader.readModuleAttributes(ClassReader.java:781)
>     at org.objectweb.asm.ClassReader.accept(ClassReader.java:580)
>     at 
> org.apache.commons.compress.harmony.pack200.Segment.processClasses(Segment.java:160)
>     at 
> org.apache.commons.compress.harmony.pack200.Segment.pack(Segment.java:110)
>     at 
> org.apache.commons.compress.harmony.pack200.Archive.doNormalPack(Archive.java:128)
>     at 
> org.apache.commons.compress.harmony.pack200.Archive.pack(Archive.java:98) 
> {code}
> [Here|https://github.com/bmarcaur/commons-compress/commit/ff9af724d668c67b2746775711cdaed123bd42bb]
>  is a minimal repro commit. I used the {{HelloWorld.java}} and added an empty 
> module like:
> {code:java}
> module reproModule {} {code}
> These are the contents of {{{}hw-module.jar{}}}.
> I ran into this issue trying to move a large internal repo from 1.20 to 1.22 
> following our back and forth on COMPRESS-582.
> Just as a matter of exploration I also attempted to change the ASM level 
> [within 
> commons-compress|https://github.com/bmarcaur/commons-compress/commit/f782c0160e10e67d6c294f69a57e06c549c8c0ef],
>  but doing so caused a number of tests to fail locally due to the jars not 
> being identical after a packing round trip. I am not sure of the SOP for 
> updating the comparison jars, or whether this is indicative of an issue with 
> the change.
> To prove that increasing the ASM level allows us to pack jars with modules [I 
> extended our previous repro with a new test exemplifying a _very_ hacky 
> "fix".|https://github.com/AlexLandau/commons-compress-asm-error/blob/develop/src/test/java/com/github/alexlandau/commonscompressrepro/ReprosTest.java#L29]
> Let me know if there is any other context I can provide. Thank you!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (COMPRESS-634) Pack200 Fails to Pack Jars with Module Declarations

2023-01-24 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17680210#comment-17680210
 ] 

Gary D. Gregory edited comment on COMPRESS-634 at 1/24/23 1:03 PM:
---

[~bmarcaur] 

I am guessing there is more to do than updating test fixtures, probably adding 
support so that when the module info is read, it can be written out again.

Feel free to create a PR on GitHub.

 


was (Author: garydgregory):
[~bmarcaur] 

I am guessing there is more to do than updating test fixtures, probably adding 
support so that when the module info is read, it can be written out again.

Feel free to create a PR on GitHub. This is likely to be non-trivial.

 

> Pack200 Fails to Pack Jars with Module Declarations
> ---
>
> Key: COMPRESS-634
> URL: https://issues.apache.org/jira/browse/COMPRESS-634
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Compressors
>Affects Versions: 1.21
>Reporter: Brandon Marc-Aurele
>Priority: Major
>
> Using commons-compress version 1.21+ (move to native pack algorithm) fails to 
> pack jars with module declarations. It fails with the following 
> error/stacktrace:
> {code:java}
> java.lang.UnsupportedOperationException: Module requires ASM6
>     at org.objectweb.asm.ClassVisitor.visitModule(ClassVisitor.java:153)
>     at 
> org.objectweb.asm.ClassReader.readModuleAttributes(ClassReader.java:781)
>     at org.objectweb.asm.ClassReader.accept(ClassReader.java:580)
>     at 
> org.apache.commons.compress.harmony.pack200.Segment.processClasses(Segment.java:160)
>     at 
> org.apache.commons.compress.harmony.pack200.Segment.pack(Segment.java:110)
>     at 
> org.apache.commons.compress.harmony.pack200.Archive.doNormalPack(Archive.java:128)
>     at 
> org.apache.commons.compress.harmony.pack200.Archive.pack(Archive.java:98) 
> {code}
> [Here|https://github.com/bmarcaur/commons-compress/commit/ff9af724d668c67b2746775711cdaed123bd42bb]
>  is a minimal repro commit. I used the {{HelloWorld.java}} and added an empty 
> module like:
> {code:java}
> module reproModule {} {code}
> These are the contents of {{{}hw-module.jar{}}}.
> I ran into this issue trying to move a large internal repo from 1.20 to 1.22 
> following our back and forth on COMPRESS-582.
> Just as a matter of exploration I also attempted to change the ASM level 
> [within 
> commons-compress|https://github.com/bmarcaur/commons-compress/commit/f782c0160e10e67d6c294f69a57e06c549c8c0ef],
>  but doing so caused a number of tests to fail locally due to the jars not 
> being identical after a packing round trip. I am not sure of the SOP for 
> updating the comparison jars, or whether this is indicative of an issue with 
> the change.
> To prove that increasing the ASM level allows us to pack jars with modules [I 
> extended our previous repro with a new test exemplifying a _very_ hacky 
> "fix".|https://github.com/AlexLandau/commons-compress-asm-error/blob/develop/src/test/java/com/github/alexlandau/commonscompressrepro/ReprosTest.java#L29]
> Let me know if there is any other context I can provide. Thank you!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (CONFIGURATION-828) Configuration file truncated then there isn't enough disk space for writing

2023-01-25 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/CONFIGURATION-828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory closed CONFIGURATION-828.
-
Resolution: Won't Do

Closing.

> Configuration file truncated then there isn't enough disk space for writing
> ---
>
> Key: CONFIGURATION-828
> URL: https://issues.apache.org/jira/browse/CONFIGURATION-828
> Project: Commons Configuration
>  Issue Type: Bug
>  Components: File reloading
>Affects Versions: 2.8.0
>Reporter: Alla Gofman
>Priority: Major
>
> Please consider:
> 1) Not creating the output stream to the file when there is not enough disk 
> space for writing its content on save.
> 2) Enabling registration of a FileHandlerListener that is notified of such an 
> issue and can react by throwing an Exception. 
> The FileHandlerListener.saving(FileHandler handler) method is not suitable, 
> because it is called after the file has already been truncated.
> * Meanwhile, the workaround is to extend the 
> DefaultFileSystem.getOutputStream(File file) method.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (VFS-831) No such file or directory

2023-01-25 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/VFS-831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17680796#comment-17680796
 ] 

Gary D. Gregory commented on VFS-831:
-

I'm not sure what you expect we can do without a reproducible use case or, 
better yet, a unit test.
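
As a starting point, a minimal, self-contained reproduction sketch along the 
lines of the report might look like this (the local path is taken from the 
report; everything else is an assumption):
{code:java}
import org.apache.commons.vfs2.FileContent;
import org.apache.commons.vfs2.FileObject;
import org.apache.commons.vfs2.FileSystemException;
import org.apache.commons.vfs2.VFS;

// Sketch of a standalone reproduction: resolve the local file and read its content.
public class Vfs831Repro {
    public static void main(final String[] args) throws FileSystemException {
        final FileObject file = VFS.getManager().resolveFile("/var/tmp/reminders.json.xz");
        final FileContent content = file.getContent();
        System.out.println("Size: " + content.getSize());
    }
}
{code}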

> No such file or directory
> -
>
> Key: VFS-831
> URL: https://issues.apache.org/jira/browse/VFS-831
> Project: Commons VFS
>  Issue Type: Bug
>Affects Versions: 2.9.0
> Environment: OpenJDK 14 FreeBSD 13.1-RELEASE
>Reporter: Hasan Diwan
>Priority: Blocker
>
> % ls -l /var/tmp/reminders.json.xz
> -rw-r--r--  1 ec2-user  wheel  32 Jan 25 19:28 /var/tmp/reminders.json.xz
> % hostname
> FileContent content = 
> manager.resolveFile("/var/tmp/reminders.json.xz").getContent();
> throws an "org.apache.commons.vfs2.FileNotFoundException: Could not read from 
> "file:///var/tmp/reminders.json.xz" because it is not a file."
> When I try accessing it with "sftp", it gives:
> org.apache.commons.vfs2.FileSystemException: Could not find file with URI 
> "sftp://ec2-u...@hasan.d8u.us//var/tmp/reminders.json.xz"; because it is a 
> relative path, and no base URI was provided.
> It gives the same if I eliminate the double slash in the URL as well.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (VFS-831) No such file or directory

2023-01-25 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/VFS-831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17680839#comment-17680839
 ] 

Gary D. Gregory commented on VFS-831:
-

If you create a PR on GitHub with a failing test, then you would not have to 
deal with editing whatever pom.xml you are now editing. 

> No such file or directory
> -
>
> Key: VFS-831
> URL: https://issues.apache.org/jira/browse/VFS-831
> Project: Commons VFS
>  Issue Type: Bug
>Affects Versions: 2.9.0
> Environment: OpenJDK 14 FreeBSD 13.1-RELEASE
>Reporter: Hasan Diwan
>Priority: Blocker
> Attachments: id_ecdsa.pub
>
>
> % ls -l /var/tmp/reminders.json.xz
> -rw-r--r--  1 ec2-user  wheel  32 Jan 25 19:28 /var/tmp/reminders.json.xz
> % hostname
> FileContent content = 
> manager.resolveFile("/var/tmp/reminders.json.xz").getContent();
> throws an "org.apache.commons.vfs2.FileNotFoundException: Could not read from 
> "file:///var/tmp/reminders.json.xz" because it is not a file."
> When I try accessing it with "sftp", it gives:
> org.apache.commons.vfs2.FileSystemException: Could not find file with URI 
> "sftp://ec2-u...@hasan.d8u.us//var/tmp/reminders.json.xz"; because it is a 
> relative path, and no base URI was provided.
> It gives the same if I eliminate the double slash in the URL as well.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (CLI-317) NullPointerException thrown by CommandLineParser.parse()

2023-01-27 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/CLI-317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17681310#comment-17681310
 ] 

Gary D. Gregory commented on CLI-317:
-

Hello [~PhBastiani] 

Thank you for your report.

The most expedient way to move this forward and avoid confusion would be to 
create a standalone test method or class that anyone can paste into the git 
master code base. To that effect, please attach a patch/diff or create a PR on 
GitHub for a failing test.

TY
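
Something along these lines is what is being asked for; the sketch below only 
mirrors the options from the report, and the command-line arguments are an 
assumption that would need to be adjusted to match the original test:
{code:java}
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.DefaultParser;
import org.apache.commons.cli.Option;
import org.apache.commons.cli.Options;
import org.junit.jupiter.api.Test;

public class Cli317ReproTest {

    @Test
    public void testShortAndLongOptionWithOptionalArg() throws Exception {
        final Options options = new Options();
        options.addOption(Option.builder().option("f").longOpt("foo").optionalArg(true).build());
        options.addOption(Option.builder().option("b").longOpt("bar").optionalArg(false).build());
        // The token below is a guess at what reaches handleShortAndLongOption();
        // replace it with the arguments from testAmbiguousLongWithoutEqualSingleDash.
        final CommandLine cl = new DefaultParser().parse(options, new String[] { "-foo=value" });
        System.out.println(cl.getOptionValue("f"));
    }
}
{code}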

> NullPointerException thrown by CommandLineParser.parse()
> 
>
> Key: CLI-317
> URL: https://issues.apache.org/jira/browse/CLI-317
> Project: Commons CLI
>  Issue Type: Bug
>  Components: Parser
>Affects Versions: 1.5
>Reporter: Philippe Bastiani
>Priority: Critical
>  Labels: exception
>
> # First of all, your *testAmbiguousLongWithoutEqualSingleDash* does not lead 
> to an AmbiguousOptionException
>  # For this same test, if I replace the deprecated OptionBuilder with 
> Option.Builder as follows
> {code:java}
> options.addOption(Option.builder().option("f").longOpt("foo").optionalArg(true).build());
> options.addOption(Option.builder().option("b").longOpt("bar").optionalArg(false).build());
> {code}
> the test leads to an NPE
> {code:java}
> java.lang.NullPointerException
>   at xx.DefaultParser.handleShortAndLongOption(DefaultParser.java:476)
>   at xx.DefaultParser.handleToken(DefaultParser.java:535)
>   at xx.DefaultParser.parse(DefaultParser.java:714)
>   at xx.DefaultParser.parse(DefaultParser.java:677)
>   at xx.DefaultParser.parse(DefaultParser.java:654)
> {code}
> _Note_  : tested with Github code (January 22, 2023)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (DAEMON-452) Should we create the target folder before apxSecurityGrantFileAccessToUser?

2023-01-27 Thread Gary D. Gregory (Jira)
Gary D. Gregory created DAEMON-452:
--

 Summary: Should we create the target folder before 
apxSecurityGrantFileAccessToUser?
 Key: DAEMON-452
 URL: https://issues.apache.org/jira/browse/DAEMON-452
 Project: Commons Daemon
  Issue Type: Improvement
  Components: prunsrv
Affects Versions: 1.3.3
 Environment: Windows
Reporter: Gary D. Gregory


Our API apxSecurityGrantFileAccessToUser will fail if the target folder does 
not exist.

Shouldn't we create this folder in advance of this call?

Names and paths obfuscated in this example:
{code}
[2023-01-27 13:55:39] [info]  ( prunsrv.c:2018) [ 7340] Apache Commons Daemon 
procrun (1.3.3.0 64-bit) started.
[2023-01-27 13:55:39] [debug] ( prunsrv.c:774 ) [ 7340] Installing service...
[2023-01-27 13:55:39] [info]  ( prunsrv.c:831 ) [ 7340] Installing service 
'ABC' name 'XYZ'.
[2023-01-27 13:55:39] [debug] ( prunsrv.c:860 ) [ 7340] Setting service 
description 'XYZ'.
[2023-01-27 13:55:39] [warn]  ( prunsrv.c:759 ) [ 7340] Failed to grant service 
user 'NT AUTHORITY\LocalService' write permissions to log path 
'C:\ProgramData\Example Company\Example Product\10.3991.0.0\Default\logs' due 
to error '2: The system cannot find the file specified.'
[2023-01-27 13:55:39] [info]  ( prunsrv.c:882 ) [ 7340] Service 'ABC' installed.
[2023-01-27 13:55:39] [info]  ( prunsrv.c:2102) [ 7340] Apache Commons Daemon 
procrun finished.
Service 'XYZ' was installed succesfully
{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (DAEMON-453) Add support for wildcard classpath in java mode

2023-01-30 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/DAEMON-453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17682178#comment-17682178
 ] 

Gary D. Gregory commented on DAEMON-453:


Hello [~thomasdewaelheyns] 

Thank you for your ticket. 

Feel free to provide a PR on GitHub ;)

> Add support for wildcard classpath in java mode
> ---
>
> Key: DAEMON-453
> URL: https://issues.apache.org/jira/browse/DAEMON-453
> Project: Commons Daemon
>  Issue Type: Improvement
>  Components: Procrun
>Affects Versions: 1.3.3
>Reporter: Thomas De Waelheyns
>Priority: Minor
>
> Classpaths with wildcards are currently supported only in jvm mode of the 
> launcher, according to this [mailing list 
> thread|https://lists.apache.org/thread/xz3v6p6xcw8hcp0rm1yt9gd7xg9oryvf]. The 
> jvm mode support was implemented in DAEMON-166.
> Since the code to expand the wildcard to actual references to jar files was 
> already written, wouldn't it make sense to expand this to also include java 
> mode?
>  
> Relevant commits:
> Windows: 
> [https://github.com/apache/commons-daemon/commit/6c0758fc052188dead563e4ce776a5da6e34acb9]
> Unix: 
> [https://github.com/apache/commons-daemon/commit/5997b1355ecc2fe0bcf3608e33195e5c2968931e]
>  
> For Windows, in javajni.c, lines 915-920 look like a good place to call 
> __apxEvalClasspath, which expands any wildcard paths into a complete list.
> {noformat}
> if (szClassPath) {
>     p = (LPWSTR)apxPoolAlloc(hPool, (lstrlenW(JAVA_CLASSPATH_W) + 
> lstrlenW(szClassPath)) * sizeof(WCHAR));
>     lstrcpyW(p, JAVA_CLASSPATH_W);
>     lstrcatW(p, szClassPath);
>     (*lppArray)[i++] = p;
> }{noformat}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (DBCP-524) Thread is blocked when i using dbcp 1.0 to connect mysql

2023-01-30 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/DBCP-524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory closed DBCP-524.

Resolution: Abandoned

> Thread is blocked when i using dbcp 1.0 to connect mysql
> 
>
> Key: DBCP-524
> URL: https://issues.apache.org/jira/browse/DBCP-524
> Project: Commons DBCP
>  Issue Type: Bug
>Affects Versions: 1.0
>Reporter: guominzhi
>Priority: Blocker
>
> When I use DBCP 1.0 to connect to MySQL, one thread is always blocked by 
> another thread.
> The log is below:
> "schedulerFactoryBean_Worker-8" J9VMThread:0x04047300, 
> j9thread_t:0x7F66B7232200, java/lang/Thread:0x0007E90CBA48, state:R, 
> prio=5
> 3XMJAVALTHREAD (java/lang/Thread getId:0x8A, isDaemon:false)
> 3XMTHREADINFO1 (native thread ID:0x2452, native priority:0x5, native 
> policy:UNKNOWN)
> 3XMTHREADINFO2 (native stack address range from:0x7F66B6FE9000, 
> to:0x7F66B702A000, size:0x41000)
> 3XMHEAPALLOC Heap bytes allocated since last GC cycle=0 (0x0)
> 3XMTHREADINFO3 Java callstack:
> 4XESTACKTRACE at java/net/SocketInputStream.socketRead0(Native Method)
> 4XESTACKTRACE at 
> java/net/SocketInputStream.read(SocketInputStream.java:140(Compiled Code))
> 4XESTACKTRACE at com/mysql/jdbc/MysqlIO.readFully(MysqlIO.java:1391(Compiled 
> Code))
> 4XESTACKTRACE at 
> com/mysql/jdbc/MysqlIO.reuseAndReadPacket(MysqlIO.java:1538(Compiled Code))
> 4XESTACKTRACE at 
> com/mysql/jdbc/MysqlIO.checkErrorPacket(MysqlIO.java:1929(Compiled Code))
> 4XESTACKTRACE at 
> com/mysql/jdbc/MysqlIO.sendCommand(MysqlIO.java:1167(Compiled Code))
> 4XESTACKTRACE at 
> com/mysql/jdbc/MysqlIO.sqlQueryDirect(MysqlIO.java:1278(Compiled Code))
> 4XESTACKTRACE at com/mysql/jdbc/MysqlIO.sqlQuery(MysqlIO.java:1224(Compiled 
> Code))
> 4XESTACKTRACE at 
> com/mysql/jdbc/Connection.execSQL(Connection.java:2248(Compiled Code))
> 5XESTACKTRACE (entered lock: java/lang/Object@0x0007EB01D990, entry 
> count: 2)
> 4XESTACKTRACE at 
> com/mysql/jdbc/Connection.execSQL(Connection.java:2196(Compiled Code))
> 4XESTACKTRACE at 
> com/mysql/jdbc/Statement.executeQuery(Statement.java:1163(Compiled Code))
> 5XESTACKTRACE (entered lock: java/lang/Object@0x0007EB01D990, entry 
> count: 1)
> 5XESTACKTRACE (entered lock: com/mysql/jdbc/Statement@0x0007EA1168C0, 
> entry count: 1)
> 4XESTACKTRACE at 
> org/apache/commons/dbcp/DelegatingStatement.executeQuery(DelegatingStatement.java:162(Compiled
>  Code))
> 4XESTACKTRACE at 
> org/apache/commons/dbcp/PoolableConnectionFactory.validateObject(PoolableConnectionFactory.java:221(Compiled
>  Code))
> 5XESTACKTRACE (entered lock: 
> org/apache/commons/dbcp/PoolableConnectionFactory@0x0007EB01D3F8, entry 
> count: 1)
> 4XESTACKTRACE at 
> org/apache/commons/pool/impl/GenericObjectPool.addObjectToPool(GenericObjectPool.java:1415(Compiled
>  Code))
> 4XESTACKTRACE at 
> org/apache/commons/pool/impl/GenericObjectPool.returnObject(GenericObjectPool.java:1381(Compiled
>  Code))
> 4XESTACKTRACE at 
> org/apache/commons/dbcp/AbandonedObjectPool.returnObject(AbandonedObjectPool.java:140(Compiled
>  Code))
> 5XESTACKTRACE (entered lock: 
> org/apache/commons/dbcp/AbandonedObjectPool@0x0007EB01D368, entry count: 
> 1)
> 4XESTACKTRACE at 
> org/apache/commons/dbcp/PoolableConnection.close(PoolableConnection.java:110(Compiled
>  Code))
> 4XESTACKTRACE at 
> com/sms/baseclasses/BaseJdbcDAO.query(BaseJdbcDAO.java:49(Compiled Code))
> 4XESTACKTRACE at 
> com/sms/service/GatewayJdbcService.QueryBySqlEx(GatewayJdbcService.java:124(Compiled
>  Code))
> 4XESTACKTRACE at 
> com/sms/gateway/SmsChanneMASlImpl.receiveRPTs(SmsChanneMASlImpl.java:114(Compiled
>  Code))
> 4XESTACKTRACE at 
> com/sms/service/SmsReceiveSentReportService.receiveReports(SmsReceiveSentReportService.java:54(Compiled
>  Code))
> 4XESTACKTRACE at 
> com/sms/service/SmsReceiveSentReportService$$FastClassByCGLIB$$5201ea4.invoke((Compiled
>  Code))
> 4XESTACKTRACE at 
> net/sf/cglib/proxy/MethodProxy.invoke(MethodProxy.java:149(Compiled Code))
> 4XESTACKTRACE at 
> org/springframework/aop/framework/Cglib2AopProxy$CglibMethodInvocation.invokeJoinpoint(Cglib2AopProxy.java:700(Compiled
>  Code))
> 4XESTACKTRACE at 
> org/springframework/aop/framework/ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149(Compiled
>  Code))
> 4XESTACKTRACE at 
> org/springframework/transaction/interceptor/TransactionInterceptor.invoke(TransactionInterceptor.java:106(Compiled
>  Code))
> 4XESTACKTRACE at 
> org/springframework/aop/framework/ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171(Compiled
>  Code))
> 4XESTACKTRACE at 
> org/springframework/aop/framework/Cglib2AopProxy$DynamicAdvisedInterceptor.intercept(Cglib2AopProxy.java:635(Compiled
>  Code))
> 4XESTACKTRACE at 
> com/sms/service/SmsReceiveSentReportServi

[jira] [Closed] (DBCP-561) NullPointerException occurs

2023-01-30 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/DBCP-561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory closed DBCP-561.

Resolution: Abandoned

> NullPointerException occurs
> ---
>
> Key: DBCP-561
> URL: https://issues.apache.org/jira/browse/DBCP-561
> Project: Commons DBCP
>  Issue Type: Bug
>Affects Versions: 1.4
>Reporter: Yasufumi Hayashi
>Priority: Major
> Attachments: PoolingTest2.java
>
>
> The following error sometimes occurs with the attached program.
>  * java.lang.NullPointerException
> at 
> org.apache.commons.dbcp.cpdsadapter.PooledConnectionImpl.prepareStatement(PooledConnectionImpl.java:243)
> at 
> org.apache.commons.dbcp.cpdsadapter.ConnectionImpl.prepareStatement(ConnectionImpl.java:95)
>  at PoolingThread2.selectSysdate(PoolingTest2.java:173)
> at PoolingThread2.run(PoolingTest2.java:147)
>  * java.sql.SQLRecoverableException: クローズされた接続です。 (The connection is closed.)
> at 
> oracle.jdbc.driver.PhysicalConnection.prepareStatementInternal(PhysicalConnection.java:1994)
> at 
> oracle.jdbc.driver.PhysicalConnection.prepareStatement(PhysicalConnection.java:1960)
>  at 
> oracle.jdbc.driver.PhysicalConnection.prepareStatement(PhysicalConnection.java:1866)
>  at 
> org.apache.commons.dbcp.cpdsadapter.PooledConnectionImpl.prepareStatement(PooledConnectionImpl.java:243)
>  
> at 
> org.apache.commons.dbcp.cpdsadapter.ConnectionImpl.prepareStatement(ConnectionImpl.java:95)
>  
> at PoolingThread2.selectSysdate(PoolingTest2.java:173) 
> at PoolingThread2.run(PoolingTest2.java:147)
>  
> Does not occur with dbcp 1.2.2, but occurs with dbcp 1.4.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (DBCP-580) Not compatible with org.dbunit.IDatabaseTester

2023-01-30 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/DBCP-580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory closed DBCP-580.

Resolution: Abandoned

> Not compatible with org.dbunit.IDatabaseTester 
> ---
>
> Key: DBCP-580
> URL: https://issues.apache.org/jira/browse/DBCP-580
> Project: Commons DBCP
>  Issue Type: Bug
>Affects Versions: 2.7.0, 2.9.0
> Environment: Maven test dependencies: 
> com.github.springtestdbunit:spring-test-dbunit:1.3.0, 
> org.dbunit:dbunit:2.7.2
>Reporter: Arelowo
>Priority: Major
>
> We tried to upgrade to the latest version from 1.4 but it seems to be causing 
> an issue with our db unit test. 
>  
> We get the following printed over and over again when the onSetup() method is 
> invoked
> {code:java}
> at java.lang.StringBuilder.append(StringBuilder.java:131)at 
> java.lang.StringBuilder.append(StringBuilder.java:131) at 
> org.apache.commons.pool2.impl.GenericKeyedObjectPool.toStringAppendFields(GenericKeyedObjectPool.java:1593)
>  at org.apache.commons.pool2.BaseObject.toString(BaseObject.java:31) at 
> org.apache.commons.dbcp2.PoolingConnection.toString(PoolingConnection.java:584)
>  at java.lang.String.valueOf(String.java:2994) at 
> java.lang.StringBuilder.append(StringBuilder.java:131) at 
> org.apache.commons.pool2.impl.GenericKeyedObjectPool.toStringAppendFields(GenericKeyedObjectPool.java:1593)
>  at org.apache.commons.pool2.BaseObject.toString(BaseObject.java:31) at 
> org.apache.commons.dbcp2.PoolingConnection.toString(PoolingConnection.java:584)
>  at java.lang.String.valueOf(String.java:2994) at 
> java.lang.StringBuilder.append(StringBuilder.java:131) at 
> org.apache.commons.pool2.impl.GenericKeyedObjectPool.toStringAppendFields(GenericKeyedObjectPool.java:1593)
>  at org.apache.commons.pool2.BaseObject.toString(BaseObject.java:31) at 
> org.apache.commons.dbcp2.PoolingConnection.toString(PoolingConnection.java:584)
>  at java.lang.String.valueOf(String.java:2994) at 
> java.lang.StringBuilder.append(StringBuilder.java:131) at 
> org.apache.commons.pool2.impl.GenericKeyedObjectPool.toStringAppendFields(GenericKeyedObjectPool.java:1593)
>  at org.apache.commons.pool2.BaseObject.toString(BaseObject.java:31) at 
> org.apache.commons.dbcp2.PoolingConnection.toString(PoolingConnection.java:584)
>  at java.lang.String.valueOf(String.java:2994) at 
> java.lang.StringBuilder.append(StringBuilder.java:131) at 
> org.apache.commons.pool2.impl.GenericKeyedObjectPool.toStringAppendFields(GenericKeyedObjectPool.java:1593)
>  at org.apache.commons.pool2.BaseObject.toString(BaseObject.java:31) at 
> org.apache.commons.dbcp2.PoolingConnection.toString(PoolingConnection.java:584){code}
>  
> We tested different versions and it seems 2.1.1 does not have this bug.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (DBCP-489) Add ability to export/import messages from file in CLI

2023-01-30 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/DBCP-489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory closed DBCP-489.

Resolution: Abandoned

> Add ability to export/import messages from file in CLI
> --
>
> Key: DBCP-489
> URL: https://issues.apache.org/jira/browse/DBCP-489
> Project: Commons DBCP
>  Issue Type: New Feature
>Reporter: Martyn Taylor
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (DBCP-470) Create a new BasicDataSource, the database hasn't start, the DBCP's pool will be leaked

2023-01-30 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/DBCP-470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory closed DBCP-470.

Resolution: Information Provided

> Create a new BasicDataSource, the database hasn't start, the DBCP's pool will 
> be leaked
> ---
>
> Key: DBCP-470
> URL: https://issues.apache.org/jira/browse/DBCP-470
> Project: Commons DBCP
>  Issue Type: Bug
>Affects Versions: 1.4
> Environment: All environment
>Reporter: jeho0815
>Priority: Major
>  Labels: robustness, security
> Fix For: 1.4.1
>
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> First, the database has not started, then a new BasicDataSource is created. 
> The createDataSource method checks dataSource == null and creates a new 
> dataSource. The first step creates a connectionPool and the second creates a 
> connectionFactory, but validateConnectionFactory throws a 
> SQLNestedException, so the dataSource is null again. The next time, the same 
> steps repeat again and again. The most important issue is that each created 
> connectionPool is referenced by a java.lang.Timer, so it cannot be collected 
> by GC. If minIdle is positive, it will also create connections once the 
> database status is OK.
> In a word, the bug will cause a memory leak and may cause a connection leak.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (DAEMON-452) Should we create the target folder before apxSecurityGrantFileAccessToUser?

2023-01-31 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/DAEMON-452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17682575#comment-17682575
 ] 

Gary D. Gregory commented on DAEMON-452:


All we need is a PR ;)

> Should we create the target folder before apxSecurityGrantFileAccessToUser?
> ---
>
> Key: DAEMON-452
> URL: https://issues.apache.org/jira/browse/DAEMON-452
> Project: Commons Daemon
>  Issue Type: Improvement
>  Components: prunsrv
>Affects Versions: 1.3.3
> Environment: Windows
>Reporter: Gary D. Gregory
>Priority: Major
>
> Our API apxSecurityGrantFileAccessToUser will fail if the target folder does 
> not exist.
> Shouldn't we create this folder in advance of this call?
> Names and paths obfuscated in this example:
> {code}
> [2023-01-27 13:55:39] [info]  ( prunsrv.c:2018) [ 7340] Apache Commons Daemon 
> procrun (1.3.3.0 64-bit) started.
> [2023-01-27 13:55:39] [debug] ( prunsrv.c:774 ) [ 7340] Installing service...
> [2023-01-27 13:55:39] [info]  ( prunsrv.c:831 ) [ 7340] Installing service 
> 'ABC' name 'XYZ'.
> [2023-01-27 13:55:39] [debug] ( prunsrv.c:860 ) [ 7340] Setting service 
> description 'XYZ'.
> [2023-01-27 13:55:39] [warn]  ( prunsrv.c:759 ) [ 7340] Failed to grant 
> service user 'NT AUTHORITY\LocalService' write permissions to log path 
> 'C:\ProgramData\Example Company\Example Product\10.3991.0.0\Default\logs' due 
> to error '2: The system cannot find the file specified.'
> [2023-01-27 13:55:39] [info]  ( prunsrv.c:882 ) [ 7340] Service 'ABC' 
> installed.
> [2023-01-27 13:55:39] [info]  ( prunsrv.c:2102) [ 7340] Apache Commons Daemon 
> procrun finished.
> Service 'XYZ' was installed succesfully
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IO-786) Unsynchronized BufferedInputStream

2023-02-01 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/IO-786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17683054#comment-17683054
 ] 

Gary D. Gregory commented on IO-786:


[~btellier] 

Please use git master and the new class {{UnsynchronizedBufferedInputStream}} 
or a snapshot from 
[https://repository.apache.org/content/repositories/snapshots/commons-io/commons-io/2.12.0-SNAPSHOT/]
 and tell us if that helps.
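
A minimal usage sketch is below; it assumes the 2.12.0-SNAPSHOT builder API 
({{Builder#setInputStream}}, {{#setBufferSize}}, {{#get}}), so the exact method 
names may differ in the released version:
{code:java}
import java.io.IOException;
import java.io.InputStream;

import org.apache.commons.io.input.UnsynchronizedBufferedInputStream;

public class UnsyncBufferedReadExample {

    // Counts bytes up to and including the first newline, reading byte by byte
    // without the monitor acquisition java.io.BufferedInputStream does per read().
    public static int countHeaderBytes(final InputStream source) throws IOException {
        final InputStream in = new UnsynchronizedBufferedInputStream.Builder()
                .setInputStream(source)
                .setBufferSize(8192)
                .get();
        int count = 0;
        int current;
        while ((current = in.read()) != -1) {
            count++;
            if (current == '\n') {
                break; // simplified stand-in for real header-end detection
            }
        }
        return count;
    }
}
{code}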

 

> Unsynchronized BufferedInputStream
> --
>
> Key: IO-786
> URL: https://issues.apache.org/jira/browse/IO-786
> Project: Commons IO
>  Issue Type: Improvement
>  Components: Streams/Writers
> Environment: JRE 19, linux
>Reporter: Benoit Tellier
>Priority: Major
> Attachments: Screenshot from 2023-01-30 09-06-03.png
>
>
> As part of development of Apache James I had the unpleasant surprise to 
> notice that, on modern JVMs, the cost of synchronization skyrocketed (JRE 19).
> In one part of the code, we need to find the exact location of the end of the 
> email header, and for this we need to read an InputStream byte by byte. We of 
> course buffer the inputStream in order to limit potential blocking calls. 
> Profiling showed 73% of the reading time is spent synchronizing on the 
> BufferedInputStream. Thus I am keen on having an 
> UnsynchronizedBufferedInputStream & friends at hand. See attached screenshot.
>  => This is reported upstream, see https://bugs.openjdk.org/browse/JDK-4097272 
> . This was disregarded in lower Java versions but I hope it "could" be 
> reconsidered.
>  => While I can duplicate the class in the Apache James source code and remove 
> the synchronized keywords, this sounds generic enough to fit in commons-io.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (IO-786) Unsynchronized BufferedInputStream

2023-02-01 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/IO-786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory resolved IO-786.

Fix Version/s: 2.12.0
   Resolution: Implemented

> Unsynchronized BufferedInputStream
> --
>
> Key: IO-786
> URL: https://issues.apache.org/jira/browse/IO-786
> Project: Commons IO
>  Issue Type: Improvement
>  Components: Streams/Writers
> Environment: JRE 19, linux
>Reporter: Benoit Tellier
>Priority: Major
> Fix For: 2.12.0
>
> Attachments: Screenshot from 2023-01-30 09-06-03.png
>
>
> As part of development of Apache James I had the unpleasant surprise to 
> notice that, on modern JVMs, the cost of synchronization skyrocketed (JRE 19).
> In one part of the code, we need to find the exact location of the end of the 
> email header, and for this we need to read an InputStream byte by byte. We of 
> course buffer the inputStream in order to limit potential blocking calls. 
> Profiling showed 73% of the reading time is spent synchronizing on the 
> BufferedInputStream. Thus I am keen on having an 
> UnsynchronizedBufferedInputStream & friends at hand. See attached screenshot.
>  => This is reported upstream, see https://bugs.openjdk.org/browse/JDK-4097272 
> . This was disregarded in lower Java versions but I hope it "could" be 
> reconsidered.
>  => While I can duplicate the class in the Apache James source code and remove 
> the synchronized keywords, this sounds generic enough to fit in commons-io.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (VALIDATOR-488) Javadoc error at CheckDigit

2023-02-03 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/VALIDATOR-488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory resolved VALIDATOR-488.
---
Fix Version/s: 1.8
 Assignee: Gary D. Gregory
   Resolution: Fixed

Fixed in git master; TY [~sadguten] 

> Javadoc error at CheckDigit
> ---
>
> Key: VALIDATOR-488
> URL: https://issues.apache.org/jira/browse/VALIDATOR-488
> Project: Commons Validator
>  Issue Type: Bug
>  Components: Framework
>Affects Versions: 1.7
>Reporter: Enrique
>Assignee: Gary D. Gregory
>Priority: Trivial
>  Labels: documentation, easyfix
> Fix For: 1.8
>
>   Original Estimate: 10m
>  Remaining Estimate: 10m
>
> There is a javadoc error, missing '\{' at 
> org.apache.commons.validator.routines.checkdigit.CheckDigit:
> {noformat}
>     CheckDigit is used by the new generic @link CodeValidator} implementation.
> {noformat}
> should be:
> {noformat}
>     CheckDigit is used by the new generic {@link CodeValidator} 
> implementation.
> {noformat}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (VFS-832) Sftp channel not put back in doGetInputStream

2023-02-04 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/VFS-832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory resolved VFS-832.
-
Fix Version/s: 2.10.0
   Resolution: Fixed

[~wangerry] 

Thank you for your report and PR. Please verify git master.

> Sftp channel not put back in doGetInputStream
> -
>
> Key: VFS-832
> URL: https://issues.apache.org/jira/browse/VFS-832
> Project: Commons VFS
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: wangerry
>Priority: Major
> Fix For: 2.10.0
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> *since VFS-210 added this in SftpFileObject.java:*
>  
> {code:java}
> InputStream is;
> try
> {
> // VFS-210: sftp allows to gather an input stream even from a directory 
> and will
> // fail on first read. So we need to check the type anyway
> if (!getType().hasContent())
> {
> throw new FileSystemException("vfs.provider/read-not-file.error", 
> getName());
> }
> is = channel.get(relPath);
> }
> catch (SftpException e)
> {
> if (e.id == ChannelSftp.SSH_FX_NO_SUCH_FILE)
> {
> throw new FileNotFoundException(getName());
> }
> throw new FileSystemException(e);
> }{code}
> When an exception is thrown (such as when the file does not exist, or an 
> input stream is requested for a directory), putChannel() is not called for 
> the channel and it is not closed. 
> When this happens several times (normally 10 times, because the sshd default 
> MaxSession is 10), getChannel() will always throw an exception (Channel is not 
> opened), because the server will not respond to the open request.
>  
> *It can be reproduced in this way:*
>  
> {code:java}
> //Setup our SFTP configuration
> FileSystemOptions opts = new FileSystemOptions();
> SftpFileSystemConfigBuilder instance = 
> SftpFileSystemConfigBuilder.getInstance();
> instance.setStrictHostKeyChecking(opts, "no");
> instance.setUserDirIsRoot(opts, true);
> instance.setConnectTimeout(opts, Duration.ofSeconds(30));
> instance.setSessionTimeout(opts, Duration.ofSeconds(30));
> instance.setDisableDetectExecChannel(opts, true);
> for (int i = 0; i < 15; i++) {
>   try {
> try (FileObject fileObject = 
> VFS.getManager().resolveFile("sftp://f...@example.com/path_not_exists.txt", 
> opts)) {
>   try (InputStream inputStream = 
> fileObject.getContent().getInputStream()) {
> // do something
>   }
> }
>   } catch (Exception e) {
> e.printStackTrace();
>   }
> } {code}
> The first 10 times the error will be "Could not read from "xxx" because it is not a file."
>  
> Then it will be
>  
> {code:java}
> Caused by: com.jcraft.jsch.JSchException: channel is not opened.
>     at com.jcraft.jsch.Channel.sendChannelOpen(Channel.java:768)
>     at com.jcraft.jsch.Channel.connect(Channel.java:151)
>     at 
> org.apache.commons.vfs2.provider.sftp.SftpFileSystem.getChannel(SftpFileSystem.java:213)
>     ... 6 more {code}
>  
> I will submit a PR to fix this
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (BEANUTILS-559) Is there a plan to release a new version because the time before the previous version is too long?

2023-02-20 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/BEANUTILS-559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691165#comment-17691165
 ] 

Gary D. Gregory commented on BEANUTILS-559:
---

Please ask on the mailing list.

> Is there a plan to release a new version because the time before the previous 
> version is too long?
> --
>
> Key: BEANUTILS-559
> URL: https://issues.apache.org/jira/browse/BEANUTILS-559
> Project: Commons BeanUtils
>  Issue Type: Wish
>Reporter: Radar wen
>Priority: Minor
>
> Apache Commons BeanUtils 1.9.4 was released on 2019-08-15. It has been more 
> than three years, and the community has submitted a lot of code since then. Is 
> there any plan to release a new version?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (BEANUTILS-559) Is there a plan to release a new version because the time before the previous version is too long?

2023-02-20 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/BEANUTILS-559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory closed BEANUTILS-559.
-
Resolution: Fixed

Not a Jira.

> Is there a plan to release a new version because the time before the previous 
> version is too long?
> --
>
> Key: BEANUTILS-559
> URL: https://issues.apache.org/jira/browse/BEANUTILS-559
> Project: Commons BeanUtils
>  Issue Type: Wish
>Reporter: Radar wen
>Priority: Minor
>
> Apache Commons BeanUtils 1.9.4 was released on 2019-08-15. It has been more 
> than three years, and the community has submitted a lot of code since then. Is 
> there any plan to release a new version?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

