[jira] [Commented] (COMPRESS-243) Steal LZW codec from imaging and provide a Compressor for classic Unix compress
[ https://issues.apache.org/jira/browse/COMPRESS-243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13833486#comment-13833486 ] Stefan Bodewig commented on COMPRESS-243: - Thanks for checking > Steal LZW codec from imaging and provide a Compressor for classic Unix > compress > --- > > Key: COMPRESS-243 > URL: https://issues.apache.org/jira/browse/COMPRESS-243 > Project: Commons Compress > Issue Type: New Feature >Reporter: Stefan Bodewig >Priority: Minor > > https://svn.apache.org/repos/asf/commons/proper/imaging/trunk/src/main/java/org/apache/commons/imaging/common/mylzw/ -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (COMPRESS-243) Provide Compressor for classic Unix compress (maybe based on LZW codec from imaging)
[ https://issues.apache.org/jira/browse/COMPRESS-243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefan Bodewig updated COMPRESS-243: Summary: Provide Compressor for classic Unix compress (maybe based on LZW codec from imaging) (was: Steal LZW codec from imaging and provide a Compressor for classic Unix compress) > Provide Compressor for classic Unix compress (maybe based on LZW codec from > imaging) > > > Key: COMPRESS-243 > URL: https://issues.apache.org/jira/browse/COMPRESS-243 > Project: Commons Compress > Issue Type: New Feature >Reporter: Stefan Bodewig >Priority: Minor > > https://svn.apache.org/repos/asf/commons/proper/imaging/trunk/src/main/java/org/apache/commons/imaging/common/mylzw/ -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Resolved] (COMPRESS-244) 7z reading of UINT64 data type is wrong for big values
[ https://issues.apache.org/jira/browse/COMPRESS-244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefan Bodewig resolved COMPRESS-244. - Resolution: Fixed Fix Version/s: 1.7 Thanks, I've taken a slightly different approach with svn revision 1545928. Could you please verify I didn't make some dumb mistake? > 7z reading of UINT64 data type is wrong for big values > -- > > Key: COMPRESS-244 > URL: https://issues.apache.org/jira/browse/COMPRESS-244 > Project: Commons Compress > Issue Type: Bug > Components: Archivers > Affects Versions: 1.6 > Reporter: Nico Kruber > Labels: easyfix, patch > Fix For: 1.7 > > Attachments: fix-readUint64-for-large-values.diff > > > h2. Brief description > Large values whose first byte indicates at least 4 additional bytes shift an integer by at least 32 bits, leading to an overflow and an incorrect value - the value needs to be cast to long before the bit shift! > (see the attached patch) > h2. Details from the 7z documentation > {quote} > {noformat}
> UINT64 means real UINT64 encoded with the following scheme:
> Size of encoding sequence depends from first byte:
> First_Byte     Extra_Bytes   Value
> (binary)
> 0xxxxxxx                   : ( xxxxxxx )
> 10xxxxxx       BYTE y[1]   : ( xxxxxx << (8 * 1)) + y
> 110xxxxx       BYTE y[2]   : ( xxxxx << (8 * 2)) + y
> ...
> 1111110x       BYTE y[6]   : ( x << (8 * 6)) + y
> 11111110       BYTE y[7]   : y
> 11111111       BYTE y[8]   : y
> {noformat}
> {quote} -- This message was sent by Atlassian JIRA (v6.1#6144)
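The overflow described in COMPRESS-244 can be illustrated with a minimal sketch of a decoder for the variable-length encoding quoted above. The class and method names here are hypothetical, not the actual Commons Compress code; the point is the `(long)` cast before the shift, which the original code was missing:

```java
// Illustrative decoder for the 7z variable-length UINT64 scheme; names
// are hypothetical, not the Commons Compress API.
class Uint64Demo {
    static long readUint64(byte[] in) {
        int firstByte = in[0] & 0xFF;
        int mask = 0x80;
        long value = 0;
        for (int i = 0; i < 8; i++) {
            if ((firstByte & mask) == 0) {
                // The remaining low bits of the first byte form the top part.
                value |= ((long) (firstByte & (mask - 1))) << (8 * i);
                break;
            }
            // Cast to long BEFORE shifting: shifting an int by >= 32 bits
            // (which happens once i reaches 4) is the overflow this issue
            // reports.
            value |= ((long) (in[1 + i] & 0xFF)) << (8 * i);
            mask >>>= 1;
        }
        return value;
    }
}
```

With the cast, a first byte of 0xFF followed by 8 extra bytes yields the full little-endian 64-bit value; without it, every byte at position 4 or higher would be shifted into oblivion.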
[jira] [Commented] (DAEMON-310) jsvc fails on AIX 5.3
[ https://issues.apache.org/jira/browse/DAEMON-310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13833298#comment-13833298 ] John Wehle commented on DAEMON-310: --- Okay ... I'm not quite sure why my libpath comment has extra line breaks. Hopefully you'll get the idea. > jsvc fails on AIX 5.3 > - > > Key: DAEMON-310 > URL: https://issues.apache.org/jira/browse/DAEMON-310 > Project: Commons Daemon > Issue Type: Bug > Components: Jsvc >Affects Versions: 1.0.15 > Environment: IBM AIX 5.3 powerpc > Java 6 >Reporter: John Wehle > Labels: patch > Attachments: jsvc-aix.txt > > Original Estimate: 1m > Remaining Estimate: 1m > > jsvc fails to find / start up the java virtual machine. > The attached patch resolves the issue. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (DAEMON-310) jsvc fails on AIX 5.3
[ https://issues.apache.org/jira/browse/DAEMON-310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13833297#comment-13833297 ] John Wehle commented on DAEMON-310: --- In addition to the patch it is also necessary to do:
LIBPATH=${JAVA_HOME}/jre/lib/ppc:${JAVA_HOME}/jre/lib/ppc/classic
export LIBPATH
before invoking jsvc. > jsvc fails on AIX 5.3 > - > > Key: DAEMON-310 > URL: https://issues.apache.org/jira/browse/DAEMON-310 > Project: Commons Daemon > Issue Type: Bug > Components: Jsvc > Affects Versions: 1.0.15 > Environment: IBM AIX 5.3 powerpc > Java 6 > Reporter: John Wehle > Labels: patch > Attachments: jsvc-aix.txt > > Original Estimate: 1m > Remaining Estimate: 1m > > jsvc fails to find / start up the java virtual machine. > The attached patch resolves the issue. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (DAEMON-310) jsvc fails on AIX 5.3
[ https://issues.apache.org/jira/browse/DAEMON-310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Wehle updated DAEMON-310: -- Attachment: jsvc-aix.txt Patch for issue. > jsvc fails on AIX 5.3 > - > > Key: DAEMON-310 > URL: https://issues.apache.org/jira/browse/DAEMON-310 > Project: Commons Daemon > Issue Type: Bug > Components: Jsvc >Affects Versions: 1.0.15 > Environment: IBM AIX 5.3 powerpc > Java 6 >Reporter: John Wehle > Labels: patch > Attachments: jsvc-aix.txt > > Original Estimate: 1m > Remaining Estimate: 1m > > jsvc fails to find / start up the java virtual machine. > The attached patch resolves the issue. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Created] (DAEMON-310) jsvc fails on AIX 5.3
John Wehle created DAEMON-310: - Summary: jsvc fails on AIX 5.3 Key: DAEMON-310 URL: https://issues.apache.org/jira/browse/DAEMON-310 Project: Commons Daemon Issue Type: Bug Components: Jsvc Affects Versions: 1.0.15 Environment: IBM AIX 5.3 powerpc Java 6 Reporter: John Wehle jsvc fails to find / start up the java virtual machine. The attached patch resolves the issue. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Resolved] (POOL-240) GKOP: invalidateObject does not unblock threads waiting in borrowObject
[ https://issues.apache.org/jira/browse/POOL-240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phil Steitz resolved POOL-240. -- Resolution: Fixed > GKOP: invalidateObject does not unblock threads waiting in borrowObject > --- > > Key: POOL-240 > URL: https://issues.apache.org/jira/browse/POOL-240 > Project: Commons Pool > Issue Type: Bug >Affects Versions: 2.0 >Reporter: Dan McNulty > Fix For: 2.0.1 > > Attachments: InvalidateObjectTest.java > > > It appears that when threads are blocked in GKOP.borrowObject due to max > object limits being reached and another thread calls invalidateObject, the > threads blocked in GKOP.borrowObject are not woken up to attempt to create a > new object. > Have the semantics changed for invalidate in 2.0? > Attached is a unit test that demonstrates this issue. I should note that this > test passed against POOL 1.5, after making the appropriate changes due to the > API changes in 2.0. > After a cursory glance through the source for GenericObjectPool, it looks > like it might be affected by the same issue. -- This message was sent by Atlassian JIRA (v6.1#6144)
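The failure mode POOL-240 describes can be reduced to a capacity-accounting invariant: when an object is invalidated, the pool must give its capacity back, otherwise threads blocked in borrow never wake up. The TinyPool sketch below is purely illustrative, not the GenericKeyedObjectPool code:

```java
// Minimal sketch of the invariant behind POOL-240: invalidating an object
// must release capacity so blocked borrowers can proceed. TinyPool is
// hypothetical and not the Commons Pool implementation.
import java.util.concurrent.Semaphore;

class TinyPool {
    private final Semaphore capacity;

    TinyPool(int max) {
        capacity = new Semaphore(max);
    }

    Object borrow() {
        capacity.acquireUninterruptibly(); // blocks once max objects are out
        return new Object();
    }

    void invalidate(Object obj) {
        // Releasing the permit here is the step whose absence leaves
        // waiters in borrow() blocked forever.
        capacity.release();
    }
}
```

In the reported bug, the equivalent of this release/signal on invalidateObject was not reaching the waiters, so borrowObject callers stayed parked even though capacity had been freed.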
[jira] [Commented] (VFS-350) Reading from an input stream to a .tar.gz ends up with a 'reading from an output buffer' exception
[ https://issues.apache.org/jira/browse/VFS-350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13833032#comment-13833032 ] Robbie Haertel commented on VFS-350: I'm encountering a similar issue in version 2.0 for a plain .tar file. I am concurrently reading multiple files from the same .tar file. Curiously, the same code works in some cases and not others; I believe it is a function of the number of bytes read (fewer bytes complete successfully, even in the face of concurrent reads from the same .tar file). Here is the stack trace:
Caused by: java.io.IOException: reading from an output buffer
at org.apache.commons.vfs2.provider.tar.TarBuffer.readRecord(TarBuffer.java:211)
at org.apache.commons.vfs2.provider.tar.TarInputStream.read(TarInputStream.java:384)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
at org.apache.commons.vfs2.util.MonitorInputStream.read(MonitorInputStream.java:100)
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
at java.io.InputStreamReader.read(InputStreamReader.java:184)
at com.google.common.io.LineReader.readLine(LineReader.java:76)
at edu.byu.nlp.io.LineReaderIterator.readLineQuietly(LineReaderIterator.java:31)
... 19 more
> Reading from an input stream to a .tar.gz ends up with a 'reading from an > output buffer' exception > -- > > Key: VFS-350 > URL: https://issues.apache.org/jira/browse/VFS-350 > Project: Commons VFS > Issue Type: Bug > Affects Versions: 1.0 > Reporter: Benson Margulies > > I can turn this into a test case if needed. 
> A source snippet: > {noformat} > FileObject annotations = root.getChild("annotations"); > FileContent annotationsContent = annotations.getContent(); > InputStream input = annotationsContent.getInputStream(); > InputStreamReader isr = new InputStreamReader(input, > Charset.forName("utf-8")); > BufferedReader lineReader = new BufferedReader(isr); > String line; > while ((line = lineReader.readLine()) != null) { > } > java.io.IOException: reading from an output buffer > at > org.apache.commons.vfs.provider.tar.TarBuffer.readRecord(TarBuffer.java:213) > at > org.apache.commons.vfs.provider.tar.TarInputStream.read(TarInputStream.java:386) > at java.io.BufferedInputStream.read1(BufferedInputStream.java:256) > at java.io.BufferedInputStream.read(BufferedInputStream.java:317) > at > org.apache.commons.vfs.util.MonitorInputStream.read(MonitorInputStream.java:74) > at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:264) > at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:306) > at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:158) > at java.io.InputStreamReader.read(InputStreamReader.java:167) > at java.io.BufferedReader.fill(BufferedReader.java:136) > at java.io.BufferedReader.readLine(BufferedReader.java:299) > at java.io.BufferedReader.readLine(BufferedReader.java:362) > at com.basistech.lsh.utils.TdtTestData.(TdtTestData.java:130) > at > com.basistech.lsh.utils.TdtTestDataTest.testTestDataReader(TdtTestDataTest.java:36) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) > at java.lang.reflect.Method.invoke(Method.java:597) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) > at > 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) > at > org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184) > at org
[jira] [Commented] (COMPRESS-243) Steal LZW codec from imaging and provide a Compressor for classic Unix compress
[ https://issues.apache.org/jira/browse/COMPRESS-243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13833029#comment-13833029 ] Damjan Jovanovic commented on COMPRESS-243: --- Even after adding the 3 byte .Z header, the bitstream written by commons-imaging is unreadable by "uncompress", and even 1 byte files compress differently. There is very little documentation about .Z files, so it's hard to tell which precise compression technique they use. > Steal LZW codec from imaging and provide a Compressor for classic Unix > compress > --- > > Key: COMPRESS-243 > URL: https://issues.apache.org/jira/browse/COMPRESS-243 > Project: Commons Compress > Issue Type: New Feature >Reporter: Stefan Bodewig >Priority: Minor > > https://svn.apache.org/repos/asf/commons/proper/imaging/trunk/src/main/java/org/apache/commons/imaging/common/mylzw/ -- This message was sent by Atlassian JIRA (v6.1#6144)
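For reference, the 3-byte .Z header mentioned in this comment is commonly described as the magic pair 0x1F 0x9D followed by a flags byte whose low 5 bits give the maximum code width and whose 0x80 bit signals block mode. A hypothetical helper (the name and signature are illustrative, not part of Commons Compress or Commons Imaging):

```java
// Builds the classic compress(1) .Z header described above; the class and
// method names are hypothetical, not an existing Commons API.
class ZHeaderDemo {
    static byte[] zHeader(int maxCodeBits, boolean blockMode) {
        // Flags byte: low 5 bits = maximum LZW code width (typically 9-16),
        // bit 0x80 = block mode (codes can be reset mid-stream).
        int flags = (maxCodeBits & 0x1F) | (blockMode ? 0x80 : 0);
        return new byte[] { 0x1F, (byte) 0x9D, (byte) flags };
    }
}
```

As the comment notes, matching the header is the easy part; the LZW bit packing and code-width progression after it are where the imaging codec and "uncompress" diverge.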
[jira] [Closed] (LOGGING-155) Please delete old releases from mirroring system
[ https://issues.apache.org/jira/browse/LOGGING-155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sebb closed LOGGING-155. > Please delete old releases from mirroring system > > > Key: LOGGING-155 > URL: https://issues.apache.org/jira/browse/LOGGING-155 > Project: Commons Logging > Issue Type: Bug > Environment: http://www.apache.org/dist/logging/log4j/ > http://www.apache.org/dist/logging/log4net/binaries/ > http://www.apache.org/dist/logging/log4php/ >Reporter: Sebb > > To reduce the load on the ASF mirrors, projects are required to delete old > releases [1] > Please can you remove all non-current releases? > Thanks! > [Note that older releases are always available from the ASF archive server] > [1] http://www.apache.org/dev/release.html#when-to-archive -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Resolved] (LOGGING-155) Please delete old releases from mirroring system
[ https://issues.apache.org/jira/browse/LOGGING-155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sebb resolved LOGGING-155. -- Resolution: Invalid Oops! See: LOG4J2-456 LOG4NET-410 LOG4PHP-212 > Please delete old releases from mirroring system > > > Key: LOGGING-155 > URL: https://issues.apache.org/jira/browse/LOGGING-155 > Project: Commons Logging > Issue Type: Bug > Environment: http://www.apache.org/dist/logging/log4j/ > http://www.apache.org/dist/logging/log4net/binaries/ > http://www.apache.org/dist/logging/log4php/ >Reporter: Sebb > > To reduce the load on the ASF mirrors, projects are required to delete old > releases [1] > Please can you remove all non-current releases? > Thanks! > [Note that older releases are always available from the ASF archive server] > [1] http://www.apache.org/dev/release.html#when-to-archive -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (LOGGING-155) Please delete old releases from mirroring system
[ https://issues.apache.org/jira/browse/LOGGING-155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13832945#comment-13832945 ] Thomas Neidhart commented on LOGGING-155: - wrong project ;-) There seems to be no Apache Logging TLP on jira, otherwise I would have moved it already. Maybe move it to Log4J2? > Please delete old releases from mirroring system > > > Key: LOGGING-155 > URL: https://issues.apache.org/jira/browse/LOGGING-155 > Project: Commons Logging > Issue Type: Bug > Environment: http://www.apache.org/dist/logging/log4j/ > http://www.apache.org/dist/logging/log4net/binaries/ > http://www.apache.org/dist/logging/log4php/ >Reporter: Sebb > > To reduce the load on the ASF mirrors, projects are required to delete old > releases [1] > Please can you remove all non-current releases? > Thanks! > [Note that older releases are always available from the ASF archive server] > [1] http://www.apache.org/dev/release.html#when-to-archive -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Created] (LOGGING-155) Please delete old releases from mirroring system
Sebb created LOGGING-155: Summary: Please delete old releases from mirroring system Key: LOGGING-155 URL: https://issues.apache.org/jira/browse/LOGGING-155 Project: Commons Logging Issue Type: Bug Environment: http://www.apache.org/dist/logging/log4j/ http://www.apache.org/dist/logging/log4net/binaries/ http://www.apache.org/dist/logging/log4php/ Reporter: Sebb To reduce the load on the ASF mirrors, projects are required to delete old releases [1] Please can you remove all non-current releases? Thanks! [Note that older releases are always available from the ASF archive server] [1] http://www.apache.org/dev/release.html#when-to-archive -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (POOL-240) GKOP: invalidateObject does not unblock threads waiting in borrowObject
[ https://issues.apache.org/jira/browse/POOL-240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13832720#comment-13832720 ] Phil Steitz commented on POOL-240: -- GKOP fix committed in r1545705. > GKOP: invalidateObject does not unblock threads waiting in borrowObject > --- > > Key: POOL-240 > URL: https://issues.apache.org/jira/browse/POOL-240 > Project: Commons Pool > Issue Type: Bug > Affects Versions: 2.0 > Reporter: Dan McNulty > Fix For: 2.0.1 > > Attachments: InvalidateObjectTest.java > > > It appears that when threads are blocked in GKOP.borrowObject due to max > object limits being reached and another thread calls invalidateObject, the > threads blocked in GKOP.borrowObject are not woken up to attempt to create a > new object. > Have the semantics changed for invalidate in 2.0? > Attached is a unit test that demonstrates this issue. I should note that this > test passed against POOL 1.5, after making the appropriate changes due to the > API changes in 2.0. > After a cursory glance through the source for GenericObjectPool, it looks > like it might be affected by the same issue. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (DBCP-363) dbcp bundle should use DynamicImport
[ https://issues.apache.org/jira/browse/DBCP-363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13832623#comment-13832623 ] Balazs Zsoldos commented on DBCP-363: - [~jukkaz]: It takes about 10 min to write a DataSourceFactory for any JDBC driver. DynamicImport is one of the biggest mistakes a developer can make inside OSGi. Instead, please make it possible to create a BasicDataSource based on another DataSource, as at https://github.com/everit-org/commons-dbcp-component/blob/master/src/main/java/org/everit/osgi/jdbc/commons/dbcp/BasicSimpleDataSource.java > dbcp bundle should use DynamicImport > > > Key: DBCP-363 > URL: https://issues.apache.org/jira/browse/DBCP-363 > Project: Commons Dbcp > Issue Type: Bug > Affects Versions: 1.3, 1.4 > Reporter: Felix Mayerhuber > Fix For: 1.4.1 > > > The commons.dbcp bundle provided in Maven Central doesn't have a > DynamicImport defined. This results in the following error: > If you want to use a BasicDataSource class as dataSource and the class is > provided by the OSGi environment (equinox, ...), the dataSource cannot be > created, due to a ClassNotFoundException. If the bundle had set > DynamicImport it would work. (I had to change from your bundle to the commons.dbcp > bundle provided by servicemix, because there the DynamicImport is set) -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Comment Edited] (DBCP-358) Equals implementations in DelegatingXxx classes are not symmetric
[ https://issues.apache.org/jira/browse/DBCP-358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13832463#comment-13832463 ] Balazs Zsoldos edited comment on DBCP-358 at 11/26/13 10:13 AM: It would be nice to fix this issue in a way that equals and hashCode do not use the innermost delegate. A short example of why a programmer feels the current behavior is a bug:
{code:java}
Connection conn1 = dataSource.getConnection();
conn1.close();
Connection conn2 = dataSource.getConnection();
if (conn2.equals(conn1) && conn1.isClosed() != conn2.isClosed()) {
    // Oops: they are equal but one of them is closed and the other is not
}
{code}
was (Author: balazs.zsoldos): It would be nice to fix this issue in a way that equals and hashCode do not use the innermost delegate. A short example of why a programmer feels the current behavior is a bug:
Connection conn1 = dataSource.getConnection();
conn1.close();
Connection conn2 = dataSource.getConnection();
if (conn2.equals(conn1) && conn1.isClosed() != conn2.isClosed()) {
    // Oops: they are equal but one of them is closed and the other is not
}
> Equals implementations in DelegatingXxx classes are not symmetric > - > > Key: DBCP-358 > URL: https://issues.apache.org/jira/browse/DBCP-358 > Project: Commons Dbcp > Issue Type: Bug > Affects Versions: 1.2, 1.2.2, 1.3, 1.4 > Reporter: Phil Steitz > Fix For: 1.3.1, 1.4.1 > > > For reasons unclear to me, DelegatingConnection, DelegatingStatement, > PoolGuardConnectionWrappers and other DBCP classes implement equals so that > the wrapping class is considered equal to its innermost delegate JDBC object. > This makes equals asymmetric when applied to a wrapper and its wrapped JDBC > object - wrapper.equals(delegate) returns true, but delegate.equals(wrapper) > will in general return false. 
> I am pretty sure that DBCP itself does not rely on this bugged behavior, so I > am inclined to fix it, making equals an equivalence relation on wrapper > instances, with two considered equal iff their innermost delegates are equal. > I can't imagine use cases where the bugged behavior is required. Can anyone > else? -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Comment Edited] (DBCP-358) Equals implementations in DelegatingXxx classes are not symmetric
[ https://issues.apache.org/jira/browse/DBCP-358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13832463#comment-13832463 ] Balazs Zsoldos edited comment on DBCP-358 at 11/26/13 10:13 AM: It would be nice to fix this issue in a way that equals and hashCode do not use the innermost delegate. A short example of why a programmer feels the current behavior is a bug:
{code:java}
Connection conn1 = dataSource.getConnection();
conn1.close();
Connection conn2 = dataSource.getConnection();
if (conn2.equals(conn1) && conn1.isClosed() != conn2.isClosed()) {
    // Oops: they are equal but one of them is closed and the other is not
}
{code}
was (Author: balazs.zsoldos): It would be nice to fix this issue in a way that equals and hashCode do not use the innermost delegate. A short example of why a programmer feels the current behavior is a bug:
{code:java}
Connection conn1 = dataSource.getConnection();
conn1.close();
Connection conn2 = dataSource.getConnection();
if (conn2.equals(conn1) && conn1.isClosed() != conn2.isClosed()) {
    // Oops: they are equal but one of them is closed and the other is not
}
{code}
> Equals implementations in DelegatingXxx classes are not symmetric > - > > Key: DBCP-358 > URL: https://issues.apache.org/jira/browse/DBCP-358 > Project: Commons Dbcp > Issue Type: Bug > Affects Versions: 1.2, 1.2.2, 1.3, 1.4 > Reporter: Phil Steitz > Fix For: 1.3.1, 1.4.1 > > > For reasons unclear to me, DelegatingConnection, DelegatingStatement, > PoolGuardConnectionWrappers and other DBCP classes implement equals so that > the wrapping class is considered equal to its innermost delegate JDBC object. > This makes equals asymmetric when applied to a wrapper and its wrapped JDBC > object - wrapper.equals(delegate) returns true, but delegate.equals(wrapper) > will in general return false. 
> I am pretty sure that DBCP itself does not rely on this bugged behavior, so I > am inclined to fix it, making equals an equivalence relation on wrapper > instances, with two considered equal iff their innermost delegates are equal. > I can't imagine use cases where the bugged behavior is required. Can anyone > else? -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (DBCP-358) Equals implementations in DelegatingXxx classes are not symmetric
[ https://issues.apache.org/jira/browse/DBCP-358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13832463#comment-13832463 ] Balazs Zsoldos commented on DBCP-358: - It would be nice to fix this issue in a way that equals and hashCode do not use the innermost delegate. A short example of why a programmer feels the current behavior is a bug:
{code:java}
Connection conn1 = dataSource.getConnection();
conn1.close();
Connection conn2 = dataSource.getConnection();
if (conn2.equals(conn1) && conn1.isClosed() != conn2.isClosed()) {
    // Oops: they are equal but one of them is closed and the other is not
}
{code}
> Equals implementations in DelegatingXxx classes are not symmetric > - > > Key: DBCP-358 > URL: https://issues.apache.org/jira/browse/DBCP-358 > Project: Commons Dbcp > Issue Type: Bug > Affects Versions: 1.2, 1.2.2, 1.3, 1.4 > Reporter: Phil Steitz > Fix For: 1.3.1, 1.4.1 > > > For reasons unclear to me, DelegatingConnection, DelegatingStatement, > PoolGuardConnectionWrappers and other DBCP classes implement equals so that > the wrapping class is considered equal to its innermost delegate JDBC object. > This makes equals asymmetric when applied to a wrapper and its wrapped JDBC > object - wrapper.equals(delegate) returns true, but delegate.equals(wrapper) > will in general return false. > I am pretty sure that DBCP itself does not rely on this bugged behavior, so I > am inclined to fix it, making equals an equivalence relation on wrapper > instances, with two considered equal iff their innermost delegates are equal. > I can't imagine use cases where the bugged behavior is required. Can anyone > else? -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (BEANUTILS-454) copyProperties() throws conversion exception for null Date
[ https://issues.apache.org/jira/browse/BEANUTILS-454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13832433#comment-13832433 ] Timm Frenzel commented on BEANUTILS-454: I agree with Markus that the behaviour of this method should not be changed suddenly. If the previous behaviour was not OK, the new behaviour should either be applied in a new method or explicitly activated somehow, since only new code probably expects this new behaviour. Also I can't see why there should be different behaviour for different types. Either it should always silently copy *null* values or always throw an exception. In the latter case it should be built in in a way that doesn't hurt existing code. > copyProperties() throws conversion exception for null Date > -- > > Key: BEANUTILS-454 > URL: https://issues.apache.org/jira/browse/BEANUTILS-454 > Project: Commons BeanUtils > Issue Type: Bug > Affects Versions: 1.8.0 > Reporter: Markus Stahl > Priority: Critical > Fix For: 1.8.4 > > > This issue had been reported earlier and rejected too soon. > Since 1.8.0, BeanUtils.copyProperties suddenly throws an exception if the > property is of type Date. It did not do so in prior releases, which is why > previously working software is now broken. There is a workaround if > BeanUtils is used in my own code, but if it is used in 3rd party code, I am > screwed. > Please treat null for Date the same as null for any other type and copy null > from source to destination. > For more reasons, see the comments of people who are now moving to new releases of > BeanUtils and facing the same problem. The issue gets more and more attention, > but I think nobody except the reporters is notified about it. Therefore > this issue. -- This message was sent by Atlassian JIRA (v6.1#6144)