[jira] [Updated] (COMPRESS-215) ZipFile reads up to 64KiB in a sequence of one byte reads
[ https://issues.apache.org/jira/browse/COMPRESS-215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robin Power updated COMPRESS-215: - Attachment: COMPRESS-215.patch This proposed patch works by first searching for the End Of Central Directory record, which should always be found close to the end of the file. From there it uses the fact that the ZIP64 EOCDL has a fixed size and sits immediately in front of the EOCD, so it can be located directly. If it is not there, the file is not ZIP64 and there is no need to read further back looking for it. This performance improvement hinges on the ZIP64 EOCDL being consistently positioned relative to the EOCD. If this is not the case in some implementations, please let me know :) > ZipFile reads up to 64KiB in a sequence of one byte reads > - > > Key: COMPRESS-215 > URL: https://issues.apache.org/jira/browse/COMPRESS-215 > Project: Commons Compress > Issue Type: Improvement > Components: Archivers > Affects Versions: 1.4.1 > Environment: JDK 1.7 64-bit, Windows 7 > Reporter: Robin Power > Priority: Minor > Attachments: COMPRESS-215.patch > > > This relates to a performance improvement. > When ZipFile is parsing a file it searches for the ZIP64 End Of Central > Directory Locator as a first step in determining whether the file is ZIP64 or > regular 32-bit. It searches in reverse for the ZIP64 EOCDL signature from the > end of the file, reading one byte at a time, potentially up to about 64KiB. > This can be an expensive operation, especially if it is not a ZIP64 file, as > it will make over 64000 file reads to determine that the file is not ZIP64. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
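The fixed-offset relationship the patch relies on can be sketched as follows. This is an illustrative, self-contained toy (not the Commons Compress code): per the ZIP specification, the ZIP64 EOCD Locator is a 20-byte record placed immediately before the EOCD, so a single signature check at eocd - 20 replaces the byte-by-byte backward scan.

```java
// Toy sketch of the EOCD-first probe; class and method names are hypothetical.
public class Zip64Probe {
    // Signatures as stored on disk (little-endian): EOCD = 0x06054b50, EOCDL = 0x07064b50
    static final byte[] EOCD_SIG  = {0x50, 0x4b, 0x05, 0x06};
    static final byte[] EOCDL_SIG = {0x50, 0x4b, 0x06, 0x07};
    static final int EOCDL_LENGTH = 20; // the ZIP64 EOCD Locator is always 20 bytes

    /** Scan backwards from the end for the EOCD signature; -1 if absent. */
    static int findEocd(byte[] data) {
        for (int i = data.length - 4; i >= 0; i--) {
            if (matches(data, i, EOCD_SIG)) return i;
        }
        return -1;
    }

    /** ZIP64 check: look only at the fixed position 20 bytes before the EOCD. */
    static boolean isZip64(byte[] data) {
        int eocd = findEocd(data);
        int locator = eocd - EOCDL_LENGTH;
        return eocd >= 0 && locator >= 0 && matches(data, locator, EOCDL_SIG);
    }

    static boolean matches(byte[] data, int off, byte[] sig) {
        for (int i = 0; i < sig.length; i++) {
            if (data[off + i] != sig[i]) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        byte[] plain = new byte[64];
        System.arraycopy(EOCD_SIG, 0, plain, 40, 4);  // EOCD only, no locator
        byte[] zip64 = new byte[64];
        System.arraycopy(EOCD_SIG, 0, zip64, 40, 4);
        System.arraycopy(EOCDL_SIG, 0, zip64, 20, 4); // locator exactly 20 bytes before
        System.out.println("plain isZip64: " + isZip64(plain)); // false
        System.out.println("zip64 isZip64: " + isZip64(zip64)); // true
    }
}
```

The real search also has to tolerate a trailing ZIP comment when locating the EOCD itself, but once the EOCD offset is known, the locator check is a single fixed-position read.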
[jira] [Commented] (DAEMON-274) procrun ignores shutdown
[ https://issues.apache.org/jira/browse/DAEMON-274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554796 ] Mladen Turk commented on DAEMON-274: 1.0.12 is in the release process. You can check the release candidates at http://people.apache.org/~mturk/daemon-1.0.12/ Once the vote passes (hopefully in a few days) it will be available for GA. > procrun ignores shutdown > > > Key: DAEMON-274 > URL: https://issues.apache.org/jira/browse/DAEMON-274 > Project: Commons Daemon > Issue Type: Bug > Components: Procrun > Affects Versions: 1.0.10 > Environment: Windows OS > Reporter: Hsehdar > Assignee: Mladen Turk > Labels: procrun > Fix For: 1.0.12 > > > Procrun does not shut down gracefully when an OS shutdown occurs. The operating > system kills the started service. A service started using procrun has state > IGNORES_SHUTDOWN. > What was expected? > The procrun service should be registered as ACCEPTS_SHUTDOWN so that it shuts > down gracefully within the time allocated by the OS. > It would be great if this became a command line parameter. It would be a bonus > :) > Author requests the assignee to change this issue's details. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (COMPRESS-215) ZipFile reads up to 64KiB in a sequence of one byte reads
Robin Power created COMPRESS-215: Summary: ZipFile reads up to 64KiB in a sequence of one byte reads Key: COMPRESS-215 URL: https://issues.apache.org/jira/browse/COMPRESS-215 Project: Commons Compress Issue Type: Improvement Components: Archivers Affects Versions: 1.4.1 Environment: JDK 1.7 64-bit, Windows 7 Reporter: Robin Power Priority: Minor This relates to a performance improvement. When ZipFile is parsing a file it searches for the ZIP64 End Of Central Directory Locator as a first step to determining if the file is ZIP64 or regular 32 bit. It searches in reverse for the ZIP64 EOCDL signature from the end of the file reading one byte at a time, potentially up to about 64KiB. This can be an expensive operation, especially if it is not a ZIP64 file, as it will make over 64000 file reads to determine that the file is not ZIP64. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (DAEMON-274) procrun ignores shutdown
[ https://issues.apache.org/jira/browse/DAEMON-274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554644#comment-13554644 ] Hsehdar commented on DAEMON-274: Thanks [~mt...@apache.org] for the quick fix. I could not find information on PROCRUN releases. Could you share the release window/time frame and the next release date? This will help [~kernwig] and [~hsehdar] to use the feature. > procrun ignores shutdown > > > Key: DAEMON-274 > URL: https://issues.apache.org/jira/browse/DAEMON-274 > Project: Commons Daemon > Issue Type: Bug > Components: Procrun > Affects Versions: 1.0.10 > Environment: Windows OS > Reporter: Hsehdar > Assignee: Mladen Turk > Labels: procrun > Fix For: 1.0.12 > > > Procrun does not shut down gracefully when an OS shutdown occurs. The operating > system kills the started service. A service started using procrun has state > IGNORES_SHUTDOWN. > What was expected? > The procrun service should be registered as ACCEPTS_SHUTDOWN so that it shuts > down gracefully within the time allocated by the OS. > It would be great if this became a command line parameter. It would be a bonus > :) > Author requests the assignee to change this issue's details. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (VFS-285) AbstractFileObject.getChildren(): internal structures will be left inconsistent if an exception is thrown
[ https://issues.apache.org/jira/browse/VFS-285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554015#comment-13554015 ] Gary Gregory commented on VFS-285: -- Thomas, feel free to commit. If you do, please update the changes.xml and this issue. Thank you, Gary > AbstractFileObject.getChildren(): internal structures will be left > inconsistent if the excepion is thrown > - > > Key: VFS-285 > URL: https://issues.apache.org/jira/browse/VFS-285 > Project: Commons VFS > Issue Type: Bug >Affects Versions: Nightly Builds >Reporter: Kirill Safonov >Priority: Critical > Attachments: VFS-285.patch > > > AbstractFileObject.getChildren() creates *children* array and then fills it > by resolving child names via FileSystemManager.resolveName(). If the latter > method throws an exception (in my case it's "Invalid descendent file name > "pci-:00:07.1-scsi-0:0:0:0""), children array is left as is with some of > the entries = null, that inevitably results in NPE on the next getChildren() > call: > at > org.apache.commons.vfs.provider.AbstractFileSystem.resolveFile(AbstractFileSystem.java:319) > at > org.apache.commons.vfs.provider.AbstractFileSystem.resolveFile(AbstractFileSystem.java:314) > at > org.apache.commons.vfs.provider.AbstractFileObject.resolveFile(AbstractFileObject.java:723) > at > org.apache.commons.vfs.provider.AbstractFileObject.resolveFiles(AbstractFileObject.java:715) > at > org.apache.commons.vfs.provider.AbstractFileObject.getChildren(AbstractFileObject.java:618) > at > org.apache.commons.vfs.provider.ftp.FtpFileObject.getChildren(FtpFileObject.java:412) > since AbstractFileObject.getChildren() only checks that *children* instance > is not null -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (VFS-209) RAM file system writes to a RAM file fail under heavy load
[ https://issues.apache.org/jira/browse/VFS-209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553977#comment-13553977 ] Thomas Neidhart commented on VFS-209: - Did you also call FileContent.close() after the write operation has finished? Closing the OutputStream alone is not sufficient, and subsequent calls to FileContent.getOutputStream() will fail with the mentioned exception. > RAM file system writes to a RAM file fail under heavy load > -- > > Key: VFS-209 > URL: https://issues.apache.org/jira/browse/VFS-209 > Project: Commons VFS > Issue Type: Bug > Affects Versions: 1.0 > Environment: Fails on all platforms > Reporter: Rakshith N > Priority: Critical > > Simple writes to a RAM file using the Microsoft Web Stress tool fail by > throwing a FileSystemException which says that the file is already in use. > I have double-checked the streams. It happens in spite of close() being called > on the output stream each time. > org.apache.commons.vfs.FileSystemException: Could > not write to ""ram:///dynamic.cfm"" because it is currently in use. > It fails while trying to get the OutputStream from the FileContent. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (VFS-396) RAM FileSystem allows the file system size to exceed the max size limit.
[ https://issues.apache.org/jira/browse/VFS-396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553887#comment-13553887 ] Thomas Neidhart commented on VFS-396: - I did a small test myself but I cannot confirm the observation:
{noformat}
FileSystemOptions largeSized = new FileSystemOptions();

public void setup() {
    RamFileSystemConfigBuilder.getInstance().setMaxSize(largeSized, 500);
}

...

@Test
public void testChunkFileWrite() throws Exception {
    // Default FS
    final FileObject fo1 = manager.resolveFile("ram:/fo1", largeSized);
    fo1.createFile();
    try {
        final OutputStream os = fo1.getContent().getOutputStream();
        // write file in chunks of 8 kbytes (total 10,000,000 bytes)
        for (int i = 0; i < 1220; i++) {
            os.write(new byte[8192]);
        }
        os.close();
        fail("It shouldn't save such a big file");
    } catch (final FileSystemException e) {
        // Expected
    }
}
{noformat}
The test successfully detects that the file will be too large; see also the debug output of the check in RamFileObject#resize():
{noformat}
afs.size()=0 newSize=8192 this.size()=0
afs.size()=8192 newSize=16384 this.size()=8192
afs.size()=16384 newSize=24576 this.size()=16384
afs.size()=24576 newSize=32768 this.size()=24576
afs.size()=32768 newSize=40960 this.size()=32768
afs.size()=40960 newSize=49152 this.size()=40960
...
{noformat}
> RAM FileSystem allows the file system size to exceed the max size limit. > > > Key: VFS-396 > URL: https://issues.apache.org/jira/browse/VFS-396 > Project: Commons VFS > Issue Type: Bug > Affects Versions: 2.0 > Environment: All > Reporter: Rupesh Kumar > Original Estimate: 0.5h > Remaining Estimate: 0.5h > > When a new file is created in the RAM file system, and content is written to > its outputstream, there is a check in place to ensure that the file system > size does not exceed the max limit set. But that check is wrong. > In RamFileOutputStream.write(), you calculate the size, newSize, and call > file.resize(newSize). > And in RamFileObject.resize(), there is a check:
{noformat}
if (fs.size() + newSize - this.size() > maxSize)
{
    throw new IOException("FileSystem capacity (" + maxSize
        + ") exceeded.");
}
{noformat}
> This check is wrong. > Consider the case of a new file system where the file system size limit is set to > 5 MB and I am trying to create a file of 10 MB in the RAM file system. The > file is being written in chunks of 8 kB. For every resize check, fs.size() > would be 0 and (newSize - this.size()) would be 8 kB, and therefore the check > would never trigger. > It could have been correct if the "old size" were locked down to the size > that was registered with the file system, but the old size (this.size()) keeps > changing at every write. Thus the difference between newSize and this.size() would > always be the chunk size (typically 8 kB), and therefore no exception would > ever be thrown. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
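Thomas's debug output shows the filesystem-wide size growing with every chunk, which is why the check in his test does trigger. A toy model of the disputed check (hypothetical names, not the actual VFS classes), assuming fs.size() is updated after each resize as the debug output indicates:

```java
// Simplified model of RamFileObject#resize()'s capacity check.
public class RamSizeCheckDemo {
    long fsSize = 0;    // bytes currently accounted to the whole filesystem
    long fileSize = 0;  // current size of the file being written
    final long maxSize;

    RamSizeCheckDemo(long maxSize) { this.maxSize = maxSize; }

    /** Returns false (refuses) once the grown file would exceed maxSize. */
    boolean resize(long newFileSize) {
        // The debated check: filesystem size plus the file's growth vs. the cap.
        if (fsSize + newFileSize - fileSize > maxSize) {
            return false; // the real code throws IOException("FileSystem capacity ... exceeded")
        }
        fsSize += newFileSize - fileSize; // key point: fsSize is kept in sync
        fileSize = newFileSize;
        return true;
    }

    public static void main(String[] args) {
        RamSizeCheckDemo fs = new RamSizeCheckDemo(5L * 1024 * 1024); // 5 MiB cap
        long chunk = 8 * 1024;
        int chunks = 0;
        while (fs.resize(fs.fileSize + chunk)) {
            chunks++;
        }
        // 5 MiB / 8 KiB = 640 chunks accepted; the 641st is refused
        System.out.println("chunks written before limit: " + chunks); // 640
    }
}
```

The reporter's scenario (fs.size() staying 0 across writes) would only arise if the filesystem size were not updated between chunks; under the tracked-size assumption above, the check catches the overflow exactly at the cap.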
[jira] [Updated] (VFS-285) AbstractFileObject.getChildren(): internal structures will be left inconsistent if an exception is thrown
[ https://issues.apache.org/jira/browse/VFS-285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thomas Neidhart updated VFS-285: Attachment: VFS-285.patch The attached patch cleans up the children cache if any exception occurs and re-throws it. > AbstractFileObject.getChildren(): internal structures will be left > inconsistent if the excepion is thrown > - > > Key: VFS-285 > URL: https://issues.apache.org/jira/browse/VFS-285 > Project: Commons VFS > Issue Type: Bug >Affects Versions: Nightly Builds >Reporter: Kirill Safonov >Priority: Critical > Attachments: VFS-285.patch > > > AbstractFileObject.getChildren() creates *children* array and then fills it > by resolving child names via FileSystemManager.resolveName(). If the latter > method throws an exception (in my case it's "Invalid descendent file name > "pci-:00:07.1-scsi-0:0:0:0""), children array is left as is with some of > the entries = null, that inevitably results in NPE on the next getChildren() > call: > at > org.apache.commons.vfs.provider.AbstractFileSystem.resolveFile(AbstractFileSystem.java:319) > at > org.apache.commons.vfs.provider.AbstractFileSystem.resolveFile(AbstractFileSystem.java:314) > at > org.apache.commons.vfs.provider.AbstractFileObject.resolveFile(AbstractFileObject.java:723) > at > org.apache.commons.vfs.provider.AbstractFileObject.resolveFiles(AbstractFileObject.java:715) > at > org.apache.commons.vfs.provider.AbstractFileObject.getChildren(AbstractFileObject.java:618) > at > org.apache.commons.vfs.provider.ftp.FtpFileObject.getChildren(FtpFileObject.java:412) > since AbstractFileObject.getChildren() only checks that *children* instance > is not null -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
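The cleanup-and-rethrow idea behind the patch can be sketched with a toy cache (hypothetical names, not the actual patch code): if resolving any child fails, the half-filled cache is discarded before re-throwing, so the next getChildren() call rebuilds it instead of returning an array with null entries.

```java
import java.util.ArrayList;
import java.util.List;

public class ChildCacheDemo {
    private List<String> children; // null = not yet resolved

    List<String> getChildren(String[] names) throws Exception {
        if (children != null) return children; // cached from a previous call
        children = new ArrayList<>();
        try {
            for (String name : names) {
                children.add(resolve(name)); // may throw mid-loop
            }
        } catch (Exception e) {
            children = null; // cleanup: never leave a partially filled cache behind
            throw e;
        }
        return children;
    }

    /** Stand-in for FileSystemManager.resolveName(); rejects names with ':'. */
    private String resolve(String name) throws Exception {
        if (name.contains(":")) {
            throw new Exception("Invalid descendent file name " + name);
        }
        return name;
    }

    public static void main(String[] args) throws Exception {
        ChildCacheDemo d = new ChildCacheDemo();
        try {
            d.getChildren(new String[]{"ok", "pci-:00:07.1"});
        } catch (Exception e) {
            System.out.println("first call failed: " + e.getMessage());
        }
        // The cache was cleared on failure, so a retry with valid names succeeds
        System.out.println(d.getChildren(new String[]{"a", "b"})); // [a, b]
    }
}
```

Without the catch block, the second call would see a non-null cache and hand back the partially filled list, which mirrors the NPE described in the issue.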
[jira] [Commented] (VFS-451) Authentication fails using private key
[ https://issues.apache.org/jira/browse/VFS-451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553832#comment-13553832 ] Gary Gregory commented on VFS-451: -- About code formatting? Compare your patch to what is in trunk and look at how spacing is used. > Authentication fails using private key > -- > > Key: VFS-451 > URL: https://issues.apache.org/jira/browse/VFS-451 > Project: Commons VFS > Issue Type: Bug >Affects Versions: 2.0 > Environment: windows as client > linux as ssh server >Reporter: Marco Ronchi >Assignee: Gary Gregory > Attachments: SftpClientFactory.java.patch, SimpleTest.java, > vfs-patch-2.txt > > > Cannot connect to a ssh server with my private key. > I believe the issue is due to an JSCH bug. > Using the following lines: > session.setPassword(pass); > jsch.addIdentity(identityfile); > instead of > jsch.addIdentity(privateKeyFilePath,password); > authentication fails. > > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Comment Edited] (VFS-451) Authentication fails using private key
[ https://issues.apache.org/jira/browse/VFS-451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553683#comment-13553683 ] Marco Ronchi edited comment on VFS-451 at 1/15/13 1:46 PM: --- Hi Gary, I tested the trunk and everything works. I have always used ssh-keygen, and if you define an empty passphrase, it uses the OS password as the passphrase. So it seems strange to me to create private keys without a passphrase, but it is possible. Finally, I didn't understand your comments at 01:34. Marco was (Author: ilmarcoronchi): Hi Gary, I tested the trunk and everything works. I have always used ssh-keygen, and if you define an empty passphrase, it uses the OS password as the passphrase. So it seems strange to me to create private keys without a passphrase, but it is possible. I didn't understand your comments at 01:34. Finally, I've got a question. In order to configure a custom provider, I want to use an external file and not the one (providers.xml) bundled within the jar. I use the following code. FileSystemManager fsManager = new StandardFileSystemManager(); ((StandardFileSystemManager)fsManager).setConfiguration("file:///d:/tmp/provider.xml"); ((StandardFileSystemManager)fsManager).init(); Is this the right way? Thanks Marco > Authentication fails using private key > -- > > Key: VFS-451 > URL: https://issues.apache.org/jira/browse/VFS-451 > Project: Commons VFS > Issue Type: Bug > Affects Versions: 2.0 > Environment: windows as client > linux as ssh server > Reporter: Marco Ronchi > Assignee: Gary Gregory > Attachments: SftpClientFactory.java.patch, SimpleTest.java, > vfs-patch-2.txt > > > Cannot connect to a ssh server with my private key. > I believe the issue is due to an JSCH bug. > Using the following lines: > session.setPassword(pass); > jsch.addIdentity(identityfile); > instead of > jsch.addIdentity(privateKeyFilePath,password); > authentication fails. > > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (LOGGING-135) Thread-safety improvements
[ https://issues.apache.org/jira/browse/LOGGING-135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553774#comment-13553774 ] Thomas Neidhart commented on LOGGING-135: - Yes, I agree; it is also not clear why it is protected in the first place. Some of the wrappers have it private (log4j, avalon), while others have it protected. IMHO there is no reason to make it protected, and this should be fixed in future releases. > Thread-safety improvements > -- > > Key: LOGGING-135 > URL: https://issues.apache.org/jira/browse/LOGGING-135 > Project: Commons Logging > Issue Type: Bug > Reporter: Sebb > > The LogKitLogger.logger field is not final or volatile so changes are not > guaranteed to be published. > This includes calls to getLogger(), so two different threads using the same > instance can theoretically both create the logger. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (MATH-876) In v3, Bundle-SymbolicName should be org.apache.commons.math3 (not org.apache.commons.math as currently)
[ https://issues.apache.org/jira/browse/MATH-876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gilles resolved MATH-876. - Resolution: Fixed > In v3, Bundle-SymbolicName should be org.apache.commons.math3 (not > org.apache.commons.math as currently) > > > Key: MATH-876 > URL: https://issues.apache.org/jira/browse/MATH-876 > Project: Commons Math > Issue Type: Bug >Affects Versions: 3.0 >Reporter: Matthew Webber > Labels: build > Fix For: 3.1 > > Attachments: MATH876-bundle-name.patch > > > In Commons Math 3.0, all package names start with > {{org.apache.commons.math3}}, to distinguish them from packages in the > previous (2.2) - issue MATH-444. > However, the name of the bundle itself was not similarly changed - in the > MANIFEST.MF from 3.0.0, we have this line: > {{Bundle-SymbolicName: org.apache.commons.math}} > This should be changed in 3.1 to: > {{Bundle-SymbolicName: org.apache.commons.math3}} > As an example, Apache Commons Lang changed their bundle name when they moved > from v2 to v3 - exactly what I am proposing for Commons Math. > For various reasons, the existing plugin naming is a problem for us in our > environment, where our code uses a mixture of 2.2 and 3.0 classes (there are > too many references to quickly change). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (MATH-929) MultivariateNormalDistribution.density(double[]) returns wrong value when the dimension is odd
[ https://issues.apache.org/jira/browse/MATH-929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gilles resolved MATH-929. - Resolution: Fixed Corrected in revision 1433367. Thanks a lot for the report and fix! A few weeks ago, Thomas wondered why one would use "0.5 * x" instead of "x / 2". This is an excellent reason... Unfortunately that instance slipped through my scanner :(. > MultivariateNormalDistribution.density(double[]) returns wrong value when the > dimension is odd > -- > > Key: MATH-929 > URL: https://issues.apache.org/jira/browse/MATH-929 > Project: Commons Math > Issue Type: Bug > Affects Versions: 3.1.1 > Reporter: Piotr Wydrych > Priority: Critical > Fix For: 3.2 > > Attachments: MultivariateNormalDistribution.java.patch > > > To reproduce: > {code} > Assert.assertEquals(0.398942280401433, new MultivariateNormalDistribution(new > double[]{0}, new double[][]{{1}}).density(new double[]{0}), 1e-15); > {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
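The "0.5 * x" versus "x / 2" remark likely refers to Java integer division: for an odd dimension, dim / 2 truncates, which silently changes a normalization factor like (2π)^(dim/2) in the multivariate normal density. A minimal illustration (simplified; not the Commons Math source):

```java
// Why "0.5 * dim" and "dim / 2" differ in Java when dim is odd.
public class OddDimensionDemo {
    // Density of N(0, I) at the origin, with the exponent written two ways.
    static double densityIntDiv(int dim) {
        return Math.pow(2 * Math.PI, -dim / 2);   // -1/2 truncates to 0 for dim=1
    }
    static double densityDoubleDiv(int dim) {
        return Math.pow(2 * Math.PI, -0.5 * dim); // correct exponent
    }

    public static void main(String[] args) {
        System.out.println("dim=1, int division:    " + densityIntDiv(1));    // 1.0 (wrong)
        System.out.println("dim=1, double division: " + densityDoubleDiv(1)); // ~0.3989 (right)
        // Even dimensions mask the bug: both forms agree.
        System.out.println("dim=2 agrees either way: "
                + (densityIntDiv(2) == densityDoubleDiv(2))); // true
    }
}
```

The reproduction case in the issue is exactly this: for dim = 1, the correct density at 0 is (2π)^(-1/2) ≈ 0.398942280401433, while the truncated exponent yields (2π)^0 = 1.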
[jira] [Updated] (MATH-929) MultivariateNormalDistribution.density(double[]) returns wrong value when the dimension is odd
[ https://issues.apache.org/jira/browse/MATH-929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gilles updated MATH-929: Priority: Critical (was: Blocker) > MultivariateNormalDistribution.density(double[]) returns wrong value when the > dimension is odd > -- > > Key: MATH-929 > URL: https://issues.apache.org/jira/browse/MATH-929 > Project: Commons Math > Issue Type: Bug >Affects Versions: 3.1.1 >Reporter: Piotr Wydrych >Priority: Critical > Attachments: MultivariateNormalDistribution.java.patch > > > To reproduce: > {code} > Assert.assertEquals(0.398942280401433, new MultivariateNormalDistribution(new > double[]{0}, new double[][]{{1}}).density(new double[]{0}), 1e-15); > {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (MATH-929) MultivariateNormalDistribution.density(double[]) returns wrong value when the dimension is odd
[ https://issues.apache.org/jira/browse/MATH-929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gilles updated MATH-929: Fix Version/s: 3.2 > MultivariateNormalDistribution.density(double[]) returns wrong value when the > dimension is odd > -- > > Key: MATH-929 > URL: https://issues.apache.org/jira/browse/MATH-929 > Project: Commons Math > Issue Type: Bug >Affects Versions: 3.1.1 >Reporter: Piotr Wydrych >Priority: Critical > Fix For: 3.2 > > Attachments: MultivariateNormalDistribution.java.patch > > > To reproduce: > {code} > Assert.assertEquals(0.398942280401433, new MultivariateNormalDistribution(new > double[]{0}, new double[][]{{1}}).density(new double[]{0}), 1e-15); > {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (MATH-924) new multivariate vector optimizers cannot be used with large number of weights
[ https://issues.apache.org/jira/browse/MATH-924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gilles resolved MATH-924. - Resolution: Fixed > new multivariate vector optimizers cannot be used with large number of weights > -- > > Key: MATH-924 > URL: https://issues.apache.org/jira/browse/MATH-924 > Project: Commons Math > Issue Type: Bug > Reporter: Luc Maisonobe > Priority: Critical > Fix For: 3.1.1 > > Attachments: MATH-924 > > > When using the Weight class to pass a large number of weights to multivariate > vector optimizers, a full n×n matrix is created (and copied) when an n-element > vector is used. This exhausts memory when n is large. > This happens for example when using curve fitters (even simple curve fitters > like polynomial ones of low degree) with a large number of points. I > encountered this with curve fitting on 41200 points, which created a matrix > with 1.7 billion elements. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (MATH-929) MultivariateNormalDistribution.density(double[]) returns wrong value when the dimension is odd
[ https://issues.apache.org/jira/browse/MATH-929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Piotr Wydrych updated MATH-929: --- Attachment: MultivariateNormalDistribution.java.patch > MultivariateNormalDistribution.density(double[]) returns wrong value when the > dimension is odd > -- > > Key: MATH-929 > URL: https://issues.apache.org/jira/browse/MATH-929 > Project: Commons Math > Issue Type: Bug >Affects Versions: 3.1.1 >Reporter: Piotr Wydrych >Priority: Blocker > Attachments: MultivariateNormalDistribution.java.patch > > > To reproduce: > {code} > Assert.assertEquals(0.398942280401433, new MultivariateNormalDistribution(new > double[]{0}, new double[][]{{1}}).density(new double[]{0}), 1e-15); > {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (MATH-929) MultivariateNormalDistribution.density(double[]) returns wrong value when the dimension is odd
[ https://issues.apache.org/jira/browse/MATH-929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Piotr Wydrych updated MATH-929: --- Description: To reproduce: {code} Assert.assertEquals(0.398942280401433, new MultivariateNormalDistribution(new double[]{0}, new double[][]{{1}}).density(new double[]{0}), 1e-15); {code} was: {code} Assert.assertEquals(0.398942280401433, new MultivariateNormalDistribution(new double[]{0}, new double[][]{{1}}).density(new double[]{0}), 1e-15); {code} > MultivariateNormalDistribution.density(double[]) returns wrong value when the > dimension is odd > -- > > Key: MATH-929 > URL: https://issues.apache.org/jira/browse/MATH-929 > Project: Commons Math > Issue Type: Bug >Affects Versions: 3.1.1 >Reporter: Piotr Wydrych >Priority: Blocker > > To reproduce: > {code} > Assert.assertEquals(0.398942280401433, new MultivariateNormalDistribution(new > double[]{0}, new double[][]{{1}}).density(new double[]{0}), 1e-15); > {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (MATH-929) MultivariateNormalDistribution.density(double[]) returns wrong value when the dimension is odd
Piotr Wydrych created MATH-929: -- Summary: MultivariateNormalDistribution.density(double[]) returns wrong value when the dimension is odd Key: MATH-929 URL: https://issues.apache.org/jira/browse/MATH-929 Project: Commons Math Issue Type: Bug Affects Versions: 3.1.1 Reporter: Piotr Wydrych Priority: Blocker {code} Assert.assertEquals(0.398942280401433, new MultivariateNormalDistribution(new double[]{0}, new double[][]{{1}}).density(new double[]{0}), 1e-15); {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (LOGGING-135) Thread-safety improvements
[ https://issues.apache.org/jira/browse/LOGGING-135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553697#comment-13553697 ] Sebb commented on LOGGING-135: -- I agree it's unlikely to occur, however that does not mean it cannot occur. And if it does cause problems, debugging is likely to be extremely difficult. It may well be that all logger implementations return a singleton. But writing to the protected logger field can still cause issues. At the very least the issue should be clearly documented. Making the variable volatile would solve the publication issue. Ideally the variables should be private and final, but that would break compatibility. If logging is ever rewritten, it should use immutable classes as far as possible, and certainly no mutable protected or public fields. > Thread-safety improvements > -- > > Key: LOGGING-135 > URL: https://issues.apache.org/jira/browse/LOGGING-135 > Project: Commons Logging > Issue Type: Bug >Reporter: Sebb > > The LogKitLogger.logger field is not final or volatile so changes are not > guaranteed to be published. > This includes calls to getLogger(), so two different threads using the same > instance can theoretically both create the logger. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
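The volatile mitigation Sebb suggests can be sketched as follows (a toy class, not the actual JCL code): declaring the lazily created logger field volatile guarantees that a logger created by one thread is safely published to others. Two threads may still race to create it, but each then sees a fully constructed instance, which matches the observation that all logger implementations effectively return equivalent objects.

```java
// Sketch of safe publication via a volatile field for a lazily created logger.
public class VolatileLoggerDemo {
    static class Logger {
        final String name;
        Logger(String name) { this.name = name; }
    }

    private volatile Logger logger; // volatile: writes are published to all threads

    Logger getLogger() {
        Logger result = logger;     // read the volatile field once into a local
        if (result == null) {
            result = new Logger("demo");
            logger = result;        // benign race: last writer wins, both are valid
        }
        return result;
    }

    public static void main(String[] args) {
        VolatileLoggerDemo d = new VolatileLoggerDemo();
        System.out.println("same instance on repeat calls: "
                + (d.getLogger() == d.getLogger())); // true
    }
}
```

Making the field private and final would be stronger, but as the comment notes, that would break compatibility for subclasses that touch the protected field; volatile fixes the publication problem without changing the API.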
[jira] [Commented] (VFS-451) Authentication fails using private key
[ https://issues.apache.org/jira/browse/VFS-451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553683#comment-13553683 ] Marco Ronchi commented on VFS-451: -- Hi Gary, I tested the trunk and everything works. I have always used ssh-keygen, and if you define an empty passphrase, it uses the OS password as the passphrase. So it sounds strange to me to create private keys without a passphrase, but it is possible. I didn't understand your comments at 01:34. Finally, I've got a question. In order to configure a custom provider, I want to use an external file and not the one (providers.xml) bundled within the jar. I use the following code:
FileSystemManager fsManager = new StandardFileSystemManager();
((StandardFileSystemManager) fsManager).setConfiguration("file:///d:/tmp/provider.xml");
((StandardFileSystemManager) fsManager).init();
Is this the right way? Thanks Marco > Authentication fails using private key > -- > > Key: VFS-451 > URL: https://issues.apache.org/jira/browse/VFS-451 > Project: Commons VFS > Issue Type: Bug >Affects Versions: 2.0 > Environment: windows as client > linux as ssh server >Reporter: Marco Ronchi >Assignee: Gary Gregory > Attachments: SftpClientFactory.java.patch, SimpleTest.java, > vfs-patch-2.txt > > > Cannot connect to a ssh server with my private key. > I believe the issue is due to a JSCH bug. > Using the following lines: > session.setPassword(pass); > jsch.addIdentity(identityfile); > instead of > jsch.addIdentity(privateKeyFilePath,password); > authentication fails. > > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Created] (JCS-103) MaxMemoryIdleTimeSeconds default value is wrongly documented
Pavel Novak created JCS-103: --- Summary: MaxMemoryIdleTimeSeconds default value is wrongly documented Key: JCS-103 URL: https://issues.apache.org/jira/browse/JCS-103 Project: Commons JCS Issue Type: Bug Components: Documentation Affects Versions: jcs-1.3 Environment: N/A Reporter: Pavel Novak The default value of "-1" for MaxMemoryIdleTimeSeconds is wrong at http://commons.apache.org/jcs/RegionProperties.html Looking at the code, the default is actually 7200, which means 2 hours. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
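For anyone checking the behaviour described above, the attribute is set per region in the cache configuration file (cache.ccf). An illustrative fragment follows; the region name and the other attribute values are made up for the example, and the attribute class assumes the jcs-1.3 `org.apache.jcs` package layout:

```
# Illustrative region configuration. Only MaxMemoryIdleTimeSeconds is the
# attribute under discussion: if left unset, the code default is 7200
# (2 hours), not -1 as the documentation page states.
jcs.region.exampleRegion=
jcs.region.exampleRegion.cacheattributes=org.apache.jcs.engine.CompositeCacheAttributes
jcs.region.exampleRegion.cacheattributes.MaxObjects=1000
jcs.region.exampleRegion.cacheattributes.UseMemoryShrinker=true
jcs.region.exampleRegion.cacheattributes.MaxMemoryIdleTimeSeconds=3600
```

Setting the value explicitly, as above, sidesteps the documented-vs-actual default discrepancy entirely.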
[jira] [Commented] (LOGGING-119) deadlock on re-registration of logger
[ https://issues.apache.org/jira/browse/LOGGING-119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553652#comment-13553652 ] Thomas Neidhart commented on LOGGING-119: - Looking at the patch, the original deadlock problem should be fixed by it, but there are other flaws in WeakHashtable that should be addressed: * put and remove have the following code snippet:
{noformat}
// for performance reasons, only purge every
// MAX_CHANGES_BEFORE_PURGE times
if (changeCount++ > MAX_CHANGES_BEFORE_PURGE) {
    purge();
    changeCount = 0;
}
// do a partial purge more often
else if (changeCount % PARTIAL_PURGE_COUNT == 0) {
    purgeOne();
}
{noformat}
which is not synchronized; thus the changeCount check may succeed for two concurrent threads at the same time, calling purge / purgeOne multiple times. * the underlying Hashtable is synchronized, while none of the overridden methods are > deadlock on re-registration of logger > - > > Key: LOGGING-119 > URL: https://issues.apache.org/jira/browse/LOGGING-119 > Project: Commons Logging > Issue Type: Bug >Affects Versions: 1.1.1 > Environment: Java 1.5, Windows >Reporter: Nitzan Niv > Attachments: BugDeadlock.java, Patch-WeakHashtable-1.1.1.txt > > > Reached a deadlock inside common-logging while concurrently re-deploying 2 > WARs. 
> In each WAR there is an attempt to get a logger: > private final Log logger = LogFactory.getLog(ContextLoader.class); > Thread dump: > [deadlocked thread] Thread-96: > - > Thread 'Thread-96' is waiting to acquire lock > 'java.lang.ref.ReferenceQueue@5266e0' that is held by thread 'Thread-102' > Stack trace: > > > org.apache.commons.logging.impl.WeakHashtable.purge(WeakHashtable.java:323) > > org.apache.commons.logging.impl.WeakHashtable.rehash(WeakHashtable.java:312) > java.util.Hashtable.put(Hashtable.java:414) > > org.apache.commons.logging.impl.WeakHashtable.put(WeakHashtable.java:242) > > org.apache.commons.logging.LogFactory.cacheFactory(LogFactory.java:1004) > org.apache.commons.logging.LogFactory.getFactory(LogFactory.java:657) > org.apache.commons.logging.LogFactory.getLog(LogFactory.java:685) > > org.springframework.web.context.ContextLoader.(ContextLoader.java:145) > > [deadlocked thread] Thread-102: > -- > Thread 'Thread-102' is waiting to acquire lock > 'org.apache.commons.logging.impl. > WeakHashtable@1e02138' that is held by thread 'Thread-96' > Stack trace: > > java.util.Hashtable.remove(Hashtable.java:437) > > org.apache.commons.logging.impl.WeakHashtable.purgeOne(WeakHashtable.java:338) > > org.apache.commons.logging.impl.WeakHashtable.put(WeakHashtable.java:238) > > org.apache.commons.logging.LogFactory.cacheFactory(LogFactory.java:1004) > org.apache.commons.logging.LogFactory.getFactory(LogFactory.java:657) > org.apache.commons.logging.LogFactory.getLog(LogFactory.java:685) > > org.springframework.web.context.ContextLoader.(ContextLoader.java:145) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
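The unsynchronized counter check called out in the comment above can be sketched in isolation. A minimal hypothetical fix (not the attached WeakHashtable patch) is to make the count-and-decide step atomic, so each threshold crossing is observed by exactly one thread:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class PurgeCounter {
    private static final int MAX_CHANGES_BEFORE_PURGE = 100;
    private static final int PARTIAL_PURGE_COUNT = 10;

    private final AtomicInteger changeCount = new AtomicInteger();
    int fullPurges;    // counters stand in for purge() / purgeOne() calls
    int partialPurges;

    // Called from put/remove. incrementAndGet returns a distinct value to
    // each caller, so unlike the unsynchronized "changeCount++" check,
    // two concurrent threads can no longer both see the same count and
    // both trigger a full purge for the same batch of changes.
    void recordChange() {
        int count = changeCount.incrementAndGet();
        if (count % MAX_CHANGES_BEFORE_PURGE == 0) {
            fullPurges++;        // would call purge()
        } else if (count % PARTIAL_PURGE_COUNT == 0) {
            partialPurges++;     // would call purgeOne()
        }
    }

    public static void main(String[] args) {
        PurgeCounter p = new PurgeCounter();
        for (int i = 0; i < 100; i++) {
            p.recordChange();
        }
        // 100 changes: one full purge (at 100) and nine partial purges
        // (at 10, 20, ..., 90).
        System.out.println(p.fullPurges + " " + p.partialPurges);
    }
}
```

The sketch only addresses the counter race; the ordering of the two locks (the hashtable monitor and the reference queue) that caused the original deadlock is a separate fix.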
[jira] [Commented] (LOGGING-135) Thread-safety improvements
[ https://issues.apache.org/jira/browse/LOGGING-135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553623#comment-13553623 ] Thomas Neidhart commented on LOGGING-135: - I do not think the respective logger will be created multiple times, but rather retrieved from the underlying logging system. In case the underlying system is flawed (e.g. not returning the same instance every time it is called with the same name), this may create problems in the commons-logging wrapper, but otherwise the synchronisation may be a bottleneck. This only relates to the logger fields; SimpleLog may be a different case. > Thread-safety improvements > -- > > Key: LOGGING-135 > URL: https://issues.apache.org/jira/browse/LOGGING-135 > Project: Commons Logging > Issue Type: Bug >Reporter: Sebb > > The LogKitLogger.logger field is not final or volatile so changes are not > guaranteed to be published. > This includes calls to getLogger(), so two different threads using the same > instance can theoretically both create the logger. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CLI-225) Special properties option (-Dproperty=value) handled improperly
[ https://issues.apache.org/jira/browse/CLI-225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553604#comment-13553604 ] Thomas Neidhart commented on CLI-225: - The only difference I see is that properties without a value are also accepted as arguments:
{noformat}
-Dproperty
{noformat}
which could lead to problems when handling the parsed option values, as the number of returned values will not be a multiple of 2. > Special properties option (-Dproperty=value) handled improperly > --- > > Key: CLI-225 > URL: https://issues.apache.org/jira/browse/CLI-225 > Project: Commons CLI > Issue Type: Bug > Components: CLI-1.x >Affects Versions: 1.2 >Reporter: Alexey Tsvetkov > > In CLI 1.2 the special properties option (-Dproperty=value) is handled > improperly. In GnuParser.java, the code starting at line 80 is as follows:
> {code}
> if (opt.indexOf('=') != -1 &&
>     options.hasOption(opt.substring(0, opt.indexOf('='))))
> {
>     // the format is --foo=value or -foo=value
>     tokens.add(arg.substring(0, arg.indexOf('='))); // --foo
>     tokens.add(arg.substring(arg.indexOf('=') + 1)); // value
> }
> else if (options.hasOption(arg.substring(0, 2)))
> {
>     // the format is a special properties option (-Dproperty=value)
>     tokens.add(arg.substring(0, 2)); // -D
>     tokens.add(arg.substring(2)); // property=value
> }
> {code}
> but it should be:
> {code}
> if (opt.indexOf('=') != -1)
> {
>     if (options.hasOption(opt.substring(0, opt.indexOf('='))))
>     {
>         // the format is --foo=value or -foo=value
>         tokens.add(arg.substring(0, arg.indexOf('='))); // --foo
>         tokens.add(arg.substring(arg.indexOf('=') + 1)); // value
>     }
>     else if (options.hasOption(arg.substring(0, 2)))
>     {
>         // the format is a special properties option (-Dproperty=value)
>         tokens.add(arg.substring(0, 2)); // -D
>         tokens.add(arg.substring(2)); // property=value
>     }
> }
> {code}
-- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
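The proposed fix amounts to: only attempt either split when the argument contains '=', and prefer the --foo=value form only when the left-hand side is a registered option; otherwise fall back to the -D split. A standalone sketch of that tokenisation (hypothetical helper class, not the actual GnuParser code; known options are passed in dash-less, e.g. "D" for -D):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class SpecialPropertySplitter {

    // Splits one argument the way the proposed patch does.
    static List<String> split(String arg, Set<String> knownOpts) {
        List<String> tokens = new ArrayList<>();
        String opt = arg.replaceFirst("^-+", ""); // strip leading dashes
        int eq = opt.indexOf('=');
        if (eq != -1 && knownOpts.contains(opt.substring(0, eq))) {
            // the format is --foo=value or -foo=value
            tokens.add(arg.substring(0, arg.indexOf('=')));  // --foo
            tokens.add(arg.substring(arg.indexOf('=') + 1)); // value
        } else if (eq != -1 && knownOpts.contains(opt.substring(0, 1))) {
            // special properties option: -Dproperty=value
            tokens.add(arg.substring(0, 2)); // -D
            tokens.add(arg.substring(2));    // property=value
        } else {
            // no '=' present (e.g. a bare -Dproperty): pass through
            tokens.add(arg);
        }
        return tokens;
    }

    public static void main(String[] args) {
        Set<String> opts = Set.of("foo", "D");
        System.out.println(split("--foo=bar", opts));   // [--foo, bar]
        System.out.println(split("-Dkey=value", opts)); // [-D, key=value]
    }
}
```

Note the final branch also shows the remaining gap Thomas mentions: a value-less -Dproperty passes through as a single token, so the token count is no longer guaranteed to be even.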