[jira] [Updated] (COLLECTIONS-855) Update the EnhancedDoubleHasher to correct the cube component of the hash

2024-06-07 Thread Claude Warren (Jira)


 [ 
https://issues.apache.org/jira/browse/COLLECTIONS-855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Claude Warren updated COLLECTIONS-855:
--
Description: 
The EnhancedDoubleHasher currently computes the hash with the cube component 
lagging by 1:
{noformat}
hash[i] = ( h1(x) - i*h2(x) - ((i-1)^3 - (i-1))/6 ) wrapped in [0, 
bits){noformat}
Correct this to the intended:
{noformat}
hash[i] = ( h1(x) - i*h2(x) - (i*i*i - i)/6 ) wrapped in [0, bits){noformat}
This is a simple change in the current controlling loop from:
{code:java}
for (int i = 0; i < k; i++) { {code}
to:
{code:java}
for (int i = 1; i <= k; i++) { {code}
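
For reference, a minimal standalone sketch of the corrected iteration (not the 
EnhancedDoubleHasher source; the method name and the h1/h2/k/bits inputs are 
just placeholders here):
{code:java}
// Sketch only: evaluates hash[i] = (h1 - i*h2 - (i*i*i - i)/6) wrapped into [0, bits)
// for i = 1..k. Note (i^3 - i) = (i-1)*i*(i+1) is always divisible by 6.
static long[] enhancedHashes(long h1, long h2, int k, long bits) {
    final long[] result = new long[k];
    for (int i = 1; i <= k; i++) {
        final long tetrahedral = ((long) i * i * i - i) / 6;
        result[i - 1] = Math.floorMod(h1 - i * h2 - tetrahedral, bits);
    }
    return result;
}
{code}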
 

Issue notified by Juan Manuel Gimeno Illa on the Commons dev mailing list (see 
[https://lists.apache.org/thread/wjmwxzozrtf41ko9r0g7pzrrg11o923o]).
Pull request: https://github.com/apache/commons-collections/pull/501

  was:
The EnhancedDoubleHasher currently computes the hash with the cube component 
lagging by 1:
{noformat}
hash[i] = ( h1(x) - i*h2(x) - ((i-1)^3 - (i-1))/6 ) wrapped in [0, 
bits){noformat}
Correct this to the intended:
{noformat}
hash[i] = ( h1(x) - i*h2(x) - (i*i*i - i)/6 ) wrapped in [0, bits){noformat}
This is a simple change in the current controlling loop from:
{code:java}
for (int i = 0; i < k; i++) { {code}
to:
{code:java}
for (int i = 1; i <= k; i++) { {code}
 

Issue notified by Juan Manuel Gimeno Illa on the Commons dev mailing list (see 
[https://lists.apache.org/thread/wjmwxzozrtf41ko9r0g7pzrrg11o923o]).


> Update the EnhancedDoubleHasher to correct the cube component of the hash
> -
>
> Key: COLLECTIONS-855
> URL: https://issues.apache.org/jira/browse/COLLECTIONS-855
> Project: Commons Collections
>  Issue Type: Bug
>  Components: Bloomfilter
>Affects Versions: 4.5.0-M1
>Reporter: Alex Herbert
>Assignee: Claude Warren
>Priority: Blocker
>
> The EnhancedDoubleHasher currently computes the hash with the cube component 
> lagging by 1:
> {noformat}
> hash[i] = ( h1(x) - i*h2(x) - ((i-1)^3 - (i-1))/6 ) wrapped in [0, 
> bits){noformat}
> Correct this to the intended:
> {noformat}
> hash[i] = ( h1(x) - i*h2(x) - (i*i*i - i)/6 ) wrapped in [0, bits){noformat}
> This is a simple change in the current controlling loop from:
> {code:java}
> for (int i = 0; i < k; i++) { {code}
> to:
> {code:java}
> for (int i = 1; i <= k; i++) { {code}
>  
> Issue notified by Juan Manuel Gimeno Illa on the Commons dev mailing list 
> (see [https://lists.apache.org/thread/wjmwxzozrtf41ko9r0g7pzrrg11o923o]).
> Pull request: https://github.com/apache/commons-collections/pull/501





[jira] [Updated] (COLLECTIONS-855) Update the EnhancedDoubleHasher to correct the cube component of the hash

2024-06-07 Thread Claude Warren (Jira)


 [ 
https://issues.apache.org/jira/browse/COLLECTIONS-855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Claude Warren updated COLLECTIONS-855:
--
Assignee: Claude Warren

> Update the EnhancedDoubleHasher to correct the cube component of the hash
> -
>
> Key: COLLECTIONS-855
> URL: https://issues.apache.org/jira/browse/COLLECTIONS-855
> Project: Commons Collections
>  Issue Type: Bug
>  Components: Bloomfilter
>Affects Versions: 4.5.0-M1
>Reporter: Alex Herbert
>Assignee: Claude Warren
>Priority: Blocker
>
> The EnhancedDoubleHasher currently computes the hash with the cube component 
> lagging by 1:
> {noformat}
> hash[i] = ( h1(x) - i*h2(x) - ((i-1)^3 - (i-1))/6 ) wrapped in [0, 
> bits){noformat}
> Correct this to the intended:
> {noformat}
> hash[i] = ( h1(x) - i*h2(x) - (i*i*i - i)/6 ) wrapped in [0, bits){noformat}
> This is a simple change in the current controlling loop from:
> {code:java}
> for (int i = 0; i < k; i++) { {code}
> to:
> {code:java}
> for (int i = 1; i <= k; i++) { {code}
>  
> Issue notified by Juan Manuel Gimeno Illa on the Commons dev mailing list 
> (see [https://lists.apache.org/thread/wjmwxzozrtf41ko9r0g7pzrrg11o923o]).





[jira] [Commented] (IO-856) ListFiles should not fail on vanishing files

2024-06-06 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/IO-856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852969#comment-17852969
 ] 

Gary D. Gregory commented on IO-856:


Hello [~thomas.hart...@gmail.com] 

Thank you for your update.

I added a unit test to reproduce this kind of issue: A file deletion between 
creating the stream and collecting it. It passes on Windows, Ubuntu, and macOS, 
using Java 11, 17, and 21 (see https://github.com/apache/commons-io/actions).

The new test is here: 
[https://github.com/apache/commons-io/blob/97c4803e1c8f100756d24eff4fdfd631a08534dc/src/test/java/org/apache/commons/io/FileUtilsListFilesTest.java#L209-L223]

The best path forward would be for you to create a PR with a failing unit test 
we can debug.
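
For illustration, such a failing test could be skeletoned roughly as follows. 
This is a hypothetical sketch, not the existing FileUtilsListFilesTest: it 
assumes JUnit 5's @TempDir and uses FileUtils.writeStringToFile purely for 
brevity, and the churn loop makes the failure probabilistic rather than 
guaranteed.
{code:java}
import java.io.File;
import java.nio.charset.StandardCharsets;
import java.util.UUID;
import java.util.concurrent.atomic.AtomicBoolean;

import org.apache.commons.io.FileUtils;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.io.TempDir;

public class ListFilesVanishingFilesTest {

    @Test
    void testListFilesWhileFilesAreDeleted(@TempDir final File dir) throws Exception {
        final AtomicBoolean stop = new AtomicBoolean();
        // Churn thread: create and immediately delete files under the listed directory.
        final Thread churn = new Thread(() -> {
            while (!stop.get()) {
                try {
                    final File file = new File(dir, UUID.randomUUID() + ".png");
                    FileUtils.writeStringToFile(file, "TEST", StandardCharsets.UTF_8);
                    file.delete();
                } catch (final Exception ignored) {
                    // keep churning
                }
            }
        });
        churn.start();
        try {
            for (int i = 0; i < 10_000; i++) {
                // Expectation under test: listing never fails just because a file vanished.
                FileUtils.listFiles(dir, new String[] {"png"}, true);
            }
        } finally {
            stop.set(true);
            churn.join();
        }
    }
}
{code}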

You could try updating to the current version of Java 17 and see if that helps.

TY!

> ListFiles should not fail on vanishing files
> 
>
> Key: IO-856
> URL: https://issues.apache.org/jira/browse/IO-856
> Project: Commons IO
>  Issue Type: Bug
>  Components: Utilities
>Affects Versions: 2.16.1
>Reporter: Thomas Hartwig
>Assignee: Gary D. Gregory
>Priority: Major
>
> ListFiles crashes when files vanish while it is listing. ListFiles should 
> simply list; it is the application's job to handle files that no longer exist:
> 
> java.io.UncheckedIOException: java.nio.file.NoSuchFileException: 
> /tmp/20b50a15-b84e-4a9a-953e-223452dac994/a914fa55-50f7-4de0-8ca6-1fd84f10b29a.png
>     at 
> java.base/java.nio.file.FileTreeIterator.fetchNextIfNeeded(FileTreeIterator.java:87)
>     at 
> java.base/java.nio.file.FileTreeIterator.hasNext(FileTreeIterator.java:103)
>     at java.base/java.util.Iterator.forEachRemaining(Iterator.java:132)
>     at 
> java.base/java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1845)
>     at 
> java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
>     at 
> java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
>     at 
> java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921)
>     at 
> java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
>     at 
> java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682)
>     at 
> org.apache.commons.io@2.16.1/org.apache.commons.io.FileUtils.toList(FileUtils.java:3025)
>     at 
> org.apache.commons.io@2.16.1/org.apache.commons.io.FileUtils.listFiles(FileUtils.java:2314)
>     at com.itth.test/test.ApacheBug.lambda$main$1(ApacheBug.java:39)
>     at java.base/java.lang.Thread.run(Thread.java:842)
> Caused by: java.nio.file.NoSuchFileException: 
> /tmp/20b50a15-b84e-4a9a-953e-223452dac994/a914fa55-50f7-4de0-8ca6-1fd84f10b29a.png
>     at 
> java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)
>     at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106)
>     at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
>     at 
> java.base/sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
>     at 
> java.base/sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:148)
>     at 
> java.base/sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
>     at java.base/java.nio.file.Files.readAttributes(Files.java:1851)
>     at 
> java.base/java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:226)
>     at java.base/java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:277)
>     at java.base/java.nio.file.FileTreeWalker.next(FileTreeWalker.java:374)
>     at 
> java.base/java.nio.file.FileTreeIterator.fetchNextIfNeeded(FileTreeIterator.java:83)
>     ... 12 more
> 
> Use this to reproduce:
> 
> package test;
> import org.apache.commons.io.FileUtils;
> import java.io.BufferedOutputStream;
> import java.io.File;
> import java.io.FileOutputStream;
> import java.io.IOException;
> import java.nio.charset.StandardCharsets;
> import java.nio.file.Path;
> import java.util.Collection;
> import java.util.UUID;
> public class ApacheBug {
> public static void main(String[] args) {
> // create random directory in tmp, create the directory if it does not exist
> final File dir = FileUtils.getTempDirectory();
> if (!dir.exists()) {
> if (!dir.mkdirs()) {
> throw new RuntimeException("could not create image file path: " + 
> dir.getAbsolutePath());
> }
> }
> // create ra

[jira] [Commented] (IO-856) ListFiles should not fail on vanishing files

2024-06-06 Thread Thomas Hartwig (Jira)


[ 
https://issues.apache.org/jira/browse/IO-856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852938#comment-17852938
 ] 

Thomas Hartwig commented on IO-856:
---

Distributor ID: Ubuntu 
Description:    Ubuntu 22.04.4 LTS 
Release:    22.04 
Codename:   jammy


java.vm.vendor: Oracle Corporation
sun.arch.data.model: 64
java.vendor.url: https://java.oracle.com/
user.timezone: Europe/Berlin
java.vm.specification.version: 17
os.name: Linux
sun.java.launcher: SUN_STANDARD
user.country: US
sun.boot.library.path: /opt/graalvm-jdk-17.0.10+11.1/lib
jdk.debug: release
sun.cpu.endian: little
user.home: /home/th
user.language: en
java.specification.vendor: Oracle Corporation
java.vm.specification.vendor: Oracle Corporation
java.specification.name: Java Platform API Specification
jdk.module.main.class: test.ApacheBug
jdk.module.main: com.itth.test
sun.management.compiler: HotSpot 64-Bit Tiered Compilers
java.runtime.version: 17.0.10+11-LTS-jvmci-23.0-b27
user.name: th
path.separator: :
os.version: 5.15.0-107-lowlatency
java.runtime.name: Java(TM) SE Runtime Environment
file.encoding: UTF-8
java.vm.name: Java HotSpot(TM) 64-Bit Server VM
java.vendor.version: Oracle GraalVM 17.0.10+11.1
java.vendor.url.bug: https://bugreport.java.com/bugreport/
java.io.tmpdir: /tmp
java.version: 17.0.10
user.dir: /home/th/dev/ai/idea-projects/uvis
os.arch: amd64
java.vm.specification.name: Java Virtual Machine Specification
native.encoding: UTF-8
java.library.path: /usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib
java.vm.info: mixed mode, sharing
java.vendor: Oracle Corporation
java.vm.version: 17.0.10+11-LTS-jvmci-23.0-b27
sun.io.unicode.encoding: UnicodeLittle
java.class.version: 61.0

> ListFiles should not fail on vanishing files
> 
>
> Key: IO-856
> URL: https://issues.apache.org/jira/browse/IO-856
> Project: Commons IO
>  Issue Type: Bug
>  Components: Utilities
>Affects Versions: 2.16.1
>Reporter: Thomas Hartwig
>Assignee: Gary D. Gregory
>Priority: Major
>
> ListFiles crashes when files vanish while it is listing. ListFiles should 
> simply list; it is the application's job to handle files that no longer exist:
> 
> java.io.UncheckedIOException: java.nio.file.NoSuchFileException: 
> /tmp/20b50a15-b84e-4a9a-953e-223452dac994/a914fa55-50f7-4de0-8ca6-1fd84f10b29a.png
>     at 
> java.base/java.nio.file.FileTreeIterator.fetchNextIfNeeded(FileTreeIterator.java:87)
>     at 
> java.base/java.nio.file.FileTreeIterator.hasNext(FileTreeIterator.java:103)
>     at java.base/java.util.Iterator.forEachRemaining(Iterator.java:132)
>     at 
> java.base/java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1845)
>     at 
> java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
>     at 
> java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
>     at 
> java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921)
>     at 
> java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
>     at 
> java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682)
>     at 
> org.apache.commons.io@2.16.1/org.apache.commons.io.FileUtils.toList(FileUtils.java:3025)
>     at 
> org.apache.commons.io@2.16.1/org.apache.commons.io.FileUtils.listFiles(FileUtils.java:2314)
>     at com.itth.test/test.ApacheBug.lambda$main$1(ApacheBug.java:39)
>     at java.base/java.lang.Thread.run(Thread.java:842)
> Caused by: java.nio.file.NoSuchFileException: 
> /tmp/20b50a15-b84e-4a9a-953e-223452dac994/a914fa55-50f7-4de0-8ca6-1fd84f10b29a.png
>     at 
> java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)
>     at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106)
>     at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
>     at 
> java.base/sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
>     at 
> java.base/sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:148)
>     at 
> java.base/sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
>     at java.base/java.nio.file.Files.readAttributes(Files.java:1851)
>     at 
> java.base/java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:226)
>     at java.base/java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:277)
>     at java.base/java.nio.file.FileTreeWalker.next(FileTreeWalker.java:374)
>     at 
> java.base/java.nio.file.FileTreeIter

[jira] [Commented] (IO-856) ListFiles should not fail on vanishing files

2024-06-06 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/IO-856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852930#comment-17852930
 ] 

Gary D. Gregory commented on IO-856:


Hello [~thomas.hart...@gmail.com] 

Thank you for your report.

Please specify:
 * What OS?
 * What Java version?

> ListFiles should not fail on vanishing files
> 
>
> Key: IO-856
> URL: https://issues.apache.org/jira/browse/IO-856
> Project: Commons IO
>  Issue Type: Bug
>  Components: Utilities
>Affects Versions: 2.16.1
>Reporter: Thomas Hartwig
>Assignee: Gary D. Gregory
>Priority: Major
>
> ListFiles crashes when files vanish while it is listing. ListFiles should 
> simply list; it is the application's job to handle files that no longer exist:
> 
> java.io.UncheckedIOException: java.nio.file.NoSuchFileException: 
> /tmp/20b50a15-b84e-4a9a-953e-223452dac994/a914fa55-50f7-4de0-8ca6-1fd84f10b29a.png
>     at 
> java.base/java.nio.file.FileTreeIterator.fetchNextIfNeeded(FileTreeIterator.java:87)
>     at 
> java.base/java.nio.file.FileTreeIterator.hasNext(FileTreeIterator.java:103)
>     at java.base/java.util.Iterator.forEachRemaining(Iterator.java:132)
>     at 
> java.base/java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1845)
>     at 
> java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
>     at 
> java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
>     at 
> java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921)
>     at 
> java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
>     at 
> java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682)
>     at 
> org.apache.commons.io@2.16.1/org.apache.commons.io.FileUtils.toList(FileUtils.java:3025)
>     at 
> org.apache.commons.io@2.16.1/org.apache.commons.io.FileUtils.listFiles(FileUtils.java:2314)
>     at com.itth.test/test.ApacheBug.lambda$main$1(ApacheBug.java:39)
>     at java.base/java.lang.Thread.run(Thread.java:842)
> Caused by: java.nio.file.NoSuchFileException: 
> /tmp/20b50a15-b84e-4a9a-953e-223452dac994/a914fa55-50f7-4de0-8ca6-1fd84f10b29a.png
>     at 
> java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)
>     at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106)
>     at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
>     at 
> java.base/sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
>     at 
> java.base/sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:148)
>     at 
> java.base/sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
>     at java.base/java.nio.file.Files.readAttributes(Files.java:1851)
>     at 
> java.base/java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:226)
>     at java.base/java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:277)
>     at java.base/java.nio.file.FileTreeWalker.next(FileTreeWalker.java:374)
>     at 
> java.base/java.nio.file.FileTreeIterator.fetchNextIfNeeded(FileTreeIterator.java:83)
>     ... 12 more
> 
> Use this to reproduce:
> 
> package test;
> import org.apache.commons.io.FileUtils;
> import java.io.BufferedOutputStream;
> import java.io.File;
> import java.io.FileOutputStream;
> import java.io.IOException;
> import java.nio.charset.StandardCharsets;
> import java.nio.file.Path;
> import java.util.Collection;
> import java.util.UUID;
> public class ApacheBug {
> public static void main(String[] args) {
> // create random directory in tmp, create the directory if it does not exist
> final File dir = FileUtils.getTempDirectory();
> if (!dir.exists()) {
> if (!dir.mkdirs()) {
> throw new RuntimeException("could not create image file path: " + 
> dir.getAbsolutePath());
> }
> }
> // create random file in the directory
> new Thread(() -> {
> try {
> while (true) {
> final File file = Path.of(dir.getAbsolutePath(), UUID.randomUUID().toString() 
> + ".png").toFile();
> new BufferedOutputStream(new 
> FileOutputStream(file)).write("TEST".getBytes(StandardCharsets.UTF_8));
> file.delete();
> }
> } catch (IOException e) {
> e.printStackTrace();
> }
> }).start();
> new Thread(() -> {
> try {
> while (true) {
> final Collection files = FileUtils.listFiles(dir, new String[]{"png"}, 
> true);
> System.out.println(files.size());
> }
> } catch (Exception e) {
> e.printStackTrace();
> }
> }).start();
> try {
> Thread.sleep(1);
> } catch (InterruptedException e) {
> Thread.currentThread().interrupt();
> }
> }
> }
> 
>  





[jira] [Comment Edited] (IO-856) ListFiles should not fail on vanishing files

2024-06-06 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/IO-856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852930#comment-17852930
 ] 

Gary D. Gregory edited comment on IO-856 at 6/6/24 7:56 PM:


Hello [~thomas.hart...@gmail.com] 

Thank you for your report.

Please specify:
 * What OS? Some UNIX variant it seems.
 * What Java version?


was (Author: garydgregory):
Hello [~thomas.hart...@gmail.com] 

Thank you for your report.

Please specify:
 * What OS?
 * What Java version?

> ListFiles should not fail on vanishing files
> 
>
> Key: IO-856
> URL: https://issues.apache.org/jira/browse/IO-856
> Project: Commons IO
>  Issue Type: Bug
>  Components: Utilities
>Affects Versions: 2.16.1
>Reporter: Thomas Hartwig
>Assignee: Gary D. Gregory
>Priority: Major
>
> ListFiles crashes when files vanish while it is listing. ListFiles should 
> simply list; it is the application's job to handle files that no longer exist:
> 
> java.io.UncheckedIOException: java.nio.file.NoSuchFileException: 
> /tmp/20b50a15-b84e-4a9a-953e-223452dac994/a914fa55-50f7-4de0-8ca6-1fd84f10b29a.png
>     at 
> java.base/java.nio.file.FileTreeIterator.fetchNextIfNeeded(FileTreeIterator.java:87)
>     at 
> java.base/java.nio.file.FileTreeIterator.hasNext(FileTreeIterator.java:103)
>     at java.base/java.util.Iterator.forEachRemaining(Iterator.java:132)
>     at 
> java.base/java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1845)
>     at 
> java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
>     at 
> java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
>     at 
> java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921)
>     at 
> java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
>     at 
> java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682)
>     at 
> org.apache.commons.io@2.16.1/org.apache.commons.io.FileUtils.toList(FileUtils.java:3025)
>     at 
> org.apache.commons.io@2.16.1/org.apache.commons.io.FileUtils.listFiles(FileUtils.java:2314)
>     at com.itth.test/test.ApacheBug.lambda$main$1(ApacheBug.java:39)
>     at java.base/java.lang.Thread.run(Thread.java:842)
> Caused by: java.nio.file.NoSuchFileException: 
> /tmp/20b50a15-b84e-4a9a-953e-223452dac994/a914fa55-50f7-4de0-8ca6-1fd84f10b29a.png
>     at 
> java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)
>     at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106)
>     at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
>     at 
> java.base/sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
>     at 
> java.base/sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:148)
>     at 
> java.base/sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
>     at java.base/java.nio.file.Files.readAttributes(Files.java:1851)
>     at 
> java.base/java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:226)
>     at java.base/java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:277)
>     at java.base/java.nio.file.FileTreeWalker.next(FileTreeWalker.java:374)
>     at 
> java.base/java.nio.file.FileTreeIterator.fetchNextIfNeeded(FileTreeIterator.java:83)
>     ... 12 more
> 
> Use this to reproduce:
> 
> package test;
> import org.apache.commons.io.FileUtils;
> import java.io.BufferedOutputStream;
> import java.io.File;
> import java.io.FileOutputStream;
> import java.io.IOException;
> import java.nio.charset.StandardCharsets;
> import java.nio.file.Path;
> import java.util.Collection;
> import java.util.UUID;
> public class ApacheBug {
> public static void main(String[] args) {
> // create random directory in tmp, create the directory if it does not exist
> final File dir = FileUtils.getTempDirectory();
> if (!dir.exists()) {
> if (!dir.mkdirs()) {
> throw new RuntimeException("could not create image file path: " + 
> dir.getAbsolutePath());
> }
> }
> // create random file in the directory
> new Thread(() -> {
> try {
> while (true) {
> final File file = Path.of(dir.getAbsolutePath(), UUID.randomUUID().toString() 
> + ".png").toFile();
> new BufferedOutputStream(new 
> FileOutputStream(file)).write("TEST".getBytes(StandardCharsets.UTF_8));
> file.

[jira] [Updated] (IO-856) ListFiles should not fail on vanishing files

2024-06-06 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/IO-856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory updated IO-856:
---
Assignee: Gary D. Gregory

> ListFiles should not fail on vanishing files
> 
>
> Key: IO-856
> URL: https://issues.apache.org/jira/browse/IO-856
> Project: Commons IO
>  Issue Type: Bug
>  Components: Utilities
>Affects Versions: 2.16.1
>Reporter: Thomas Hartwig
>Assignee: Gary D. Gregory
>Priority: Major
>
> ListFiles crashes when files vanish while it is listing. ListFiles should 
> simply list; it is the application's job to handle files that no longer exist:
> 
> java.io.UncheckedIOException: java.nio.file.NoSuchFileException: 
> /tmp/20b50a15-b84e-4a9a-953e-223452dac994/a914fa55-50f7-4de0-8ca6-1fd84f10b29a.png
>     at 
> java.base/java.nio.file.FileTreeIterator.fetchNextIfNeeded(FileTreeIterator.java:87)
>     at 
> java.base/java.nio.file.FileTreeIterator.hasNext(FileTreeIterator.java:103)
>     at java.base/java.util.Iterator.forEachRemaining(Iterator.java:132)
>     at 
> java.base/java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1845)
>     at 
> java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
>     at 
> java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
>     at 
> java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921)
>     at 
> java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
>     at 
> java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682)
>     at 
> org.apache.commons.io@2.16.1/org.apache.commons.io.FileUtils.toList(FileUtils.java:3025)
>     at 
> org.apache.commons.io@2.16.1/org.apache.commons.io.FileUtils.listFiles(FileUtils.java:2314)
>     at com.itth.test/test.ApacheBug.lambda$main$1(ApacheBug.java:39)
>     at java.base/java.lang.Thread.run(Thread.java:842)
> Caused by: java.nio.file.NoSuchFileException: 
> /tmp/20b50a15-b84e-4a9a-953e-223452dac994/a914fa55-50f7-4de0-8ca6-1fd84f10b29a.png
>     at 
> java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)
>     at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106)
>     at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
>     at 
> java.base/sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
>     at 
> java.base/sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:148)
>     at 
> java.base/sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
>     at java.base/java.nio.file.Files.readAttributes(Files.java:1851)
>     at 
> java.base/java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:226)
>     at java.base/java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:277)
>     at java.base/java.nio.file.FileTreeWalker.next(FileTreeWalker.java:374)
>     at 
> java.base/java.nio.file.FileTreeIterator.fetchNextIfNeeded(FileTreeIterator.java:83)
>     ... 12 more
> 
> Use this to reproduce:
> 
> package test;
> import org.apache.commons.io.FileUtils;
> import java.io.BufferedOutputStream;
> import java.io.File;
> import java.io.FileOutputStream;
> import java.io.IOException;
> import java.nio.charset.StandardCharsets;
> import java.nio.file.Path;
> import java.util.Collection;
> import java.util.UUID;
> public class ApacheBug {
> public static void main(String[] args) {
> // create random directory in tmp, create the directory if it does not exist
> final File dir = FileUtils.getTempDirectory();
> if (!dir.exists()) {
> if (!dir.mkdirs()) {
> throw new RuntimeException("could not create image file path: " + 
> dir.getAbsolutePath());
> }
> }
> // create random file in the directory
> new Thread(() -> {
> try {
> while (true) {
> final File file = Path.of(dir.getAbsolutePath(), UUID.randomUUID().toString() 
> + ".png").toFile();
> new BufferedOutputStream(new 
> FileOutputStream(file)).write("TEST".getBytes(StandardCharsets.UTF_8));
> file.delete();
> }
> } catch (IOException e) {
> e.printStackTrace();
> }
> }).start();
> new Thread(() -> {
> try {
> while (true) {
> final Collection files = FileUtils.listFiles(dir, new String[]{"png"}, 
> true);
> System.out.println(files.size());
> }
> } catch (Exception e) {
> e.printStackTrace();
> }
> }).start();
> try {
> Thread.sleep(1);
> } catch (InterruptedException e) {
> Thread.currentThread().interrupt();
> }
> }
> }
> 
>  





[jira] [Created] (IO-856) ListFiles should not fail on vanishing files

2024-06-06 Thread Thomas Hartwig (Jira)
Thomas Hartwig created IO-856:
-

 Summary: ListFiles should not fail on vanishing files
 Key: IO-856
 URL: https://issues.apache.org/jira/browse/IO-856
 Project: Commons IO
  Issue Type: Bug
  Components: Utilities
Affects Versions: 2.16.1
Reporter: Thomas Hartwig


ListFiles crashes when files vanish while it is listing. ListFiles should 
simply list; it is the application's job to handle files that no longer exist:


java.io.UncheckedIOException: java.nio.file.NoSuchFileException: 
/tmp/20b50a15-b84e-4a9a-953e-223452dac994/a914fa55-50f7-4de0-8ca6-1fd84f10b29a.png
    at 
java.base/java.nio.file.FileTreeIterator.fetchNextIfNeeded(FileTreeIterator.java:87)
    at 
java.base/java.nio.file.FileTreeIterator.hasNext(FileTreeIterator.java:103)
    at java.base/java.util.Iterator.forEachRemaining(Iterator.java:132)
    at 
java.base/java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1845)
    at 
java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
    at 
java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
    at 
java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921)
    at 
java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
    at 
java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682)
    at 
org.apache.commons.io@2.16.1/org.apache.commons.io.FileUtils.toList(FileUtils.java:3025)
    at 
org.apache.commons.io@2.16.1/org.apache.commons.io.FileUtils.listFiles(FileUtils.java:2314)
    at com.itth.test/test.ApacheBug.lambda$main$1(ApacheBug.java:39)
    at java.base/java.lang.Thread.run(Thread.java:842)
Caused by: java.nio.file.NoSuchFileException: 
/tmp/20b50a15-b84e-4a9a-953e-223452dac994/a914fa55-50f7-4de0-8ca6-1fd84f10b29a.png
    at 
java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)
    at 
java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106)
    at 
java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
    at 
java.base/sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
    at 
java.base/sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:148)
    at 
java.base/sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
    at java.base/java.nio.file.Files.readAttributes(Files.java:1851)
    at 
java.base/java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:226)
    at java.base/java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:277)
    at java.base/java.nio.file.FileTreeWalker.next(FileTreeWalker.java:374)
    at 
java.base/java.nio.file.FileTreeIterator.fetchNextIfNeeded(FileTreeIterator.java:83)
    ... 12 more



Use this to reproduce:


package test;

import org.apache.commons.io.FileUtils;

import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.util.Collection;
import java.util.UUID;

public class ApacheBug {
    public static void main(String[] args) {
        // Use the system temp directory; create it if it does not exist.
        final File dir = FileUtils.getTempDirectory();
        if (!dir.exists()) {
            if (!dir.mkdirs()) {
                throw new RuntimeException("could not create image file path: "
                        + dir.getAbsolutePath());
            }
        }

        // Writer thread: endlessly create a random .png file and delete it again.
        new Thread(() -> {
            try {
                while (true) {
                    final File file = Path.of(dir.getAbsolutePath(),
                            UUID.randomUUID().toString() + ".png").toFile();
                    new BufferedOutputStream(new FileOutputStream(file))
                            .write("TEST".getBytes(StandardCharsets.UTF_8));
                    file.delete();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }).start();

        // Lister thread: repeatedly list *.png files recursively; this is where
        // the UncheckedIOException is thrown when a file vanishes mid-walk.
        new Thread(() -> {
            try {
                while (true) {
                    final Collection<File> files =
                            FileUtils.listFiles(dir, new String[] {"png"}, true);
                    System.out.println(files.size());
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }).start();

        try {
            Thread.sleep(1);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
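
In the meantime, a caller-side guard is possible by catching the unchecked 
exception and retrying. This is a sketch only (the class and method names are 
made up, and it merely papers over the race rather than fixing it):
{code:java}
import java.io.File;
import java.io.UncheckedIOException;
import java.nio.file.NoSuchFileException;
import java.util.Collection;

import org.apache.commons.io.FileUtils;

public final class ListFilesGuard {
    /** Retries the listing whenever a file vanished between the walk and the attribute read. */
    public static Collection<File> listPngs(final File dir) {
        while (true) {
            try {
                return FileUtils.listFiles(dir, new String[] {"png"}, true);
            } catch (final UncheckedIOException e) {
                if (!(e.getCause() instanceof NoSuchFileException)) {
                    throw e;
                }
                // A file disappeared mid-walk; list again.
            }
        }
    }
}
{code}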

 





[jira] [Comment Edited] (NUMBERS-206) Selection API

2024-06-06 Thread Alex Herbert (Jira)


[ 
https://issues.apache.org/jira/browse/NUMBERS-206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852711#comment-17852711
 ] 

Alex Herbert edited comment on NUMBERS-206 at 6/6/24 11:10 AM:
---

h2. Saturated Indices

Using the same data as previously (n=50000, BM distribution)
|| ||500||1000||5000||10000||50000||Total||
|ISP|7107387233|7757771325|8899903475|9925539708|10609585958|33690601741|
|SELECT|6308182425|7045016725|8423035575|8798568667|9504493042|30574803392|
|IDP|6023154508|6650004933|8043503983|8634537741|9257381325|29351201166|

Average separation is 100, 50, 10, 5, 1 respectively. The SortJDK method took 
an average of 6338586897 on 6 runs (see previous results). This result shows 
that the selection is a similar speed to a sort when the number of keys is 
500-1000. This many indices is a reasonable upper limit for use in a QQ plot 
of quantiles.

Some of the BM distributions are easy to sort with a merge sort due to 
ascending/descending runs. The data can be made more difficult by changing the 
benchmark to use uniformly distributed random data. The following shows the 
result of n=50000 with 400 samples and 10 repeats of the random indices:
||Method||500||1000||5000||10000||Total||
|ISP|4205753375|4660640933|5587189925|5920384433|20373968667|
|SortJDK|4665368408|4690999142|4729445525|4685222425|18771035500|
|SELECT|3729310475|4143622242|4966502342|5275207142|18114642200|
|IDP|3725283483|4105095059|4974550508|5218439467|18023368517|

This result shows selection is a similar speed to a sort when the average 
separation approaches 10 (k=5000).

Note that the selection routine is not optimised for sorting. There are changes 
that could be made to increase performance on saturated indices but these would 
compromise performance for sparse indices. The current settings ensure that the 
performance is never catastrophically bad when the number of indices saturates 
the range.

 


was (Author: alexherbert):
h2. Saturated Indices

Using the same data as previously (n=5, BM distribution)
||Method||500||1000||5000||1||Total||
|ISP|7107387233|7757771325|8899903475|9925539708|33690601741|
|SELECT|6308182425|7045016725|8423035575|8798568667|30574803392|
|IDP|6023154508|6650004933|8043503983|8634537741|29351201166|

Average separation is 100, 50, 10, 5 respectively. The SortJDK method took an 
average of 6338586897 on 6 runs (see previous results). This result shows that 
the selection is a similar speed to a sort when the number of keys is 500-1000. 
This many indices is a reasonable upper limit for the use in a QQ plot of 
quantiles.

Some of the BM distributions are easy to sort with a merge sort due to 
ascending/descending runs. The data can be made more difficult by changing the 
benchmark to use uniformly distributed random data. The following shows the 
result of n=5 with 400 samples and 10 repeats of the random indices:
||Method||500||1000||5000||1||Total||
|ISP|4205753375|4660640933|5587189925|5920384433|20373968667|
|SortJDK|4665368408|4690999142|4729445525|4685222425|18771035500|
|SELECT|3729310475|4143622242|4966502342|5275207142|18114642200|
|IDP|3725283483|4105095059|4974550508|5218439467|18023368517|

This result shows selection is a similar speed to a sort when the average 
separation approaches 10 (k=5000).

Note that the selection routine is not optimised for sorting. There are changes 
that could be made to increase performance on saturated indices but these would 
compromise performance for sparse indices. The current settings ensure that the 
performance is never catastrophically bad when the number of indices saturates 
the range.

 

> Selection API
> -
>
> Key: NUMBERS-206
> URL: https://issues.apache.org/jira/browse/NUMBERS-206
> Project: Commons Numbers
>  Issue Type: New Feature
>  Components: arrays
>Reporter: Alex Herbert
>Priority: Major
>
> Create a selection API to select the k-th largest element in an array. This 
> places at k the same value that would be at k in a fully sorted array.
> {code:java}
> public final class Selection {
> public static void select(double[] a, int k);
> public static void select(double[] a, int from, int to, int k);
> public static void select(double[] a, int[] k);
> public static void select(double[] a, int from, int to, int[] k);
> // Extend to other primitive data types that are not easily sorted (e.g. 
> long, float, int)
> {code}
> Note: This API will support multiple points (int[] k) for use in quantile 
> estimation of array data by interpolation of neighbouring values (see 
> STATISTICS-85).





[jira] [Comment Edited] (NUMBERS-206) Selection API

2024-06-06 Thread Alex Herbert (Jira)


[ 
https://issues.apache.org/jira/browse/NUMBERS-206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852711#comment-17852711
 ] 

Alex Herbert edited comment on NUMBERS-206 at 6/6/24 10:52 AM:
---

h2. Saturated Indices

Using the same data as previously (n=5, BM distribution)
||Method||500||1000||5000||1||Total||
|ISP|7107387233|7757771325|8899903475|9925539708|33690601741|
|SELECT|6308182425|7045016725|8423035575|8798568667|30574803392|
|IDP|6023154508|6650004933|8043503983|8634537741|29351201166|

Average separation is 100, 50, 10, 5 respectively. The SortJDK method took an 
average of 6338586897 on 6 runs (see previous results). This result shows that 
the selection is a similar speed to a sort when the number of keys is 500-1000. 
This many indices is a reasonable upper limit for the use in a QQ plot of 
quantiles.

Some of the BM distributions are easy to sort with a merge sort due to 
ascending/descending runs. The data can be made more difficult by changing the 
benchmark to use uniformly distributed random data. The following shows the 
result of n=5 with 400 samples and 10 repeats of the random indices:
||Method||500||1000||5000||1||Total||
|ISP|4205753375|4660640933|5587189925|5920384433|20373968667|
|SortJDK|4665368408|4690999142|4729445525|4685222425|18771035500|
|SELECT|3729310475|4143622242|4966502342|5275207142|18114642200|
|IDP|3725283483|4105095059|4974550508|5218439467|18023368517|

This result shows selection is a similar speed to a sort when the average 
separation approaches 10 (k=5000).

Note that the selection routine is not optimised for sorting. There are changes 
that could be made to increase performance on saturated indices but these would 
compromise performance for sparse indices. The current settings ensure that the 
performance is never catastrophically bad when the number of indices saturates 
the range.

 


was (Author: alexherbert):
h2. Saturated Indices

Using the same data as previously (n=5, BM distribution)
||Row Labels||500||1000||5000||1||Total||
|ISP|7107387233|7757771325|8899903475|9925539708|33690601741|
|SELECT|6308182425|7045016725|8423035575|8798568667|30574803392|
|IDP|6023154508|6650004933|8043503983|8634537741|29351201166|

Average separation is 100, 50, 10, 5 respectively. The SortJDK method took an 
average of 6338586897 on 6 runs. This result shows that the selection is a 
similar speed to a sort when the number of keys is 500-1000. This many indices 
is a reasonable upper limit for the use in a QQ plot of quantiles.


 

> Selection API
> -
>
> Key: NUMBERS-206
> URL: https://issues.apache.org/jira/browse/NUMBERS-206
> Project: Commons Numbers
>  Issue Type: New Feature
>  Components: arrays
>Reporter: Alex Herbert
>Priority: Major
>
> Create a selection API to select the k-th largest element in an array. This 
> places at k the same value that would be at k in a fully sorted array.
> {code:java}
> public final class Selection {
> public static void select(double[] a, int k);
> public static void select(double[] a, int from, int to, int k);
> public static void select(double[] a, int[] k);
> public static void select(double[] a, int from, int to, int[] k);
> // Extend to other primitive data types that are not easily sorted (e.g. 
> long, float, int)
> {code}
> Note: This API will support multiple points (int[] k) for use in quantile 
> estimation of array data by interpolation of neighbouring values (see 
> STATISTICS-85).





[jira] [Comment Edited] (NUMBERS-206) Selection API

2024-06-06 Thread Alex Herbert (Jira)


[ 
https://issues.apache.org/jira/browse/NUMBERS-206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852711#comment-17852711
 ] 

Alex Herbert edited comment on NUMBERS-206 at 6/6/24 10:16 AM:
---

h2. Saturated Indices

Using the same data as previously (n=5, BM distribution)
||Row Labels||500||1000||5000||1||Total||
|ISP|7107387233|7757771325|8899903475|9925539708|33690601741|
|SELECT|6308182425|7045016725|8423035575|8798568667|30574803392|
|IDP|6023154508|6650004933|8043503983|8634537741|29351201166|

Average separation is 100, 50, 10, 5 respectively. The SortJDK method took an 
average of 6338586897 on 6 runs. This result shows that the selection is a 
similar speed to a sort when the number of keys is 500-1000. This many indices 
is a reasonable upper limit for the use in a QQ plot of quantiles.


 


was (Author: alexherbert):
h2. Saturated Indices

Using the same data as previously (n=5, BM distribution)
||Row Labels||500||1000||5000||1||Total||
|ISP|7107387233|7757771325|8899903475|9925539708|33690601741|
|SELECT|6308182425|7045016725|8423035575|8798568667|30574803392|
|IDP|6023154508|6650004933|8043503983|8634537741|29351201166|

sedf
 

> Selection API
> -
>
> Key: NUMBERS-206
> URL: https://issues.apache.org/jira/browse/NUMBERS-206
> Project: Commons Numbers
>  Issue Type: New Feature
>  Components: arrays
>Reporter: Alex Herbert
>Priority: Major
>
> Create a selection API to select the k-th largest element in an array. This 
> places at k the same value that would be at k in a fully sorted array.
> {code:java}
> public final class Selection {
> public static void select(double[] a, int k);
> public static void select(double[] a, int from, int to, int k);
> public static void select(double[] a, int[] k);
> public static void select(double[] a, int from, int to, int[] k);
> // Extend to other primitive data types that are not easily sorted (e.g. 
> long, float, int)
> {code}
> Note: This API will support multiple points (int[] k) for use in quantile 
> estimation of array data by interpolation of neighbouring values (see 
> STATISTICS-85).





[jira] [Commented] (NUMBERS-206) Selection API

2024-06-06 Thread Alex Herbert (Jira)


[ 
https://issues.apache.org/jira/browse/NUMBERS-206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852711#comment-17852711
 ] 

Alex Herbert commented on NUMBERS-206:
--

h2. Saturated Indices

Using the same data as previously (n=5, BM distribution)
||Row Labels||500||1000||5000||1||Total||
|ISP|7107387233|7757771325|8899903475|9925539708|33690601741|
|SELECT|6308182425|7045016725|8423035575|8798568667|30574803392|
|IDP|6023154508|6650004933|8043503983|8634537741|29351201166|

sedf
 

> Selection API
> -
>
> Key: NUMBERS-206
> URL: https://issues.apache.org/jira/browse/NUMBERS-206
> Project: Commons Numbers
>  Issue Type: New Feature
>  Components: arrays
>Reporter: Alex Herbert
>Priority: Major
>
> Create a selection API to select the k-th largest element in an array. This 
> places at k the same value that would be at k in a fully sorted array.
> {code:java}
> public final class Selection {
> public static void select(double[] a, int k);
> public static void select(double[] a, int from, int to, int k);
> public static void select(double[] a, int[] k);
> public static void select(double[] a, int from, int to, int[] k);
> // Extend to other primitive data types that are not easily sorted (e.g. 
> long, float, int)
> {code}
> Note: This API will support multiple points (int[] k) for use in quantile 
> estimation of array data by interpolation of neighbouring values (see 
> STATISTICS-85).





[jira] [Commented] (NUMBERS-206) Selection API

2024-06-05 Thread Alex Herbert (Jira)


[ 
https://issues.apache.org/jira/browse/NUMBERS-206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852511#comment-17852511
 ] 

Alex Herbert commented on NUMBERS-206:
--

h1. Multiple Index Selection Results

Multi-index selection methods are based on single-pivot or dual-pivot 
partitioning. Pivots may be cached to allow bracketing searches, or indices may 
be passed to the quicksort to allow selective recursion to regions of interest.
h2. Double Data

The < relation does not impose a total order on all floating-point values. All 
methods respect the ordering imposed by {{Double#compare(double, double)}}: 
-0.0 is treated as less than 0.0, NaN is considered greater than any other 
value, and all NaN values are considered equal. This is implemented with 
several strategies:
 # Compare all values with an equivalent of Double.compare. Special routines 
were written to return a boolean value to avoid the use of Double.compare(x, y) 
> 0 or Double.compare(x, y) < 0. This did increase performance over use of 
Double.compare.
 # Preprocess data: Move NaN values to the end. Partition using the < operator. 
Detect and correct reordered signed zeros if a pivot is zero. Note if a pivot 
is not zero then the order of zeros below / above pivots is not important.
 # Preprocess data: Move NaN values to the end; count and strip signed zeros. 
Partition using the < operator. Postprocess data to restore the first 'count' 
zeros as -0.0.

Note that stripping the sign from zeros allows the pivot value to be used as a 
proxy for elements that match the pivot value. Other strategies instead must 
always use a[i] as the value if a[i] == pivotValue when moving the element to 
preserve the count of signed zeros (since a[i] could be -0.0 or 0.0).
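
As an illustration of strategy 3, the pre-processing could look roughly like 
the following (a sketch with an assumed helper name, not the actual 
implementation; the matching post-processing rewrites the first 'signedZeros' 
zeros as -0.0 once they are contiguous):
{code:java}
/**
 * Sketch only: moves NaNs to the end of the array and turns every -0.0 into +0.0.
 * Returns {end, signedZeros} where end is the number of non-NaN values and
 * signedZeros is how many -0.0 were stripped. With NaN and -0.0 out of the way,
 * partitioning can use the plain < operator.
 */
static int[] preProcess(double[] a) {
    int end = a.length;
    int signedZeros = 0;
    for (int i = 0; i < end; i++) {
        final double v = a[i];
        if (v != v) {
            // NaN: swap it to the end and re-examine the element swapped in.
            a[i] = a[--end];
            a[end] = v;
            i--;
        } else if (Double.doubleToRawLongBits(v) == Double.doubleToRawLongBits(-0.0)) {
            a[i] = 0.0;
            signedZeros++;
        }
    }
    return new int[] {end, signedZeros};
}
{code}
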
h2. Methods 
||Method||Generation||Double Strategy||Index Strategy||Index 
Structure||Partition||Description||
|SortJDK| |3| | | |Sort the data using JDK's Arrays.sort.|
|SPH_QS16|0|1|Sparse pivot cache|Heap|binary|This is the method used in the 
Commons Math Percentile implementation with modifications for efficient double 
comparisons.|
|SP_QS16|1|1|Pivot cache|BitSet|binary|As per the Commons Math Percentile but 
switches the heap to a BitSet to store pivots.|
|SPN_QS16|1|2|Pivot cache|BitSet|binary|Pre-sorts NaN and allows use of the < 
and > comparators.|
|SBM_QS16|1|2|Pivot cache|BitSet|ternary|Pre-sorts NaN and allows use of the <, 
== and > comparators.|
|2SBM_SEQUENTIAL|2|2|Pivot ranges|IndexSet|ternary|Sorts indices and processes 
them sequentially in intervals [ka1, kb1], [ka2, kb2], etc. Joins indices with 
a minimum separation of 8 and can sort saturated ranges. Each interval is 
processed separately.|
|2SBM_PIVOT_CACHE|2|2|Pivot cache|IndexSet|ternary|Cache indices within [minK, 
maxK]. Partitions minK, then maxK, then internal indices.|
|2SBM_INDEX_SET|2|2|Pivot cache|IndexSet|ternary|Cache indices in [left, 
right]. Processes indices in input order.|
|ISP_ORDERED_KEYS|3|3|Index splitting|Sorted array|ternary|Partitioning uses an 
array of indices and markers into the array for the current interval [ka, kb]. 
This allows efficient bracketing for splitting the range. Cannot be easily 
abstracted to an interface.|
|ISP_SCANNING_KEY_SEARCHABLE_INTERVAL|3|3|Index splitting|Sorted 
array|ternary|Indices wrapped in a data structure which uses a linear scan of 
the entire range. Fingers are used to jump to the start point.|
|ISP_SEARCH_KEY_SEARCHABLE_INTERVAL|3|3|Index splitting|Sorted 
array|ternary|Indices wrapped in a data structure which uses a binary search of 
the entire range.|
|ISP_INDEX_SET|3|3|Index splitting|IndexSet|ternary|Indices stored in a data 
structure which uses linear scan of the range from the starting index.|
|ISP_COMPRESSED_INDEX_SET|3|3|Sparse index splitting|IndexSet|ternary|Indices 
stored in a data structure which uses linear scan of the range from the 
starting index. Index compression is 2x, i.e. every pair of indices is stored. 
Look-up of indices returns k or k+1 depending on the left/right search context. 
Compression means more regions must be processed as sorted to ensure 
uncompressed indices are correct. Compression reduces memory and increases 
search speed.|
|ISP_KEY_UPDATING_INTERVAL|3|3|Index splitting|Sorted array|ternary|Indices 
wrapped in a data structure with start and end markers. Search (either 
linear/binary) only within the markers.|
|ISP_INDEX_SET_UPDATING_INTERVAL|3|3|Index splitting|IndexSet|ternary|Indices 
stored in a data structure with start and end markers. Search uses linear scan 
of the range from the starting index.|
|ISP_INDEX_ITERATOR|3|3|Index iterator|Sorted array|ternary|Indices wrapped in 
a data structure which iterates over start and end markers of the current range 
[ka1, kb1], [ka2, kb2], etc. Joins indices within a separation of 2. The 
iterator is used during intr

[jira] [Commented] (NUMBERS-206) Selection API

2024-06-05 Thread Alex Herbert (Jira)


[ 
https://issues.apache.org/jira/browse/NUMBERS-206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852446#comment-17852446
 ] 

Alex Herbert commented on NUMBERS-206:
--

h1. Multiple Index Selection Background

Partitioning the same data multiple times can be made more efficient if 
information acquired during partitioning the first index is reused for the next 
index.

Any selection method that identifies k1 must only search [left, k1) or (k1, 
right] for k2, depending on which side of k1 the index k2 lies:
{noformat}
Target:          k1            k2
Search1: l----------------------------r
Search2:         k1-------------------r     {noformat}
If all indices k are sorted then the range to search for the next k is always 
smaller.

During quickselect a value is chosen to divide the data; this is named the 
pivot value. The final index of the pivot after partitioning may not be the 
desired target k, but it is a correctly sorted position in the data. Algorithms 
may 
choose to store pivots to allow bracketing subsequent searches. In the 
following example the partition of k1 stores pivots p. These can be used to 
bracket k2, k3. An alternative scheme where no pivots are stored (other than 
the preceding k) is shown for comparison:
{noformat}
Partition:
0--k1--k2--k3-N

Iteration 1:
0--k1--pp-p---N

Iteration 2:
   l---k2---r
or:l---k2-N

Iteration 3:
l--k3-r
or:l---k3-N
{noformat}
h2. Pivot Storage

A data structure may store all possible pivots (complete), or some pivots 
(sparse). An example of a complete pivot store is a BitSet, which requires 1 
bit of memory per position for data of length n. When not many pivots are 
expected this can be a 
large overhead and searching the structure is relatively slow as much of the 
BitSet is empty.

An example of a sparse pivot store is a heap structure. This can be used to 
store pivots generated by a quickselect algorithm to a specific depth. The heap 
structure used in the Commons Math Percentile implementation uses a depth of 
10. A larger heap requires more memory but will allow storing more pivots; at 
some point it is more efficient to use a BitSet.

Another example of a sparse pivot store is one that has knowledge of the 
target indices k. It can selectively store pivots that bracket indices, and 
choose to ignore pivots whose closest neighbours are other pivots (i.e. a 
region not of interest). The complexity of the data structure increases with 
the number of k. For a single additional index (i.e. 2 in total) it is very 
efficient to store the closest bounding pivots to k2. 

Use of storage for pivots requires that pivots are stored during a quickselect 
algorithm. The pivots can then be searched for a suitable bracket for the next 
k. This requires two searches for left and right bracket bounds containing each 
additional target index. Indices k can be processed in any order. If the 
indices k are sorted then the pivot store only has to store pivots that bracket 
remaining indices.
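
For example, with a complete pivot store backed by a BitSet, the bracketing 
search for the next target k is just two bit scans (illustrative sketch, not 
the actual data structure):
{code:java}
/**
 * Sketch only: returns {lo, hi}, the nearest stored pivots at or below / above k,
 * falling back to the edges of [0, n) when none exist. Only that sub-range needs
 * further partitioning to place k.
 */
static int[] bracket(java.util.BitSet pivots, int n, int k) {
    int lo = pivots.previousSetBit(k);
    if (lo < 0) {
        lo = 0;                // no pivot at or below k: search from the left edge
    }
    int hi = pivots.nextSetBit(k);
    if (hi < 0) {
        hi = n - 1;            // no pivot at or above k: search to the right edge
    }
    return new int[] {lo, hi};
}
{code}
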
h2. Index Storage

An alternative approach is not to store pivots, but to store the indices k. 
Each step of the quickselect algorithm will divide data around a pivot. The 
store of k can be used to determine which sides of the pivot are of interest 
and the quickselect can process that side:
{noformat}
  k1 k2k3
-p
p p---
 ---p --p- p--
 -p- --p--  ---p--
   p -p -p-
p{noformat}
A structure can be used that can scan a region and return true/false if the 
region is of interest. Or a structure that contains an interval [ka, kb] with 
multiple keys that can be divided at point p into two intervals [ka, kb1] and 
[ka1, kb]. Such a structure can only be divided m-1 times for m input indices. 
It thus has the same limit of two searches for the bounding indices (kb1 < p < 
ka1) of a point as the search within a pivot cache for pivots that bracket an 
index: p1 < k < p2.
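
For keys held in a sorted array, that division can be a single binary search 
(sketch only; the helper name is made up):
{code:java}
/**
 * Sketch only: splits a sorted array of target indices at pivot position p.
 * Returns the first position whose key is greater than p; a key equal to p is
 * already in its sorted place and needs no further work. keys[0, split) belong
 * to the left interval and keys[split, m) to the right interval.
 */
static int splitKeys(int[] keys, int p) {
    final int i = java.util.Arrays.binarySearch(keys, p);
    return i >= 0 ? i + 1 : -(i + 1);
}
{code}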

An alternative to recursive splitting of the entire interval [k1, kn] for n 
indices is to provide an iterator over ranges of indices that are to be 
completely sorted [ka1, kb1], [ka2, kb2], etc. The iterator supplies the 
indices in ascending order. Any indices that are close can be joined to form a 
range that should be sorted. If indices are very dense this can effectively 
switch the algorithm from a selection to a sort when a target region is 
entirely filled by the current range. 
{noformat}
  lr
   lr
   lr
 lr
   

[jira] [Comment Edited] (DAEMON-460) High CPU usage in prunsrv.exe since Daemon 1.3.3

2024-06-04 Thread Mark Linley (Jira)


[ 
https://issues.apache.org/jira/browse/DAEMON-460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17850467#comment-17850467
 ] 

Mark Linley edited comment on DAEMON-460 at 6/4/24 2:29 PM:


Hi

Our Java application is managed as a Windows service by Apache Commons Daemon. 
Upgrading to v1.3.4 showed what others have been seeing. One of the CPU cores 
constantly sits at close to 100% CPU utilization.

I was able to successfully configure Visual Studio 2022 to do remote debugging 
of prunsrv.exe v1.3.4.

I analyzed the CPU profile in Visual Studio once the debugger was connected to 
the remote prunsrv.exe process. I could see one of the threads of prunsrv.exe 
consuming almost 100% of the CPU. I looked at the top function, based on CPU 
usage, and it was prunsrv.c::apxJavaWait. With a breakpoint activated, my 
attention was drawn to this code in prunsrv.c::serviceMain:

!image-2024-05-29-15-56-35-585.png!

I could see in my debug session that the wait of 2 seconds is definitely not 
happening, resulting in the loop iterating as fast as the CPU core will allow 
it to, which likely explains the 100% CPU core utilization issue. Pull request 
64, mentioned above, did add this loop so it could be related to the problem we 
are seeing.

Stepping into apxHandleWait, you arrive at prunsrv.c::apxJavaWait, which 
receives the wait time value of 2000 milliseconds; however, that value is 
effectively never used because the code keeps returning here:

!image-2024-05-29-15-57-37-665.png!

Here is the CPU usage breakdown by C function from the Visual Studio 2022 
diagnostic tool. The VM has 4 cores allocated, so you can see that one core is 
maxed out:

!image-2024-05-31-10-00-10-916.png!

 

Looking at the first code snippet above, it does seem that you need to have the 
stop timeout configured, in my case "--StopTimeout=30 ", to reproduce the issue.

If someone more experienced in C or one of the maintainers could comment, I'd 
appreciate it. I'm a Java developer :)

Thanks!

Mark


was (Author: plasm0r):
Hi

Our Java application is managed as a Windows service by Apache Commons Daemon. 
Upgrading to v1.3.4 showed what others have been seeing. One of the CPU cores 
constantly sits at close to 100% CPU utilization.

I was able to successfully configure Visual Studio 2022 to do remote debugging 
of prunsrv.exe v1.3.4.

I analyzed the CPU profile in Visual Studio once the debugger was connected to 
the remote prunsrv.exe process. I could see one of the threads of prunsrv.exe 
consuming almost 100% of the CPU. I looked at the top function, based on CPU 
usage, and it was prunsrv.c::apxJavaWait. With a breakpoint activated, my 
attention was drawn to this code in prunsrv.c::serviceMain :

!image-2024-05-29-15-56-35-585.png!

I could see in my debug session that the wait of 2 seconds is definitely not 
happening, resulting in the loop iterating as fast as the CPU core will allow 
it to, which likely explains the 100% CPU core utilization issue. Pull request 
64, mentioned above, did add this loop so it could be related to the problem we 
are seeing.

Stepping into apxHandleWait you arrive at the method prunsrv.c::apxJavaWait 
that receives the timeout value of 2000 milliseconds but the timeout value is 
effectively never used because the code keeps returning here:

!image-2024-05-29-15-57-37-665.png!

Here is the CPU usage breakdown by C function from the Visual Studio 2022 
diagnostic tool. The VM has 4 cores allocated, so you can see that one core is 
maxed out:

!image-2024-05-31-10-00-10-916.png!

 

Looking at the first code snippet above, it does seem that you need to have the 
stop timeout configured, in my case "--StopTimeout=30 ", to reproduce the issue.

If someone more experienced in C or one of the maintainers could comment, I'd 
appreciate it. I'm a Java developer :)

Thanks!

Mark

> High CPU usage in prunsrv.exe since Daemon 1.3.3
> 
>
> Key: DAEMON-460
> URL: https://issues.apache.org/jira/browse/DAEMON-460
> Project: Commons Daemon
>  Issue Type: Bug
>  Components: prunsrv
>Affects Versions: 1.3.3
>Reporter: Japie vd Linde
>Priority: Major
> Attachments: EspRun-Service-Log.2023-06-05.log, 
> image-2023-05-31-09-31-21-485.png, image-2023-06-05-13-38-38-435.png, 
> image-2024-05-29-15-56-35-585.png, image-2024-05-29-15-57-37-665.png, 
> image-2024-05-31-10-00-10-916.png
>
>
> When using the --StopTimeout=30 parameter on service using prunsrv the CPU 
> usage is reported as very high on Windows. Rolling back to older prunsrv 
> seems to resolve the problem. 
> !image-2023-05-31-09-31-21-485.png!
> What could be the possible causes for this problem?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (COLLECTIONS-856) Document the interaction between PeekingIterator and FilteredIterator

2024-06-03 Thread Benjamin Confino (Jira)
Benjamin Confino created COLLECTIONS-856:


 Summary: Document the interaction between PeekingIterator and 
FilteredIterator
 Key: COLLECTIONS-856
 URL: https://issues.apache.org/jira/browse/COLLECTIONS-856
 Project: Commons Collections
  Issue Type: Improvement
  Components: Iterator
Affects Versions: 4.5.0-M1
Reporter: Benjamin Confino


If you have a FilteredIterator, call peek and get an element x, and something 
then modifies the state of that x, then when you call peek again the 
FilteredIterator will return x even if x no longer passes the predicate.

I think this behaviour is correct. I would not expect the state of an 
iterator to change unless I called a method like next(), and I would not expect 
an exception like ConcurrentModificationException when the collection has not 
changed.

However, it is an obscure edge case, so it might be worth documenting.
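
For illustration, a minimal sketch of the interaction (assuming the classes in question are commons-collections4's FilterIterator and PeekingIterator; the example data and predicate are invented):
{code:java}
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

import org.apache.commons.collections4.iterators.FilterIterator;
import org.apache.commons.collections4.iterators.PeekingIterator;

public class PeekAfterMutation {
    public static void main(String[] args) {
        List<AtomicInteger> data = Arrays.asList(new AtomicInteger(1), new AtomicInteger(2));
        // Keep only even values.
        FilterIterator<AtomicInteger> filtered =
                new FilterIterator<>(data.iterator(), x -> x.get() % 2 == 0);
        PeekingIterator<AtomicInteger> it = PeekingIterator.peekingIterator(filtered);

        AtomicInteger x = it.peek();   // x holds 2 and passed the predicate when peeked
        x.set(3);                      // mutate x so it no longer passes the predicate
        System.out.println(it.peek()); // prints 3: the cached element is returned unfiltered
    }
}
{code}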



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (CONFIGURATION-847) Property with an empty string value are not processed in the current main (2.11.0-snapshot)

2024-06-03 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/CONFIGURATION-847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17851778#comment-17851778
 ] 

Gary D. Gregory commented on CONFIGURATION-847:
---

PR merged to git master. Build in 
[https://repository.apache.org/content/repositories/snapshots/]

 

> Property with an empty string value are not processed in the current main 
> (2.11.0-snapshot)
> ---
>
> Key: CONFIGURATION-847
> URL: https://issues.apache.org/jira/browse/CONFIGURATION-847
> Project: Commons Configuration
>  Issue Type: Bug
>Affects Versions: Nightly Builds
>Reporter: Andrea Bollini
>Assignee: Gary D. Gregory
>Priority: Critical
> Fix For: 2.11.0
>
>
> I hit a side effect of the 
> https://issues.apache.org/jira/browse/CONFIGURATION-846 recently solved.
> {{Assuming that we have a property file as configuration source like that}}
> {{test.empty.property =}}
>  
> and that we will try to inject such property in a spring bean
> {{@Value("${test.empty.property"})}}
> {{private String emptyValue;}}
> {{ we will get an exception like:  BeanDefinitionStore Invalid bean 
> definition ... Could not resolve placeholder}}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (CONFIGURATION-847) Property with an empty string value are not processed in the current main (2.11.0-snapshot)

2024-06-03 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/CONFIGURATION-847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory resolved CONFIGURATION-847.
---
Resolution: Fixed

> Property with an empty string value are not processed in the current main 
> (2.11.0-snapshot)
> ---
>
> Key: CONFIGURATION-847
> URL: https://issues.apache.org/jira/browse/CONFIGURATION-847
> Project: Commons Configuration
>  Issue Type: Bug
>Affects Versions: Nightly Builds
>Reporter: Andrea Bollini
>Priority: Critical
> Fix For: 2.11.0
>
>
> I hit a side effect of the 
> https://issues.apache.org/jira/browse/CONFIGURATION-846 recently solved.
> {{Assuming that we have a property file as configuration source like that}}
> {{test.empty.property =}}
>  
> and that we will try to inject such property in a spring bean
> {{@Value("${test.empty.property"})}}
> {{private String emptyValue;}}
> {{ we will get an exception like:  BeanDefinitionStore Invalid bean 
> definition ... Could not resolve placeholder}}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (CONFIGURATION-847) Property with an empty string value are not processed in the current main (2.11.0-snapshot)

2024-06-03 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/CONFIGURATION-847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory updated CONFIGURATION-847:
--
Assignee: Gary D. Gregory

> Property with an empty string value are not processed in the current main 
> (2.11.0-snapshot)
> ---
>
> Key: CONFIGURATION-847
> URL: https://issues.apache.org/jira/browse/CONFIGURATION-847
> Project: Commons Configuration
>  Issue Type: Bug
>Affects Versions: Nightly Builds
>Reporter: Andrea Bollini
>Assignee: Gary D. Gregory
>Priority: Critical
> Fix For: 2.11.0
>
>
> I hit a side effect of the 
> https://issues.apache.org/jira/browse/CONFIGURATION-846 recently solved.
> {{Assuming that we have a property file as configuration source like that}}
> {{test.empty.property =}}
>  
> and that we will try to inject such property in a spring bean
> {{@Value("${test.empty.property"})}}
> {{private String emptyValue;}}
> {{ we will get an exception like:  BeanDefinitionStore Invalid bean 
> definition ... Could not resolve placeholder}}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NUMBERS-206) Selection API

2024-06-03 Thread Alex Herbert (Jira)


[ 
https://issues.apache.org/jira/browse/NUMBERS-206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17851712#comment-17851712
 ] 

Alex Herbert commented on NUMBERS-206:
--

h2. Single k selection

Note that development started with a method based on the Commons Math 
Percentile implementation of quickselect. This was changed to allow multiple 
indices to be passed in a single method call; the first generation functions 
use a variety of partition algorithms.

This was then updated in a second development generation to add more options to 
configure the algorithm for multiple indices. Only one implementation was 
provided, using the SBM partition algorithm.

The introselect-based algorithms allow configuration of the single/dual-pivot 
partition algorithm and of the introspection. Typically the introspection does 
not detect the need to switch to the alternative (stopper) algorithm; here the 
stopper is the QuickselectAdaptive algorithm.

Various linear select algorithms are provided, broadly grouped into a, b, or c 
based on their implementation complexity.

Note: The algorithms are configured using parameters harvested from the name. 
The default name is the base configuration for the method; this can be changed 
by appending parameters to the name, as is done for some introselect 
implementations that can use FR sub-sampling.
||Name||Generation||Description||
|SortJDK| |Sort the data using JDK's Arrays.sort.|
|SPH|0|Single-pivot using a heap to cache pivots. This is the method used in 
the Commons Math Percentile implementation. The cache is null on a single index 
and ignored. Uses a single index input.|
|SP|1|Single-pivot using binary partitioning.|
|BM|1|Single-pivot using Bentley-McIlroy ternary partitioning. Main comparators 
are <=, >=.|
|SBM|1|Single-pivot using Sedgewick's Bentley-McIlroy ternary partitioning. 
Main comparators are <, >.|
|DP|1|Dual-pivot using Yaroslavskiy's original 1/div for pivot selection. The 
divisor (div) is initialised at 3 and is increased when the central partition 
is too large.|
|5DP|1|Dual-pivot using 2nd and 4th positions from 5 sorted points as the 
pivot. The 5 points are sampled evenly from the range.|
|2SBM|2|2nd generation single-pivot using Sedgewick's Bentley-McIlroy ternary 
partitioning. This is the most consistently performing single pivot partition 
method.|
|ISP|3|Single-pivot introselect. Pivot chosen using the dynamic strategy of BM. 
Above 40 the ninther strategy of a median of 3 medians-of-3 from 9 points; 
otherwise use a median-of-3.|
|IDP|3|Dual-pivot introselect. Pivot using 2 and 4 of 5 sorted points.|
|LSP|a|Median-of-medians using medians of 5. Sample is placed on the left side 
of the data.|
|Linear BFPRT IM|b|Median-of-medians using medians of 5. Sample is placed in 
the centre of the data.|
|Linear BFPRTA|b|Median-of-medians using medians of 5. Sample is placed in the 
centre of the data. The pivot is chosen using an adaptive k.|
|Linear RS IM|b|Repeated-step. Sample is placed in the centre of the data.|
|Linear RSA|b|Repeated-step. Sample is placed in the centre of the data. The 
pivot is chosen using an adaptive k.|
|QA_CF16|c|QuickselectAdaptive original method. Uses adaptive k for all steps.|
|QA|c|QuickselectAdaptive uses the original far step method but updates 
adaption of k for the far step to reduce the probability of placing the target 
in the larger partition.|
|QA_CF8|c|QuickselectAdaptive using the far step 2 method. Uses adaptive k for 
all steps.|
|FR| |Single-pivot using Floyd-Rivest sub-sampling when the range is >600.|
|KFR| |Kiwiel's dual-pivot using Floyd-Rivest sub-sampling when the range is 
>600.|
|ISP_SU600_PAIRED_KEYS|3|Single-pivot introselect. Pivot chosen using the 
dynamic strategy of BM. Uses FR sub-sampling when the range >600.|
|ISP_SU1200_PAIRED_KEYS|3|Single-pivot introselect. Pivot chosen using the 
dynamic strategy of BM. Uses FR sub-sampling when the range >1200.|
|ISP_SU1200_PAIRED_KEYS_CF2|3|Single-pivot introselect. Pivot chosen using the 
dynamic strategy of BM. Uses FR sub-sampling with a random sample when the 
range >1200.|
|SELECT| |Commons numbers arrays selection implementation. Uses FR sub-sampling 
and falls back to QuickselectAdaptive if FR fails to hit the expected margins.|

Note: The PAIRED_KEYS name is required to activate a method that uses FR 
sub-sampling. The default method uses a collection of keys with only 1 key. 
That method is for selecting multiple keys together, so it is not optimised 
for FR sub-sampling, which must target a single key.
h2. Results

Warning: Results may vary by machine. The current code has settings that 
perform well over several machines. One key development goal was that the speed 
should never be obviously slow, rather than that it be optimal on every machine.

Tested on macOS 14.5 (M2 Max CPU, 96 GB RAM) with Eclipse Adoptium JDK 21.0.3.

[jira] [Commented] (NET-731) FTPSClient no longer supports fileTransferMode (eg DEFLATE)

2024-06-03 Thread PJ Fanning (Jira)


[ 
https://issues.apache.org/jira/browse/NET-731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17851675#comment-17851675
 ] 

PJ Fanning commented on NET-731:


[https://github.com/apache/commons-net/pull/90] was first released in 
commons-net 3.10.0.

The new openDataSecureConnection method is essentially a copy/paste of the FTPClient
{noformat}
_openDataConnection_
{noformat}
method, but in the copy/paste the call to `wrapOnDeflate` was dropped.

> FTPSClient no longer supports fileTransferMode (eg DEFLATE)
> ---
>
> Key: NET-731
> URL: https://issues.apache.org/jira/browse/NET-731
> Project: Commons Net
>  Issue Type: Task
>  Components: FTP
>Reporter: PJ Fanning
>Priority: Major
>
> The new openDataSecureConnection method in FTPSClient does not support 
> fileTransferMode (eg DEFLATE).
> [https://github.com/apache/commons-net/pull/90/files#diff-b4292a5bd3e39f502d24bce1eb934384a951a120080c870cdc68c0585a78c6e9[]
>  
> |https://github.com/apache/commons-net/pull/90/files#diff-b4292a5bd3e39f502d24bce1eb934384a951a120080c870cdc68c0585a78c6e9]
>  
> The FTPSClient code used to delegate to FTPClient 
> {noformat}
> _openDataConnection_
> {noformat}
> method.
> [https://github.com/apache/commons-net/blob/b5038eff135dff54e2ee2d09b94ec7d8937cb09b/src/main/java/org/apache/commons/net/ftp/FTPClient.java#L696]
> This method supports `wrapOnDeflate` while openDataSecureConnection does not.
> I'm not sure if FTPS supports DEFLATE transfer mode but while implementing an 
> Apache Pekko workaround for the NET-718, I spotted the diff.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NET-731) FTPSClient no longer supports fileTransferMode (eg DEFLATE)

2024-06-03 Thread PJ Fanning (Jira)


 [ 
https://issues.apache.org/jira/browse/NET-731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

PJ Fanning updated NET-731:
---
Description: 
The new openDataSecureConnection method in FTPSClient does not support 
fileTransferMode (eg DEFLATE).

[https://github.com/apache/commons-net/pull/90/files#diff-b4292a5bd3e39f502d24bce1eb934384a951a120080c870cdc68c0585a78c6e9]

 

The FTPSClient code used to delegate to FTPClient 
{noformat}
_openDataConnection_
{noformat}
method.

[https://github.com/apache/commons-net/blob/b5038eff135dff54e2ee2d09b94ec7d8937cb09b/src/main/java/org/apache/commons/net/ftp/FTPClient.java#L696]

This method supports `wrapOnDeflate` while openDataSecureConnection does not.

I'm not sure if FTPS supports DEFLATE transfer mode, but while implementing an 
Apache Pekko workaround for NET-718, I spotted the diff.

 

  was:
The new openDataSecureConnection method in FTPSClient does not support 
fileTransferMode (eg DEFLATE).

[https://github.com/apache/commons-net/pull/90/files#diff-b4292a5bd3e39f502d24bce1eb934384a951a120080c870cdc68c0585a78c6e9[]
 
|https://github.com/apache/commons-net/pull/90/files#diff-b4292a5bd3e39f502d24bce1eb934384a951a120080c870cdc68c0585a78c6e9]

 

The FTPSClient code used to delegate to FTPClient `_openDataConnection_`.

[https://github.com/apache/commons-net/blob/b5038eff135dff54e2ee2d09b94ec7d8937cb09b/src/main/java/org/apache/commons/net/ftp/FTPClient.java#L696]

This method supports `wrapOnDeflate` while openDataSecureConnection does not.

I'm not sure if FTPS supports DEFLATE transfer mode but while implementing an 
Apache Pekko workaround for the NET-718, I spotted the diff.

 


> FTPSClient no longer supports fileTransferMode (eg DEFLATE)
> ---
>
> Key: NET-731
> URL: https://issues.apache.org/jira/browse/NET-731
> Project: Commons Net
>  Issue Type: Task
>  Components: FTP
>Reporter: PJ Fanning
>Priority: Major
>
> The new openDataSecureConnection method in FTPSClient does not support 
> fileTransferMode (eg DEFLATE).
> [https://github.com/apache/commons-net/pull/90/files#diff-b4292a5bd3e39f502d24bce1eb934384a951a120080c870cdc68c0585a78c6e9[]
>  
> |https://github.com/apache/commons-net/pull/90/files#diff-b4292a5bd3e39f502d24bce1eb934384a951a120080c870cdc68c0585a78c6e9]
>  
> The FTPSClient code used to delegate to FTPClient 
> {noformat}
> _openDataConnection_
> {noformat}
> method.
> [https://github.com/apache/commons-net/blob/b5038eff135dff54e2ee2d09b94ec7d8937cb09b/src/main/java/org/apache/commons/net/ftp/FTPClient.java#L696]
> This method supports `wrapOnDeflate` while openDataSecureConnection does not.
> I'm not sure if FTPS supports DEFLATE transfer mode but while implementing an 
> Apache Pekko workaround for the NET-718, I spotted the diff.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NET-731) FTPSClient no longer supports fileTransferMode (eg DEFLATE)

2024-06-03 Thread PJ Fanning (Jira)


 [ 
https://issues.apache.org/jira/browse/NET-731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

PJ Fanning updated NET-731:
---
Description: 
The new openDataSecureConnection method in FTPSClient does not support 
fileTransferMode (eg DEFLATE).

[https://github.com/apache/commons-net/pull/90/files#diff-b4292a5bd3e39f502d24bce1eb934384a951a120080c870cdc68c0585a78c6e9[]
 
|https://github.com/apache/commons-net/pull/90/files#diff-b4292a5bd3e39f502d24bce1eb934384a951a120080c870cdc68c0585a78c6e9]

 

The FTPSClient code used to delegate to FTPClient `_openDataConnection_`.

[https://github.com/apache/commons-net/blob/b5038eff135dff54e2ee2d09b94ec7d8937cb09b/src/main/java/org/apache/commons/net/ftp/FTPClient.java#L696]

This method supports `wrapOnDeflate` while openDataSecureConnection does not.

I'm not sure if FTPS supports DEFLATE transfer mode but while implementing an 
Apache Pekko workaround for the NET-718, I spotted the diff.

 

  was:
The new openDataSecureConnection method in FTPSClient does not support 
fileTransferMode (eg DEFLATE).

https://github.com/apache/commons-net/pull/90/files#diff-b4292a5bd3e39f502d24bce1eb934384a951a120080c870cdc68c0585a78c6e9[
 
|https://github.com/apache/commons-net/pull/90/files#diff-b4292a5bd3e39f502d24bce1eb934384a951a120080c870cdc68c0585a78c6e9]

 

The FTPSClient code used to delegate to FTPClient _openDataConnection_

[https://github.com/apache/commons-net/blob/b5038eff135dff54e2ee2d09b94ec7d8937cb09b/src/main/java/org/apache/commons/net/ftp/FTPClient.java#L696]

This method supports `wrapOnDeflate` while openDataSecureConnection does not.

I'm not sure if FTPS supports DEFLATE transfer mode but while implementing an 
Apache Pekko workaround for the NET-718, I spotted the diff.

 


> FTPSClient no longer supports fileTransferMode (eg DEFLATE)
> ---
>
> Key: NET-731
> URL: https://issues.apache.org/jira/browse/NET-731
> Project: Commons Net
>  Issue Type: Task
>  Components: FTP
>Reporter: PJ Fanning
>Priority: Major
>
> The new openDataSecureConnection method in FTPSClient does not support 
> fileTransferMode (eg DEFLATE).
> [https://github.com/apache/commons-net/pull/90/files#diff-b4292a5bd3e39f502d24bce1eb934384a951a120080c870cdc68c0585a78c6e9[]
>  
> |https://github.com/apache/commons-net/pull/90/files#diff-b4292a5bd3e39f502d24bce1eb934384a951a120080c870cdc68c0585a78c6e9]
>  
> The FTPSClient code used to delegate to FTPClient `_openDataConnection_`.
> [https://github.com/apache/commons-net/blob/b5038eff135dff54e2ee2d09b94ec7d8937cb09b/src/main/java/org/apache/commons/net/ftp/FTPClient.java#L696]
> This method supports `wrapOnDeflate` while openDataSecureConnection does not.
> I'm not sure if FTPS supports DEFLATE transfer mode but while implementing an 
> Apache Pekko workaround for the NET-718, I spotted the diff.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NET-731) FTPSClient no longer supports fileTransferMode (eg DEFLATE)

2024-06-03 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/NET-731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17851668#comment-17851668
 ] 

Gary D. Gregory commented on NET-731:
-

Hello [~fanningpj] 

Thank you for your report.

What version are you using compared to the previous behavior?

 

> FTPSClient no longer supports fileTransferMode (eg DEFLATE)
> ---
>
> Key: NET-731
> URL: https://issues.apache.org/jira/browse/NET-731
> Project: Commons Net
>  Issue Type: Task
>  Components: FTP
>Reporter: PJ Fanning
>Priority: Major
>
> The new openDataSecureConnection method in FTPSClient does not support 
> fileTransferMode (eg DEFLATE).
> https://github.com/apache/commons-net/pull/90/files#diff-b4292a5bd3e39f502d24bce1eb934384a951a120080c870cdc68c0585a78c6e9[
>  
> |https://github.com/apache/commons-net/pull/90/files#diff-b4292a5bd3e39f502d24bce1eb934384a951a120080c870cdc68c0585a78c6e9]
>  
> The FTPSClient code used to delegate to FTPClient _openDataConnection_
> [https://github.com/apache/commons-net/blob/b5038eff135dff54e2ee2d09b94ec7d8937cb09b/src/main/java/org/apache/commons/net/ftp/FTPClient.java#L696]
> This method supports `wrapOnDeflate` while openDataSecureConnection does not.
> I'm not sure if FTPS supports DEFLATE transfer mode but while implementing an 
> Apache Pekko workaround for the NET-718, I spotted the diff.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NUMBERS-206) Selection API

2024-06-03 Thread Alex Herbert (Jira)


[ 
https://issues.apache.org/jira/browse/NUMBERS-206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17851646#comment-17851646
 ] 

Alex Herbert commented on NUMBERS-206:
--

h1. Selection Background

The following introduces the methods that have been benchmarked.
h2. Quickselect

[Quickselect (Wikipedia)|https://en.wikipedia.org/wiki/Quickselect]

[Introselect (Wikipedia)|https://en.wikipedia.org/wiki/Introselect]

Select: arrange the data so that index k holds the value that it would hold at 
that index in a fully sorted array:
{noformat}
data[i < k] <= data[k] <= data[k < i]{noformat}
Selection is closely related to sorting. Here we introduce quicksort and the 
related quickselect.

Quicksort divides an array around a pivot point that corresponds to the correct 
value in the sorted array. The two partitions either side of the pivot are 
recursively processed in the same way. This can be done in-place. At small 
lengths the selection of a pivot value and rearrangement of the data can be 
stopped and the sort switched to an insertion sort.

Quickselect is related to quicksort. It was introduced by Hoare (1961). It 
partitions the data using a pivot value but only recurses into the partition 
that contains the index of interest k. At small lengths the selection can use 
an insertion sort, or another method such as heap select to identify k. Heap 
select builds a min/max heap of size m = k-l+1 or m = r-k+1 at one end of the 
array and scans the rest of the data inserting elements if they are 
smaller/larger than the current top of the heap. When finished the top of the 
heap can be placed in the correct location k.
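
As a rough sketch of the heap select idea (illustration only, using a PriorityQueue rather than the in-place heap described above; the method name is invented):
{code:java}
import java.util.Comparator;
import java.util.PriorityQueue;

// Return the value that index k would hold in the fully sorted array,
// assuming 0 <= k < a.length. A max-heap of size k + 1 holds the k + 1
// smallest values seen so far; its top is the value at sorted index k.
static double heapSelectSmallest(double[] a, int k) {
    final PriorityQueue<Double> heap = new PriorityQueue<>(Comparator.reverseOrder());
    for (int i = 0; i <= k; i++) {
        heap.add(a[i]);
    }
    for (int i = k + 1; i < a.length; i++) {
        if (a[i] < heap.peek()) {   // smaller than the current top: replace it
            heap.poll();
            heap.add(a[i]);
        }
    }
    return heap.peek();
}
{code}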

A single-pivot quickselect should ideally divide data into 2 equal partitions 
using a pivot value p. This is repeated in the partition that contains the 
index to select:
{noformat}
l---kp---r
l-p-k---r
   lk--pr
   l--p-k-r
   lk-r

{noformat}
Order(n + n/2 + n/4 + n/8 + ...) = Order( n )
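
A minimal single-pivot quickselect sketch of the scheme above (my own illustration, not one of the benchmarked implementations; the pivot choice and small-length insertion/heap select cut-offs are omitted):
{code:java}
// Rearrange a so that a[k] holds the value it would hold in a fully sorted
// array. Assumes 0 <= k < a.length. Uses a simple middle-element pivot and a
// Hoare-style binary partition, recursing (iteratively) only into the side
// that contains k.
static void select(double[] a, int k) {
    int l = 0;
    int r = a.length - 1;
    while (l < r) {
        final double pivot = a[(l + r) >>> 1];
        int i = l;
        int j = r;
        while (i <= j) {
            while (a[i] < pivot) { i++; }
            while (a[j] > pivot) { j--; }
            if (i <= j) {
                final double t = a[i]; a[i] = a[j]; a[j] = t;
                i++; j--;
            }
        }
        // Continue only in the partition that contains k; if k lies between
        // j and i the value at k equals the pivot and is already in place.
        if (k <= j) { r = j; } else if (k >= i) { l = i; } else { return; }
    }
}
{code}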

The number of quickselect partitions is expected to be log2( n ). The choice of 
the pivot value can be random, or based on sampling of the range such as using 
the median of a number of values. A sampling approach will be more likely to 
find a value in the middle of the data but will be more expensive to compute.

A typical pivot selection uses a median-of-3 strategy. This is vulnerable to 
median-of-3 killer sequences, which cause the algorithm to partition only 2 
elements per iteration and result in Order( n^2 ) worst-case performance. This 
can be mitigated by changing the pivoting strategy, but that does not address 
the fact that the algorithm does not monitor its own performance during 
partitioning. Such monitoring is known as introspection and was introduced by 
Musser in Introsort/Introselect.

[Musser (1999) Introspective Sorting and Selection Algorithms. Software: 
Practice and Experience 27, 
983-993|https://doi.org/10.1002/(SICI)1097-024X(199708)27:8%3C983::AID-SPE117%3E3.0.CO;2-%23]

The recursion depth of the quickselect can vary with the data and pivot 
selection method. If pivot selection is poor then quickselect recurses too far 
and performance becomes quadratic. It is easy to determine if quicksort/select 
is not converging as expected by checking the recursion depth against the 
expected number, or combined size, of ideal partitions. In this case the 
selection can change to a different method. This idea of monitoring the 
quickselect behaviour during execution is used in introsort/introselect (Musser 
1999). In introsort the quicksort will be changed to a heapsort (Order(n log 
n)) if quicksort fails to converge. For introselect the quickselect is changed 
to a heapselect, or a linearselect algorithm. The switch to a stopper algorithm 
provides a stable worst case performance for data that is unsuitable for 
quickselect.

Partitioning around a pivot can use a binary (2-state) partition with {{<, >}}. 
This suffers performance degradation when there are many repeated elements. It 
can be fixed by using a ternary (3-state) partition with {{<, ==, >}} where all 
values equal to the pivot value are collected in the centre.
h2. Median-of-Medians

[Median-of-medians (Wikipedia)|https://en.wikipedia.org/wiki/Median_of_medians]

Quickselect will on average remove half of the data each iteration. But it may 
not always achieve this and performance will degrade. An alternative algorithm 
is one that chooses a pivot which will ensure a set fraction of the data is 
removed each iteration. This ensures linear runtime. However choosing the pivot 
is expensive and in practice these methods are slower than quickselect (with a 
fast pivot choice) on most data.

The median-of-medians algorithm chooses a pivot by inspection of all the data. 
The data is broken into columns of length m. Each column is partially 

[jira] [Created] (NET-731) FTPSClient no longer supports fileTransferMode (eg DEFLATE)

2024-06-03 Thread PJ Fanning (Jira)
PJ Fanning created NET-731:
--

 Summary: FTPSClient no longer supports fileTransferMode (eg 
DEFLATE)
 Key: NET-731
 URL: https://issues.apache.org/jira/browse/NET-731
 Project: Commons Net
  Issue Type: Task
  Components: FTP
Reporter: PJ Fanning


The new openDataSecureConnection method in FTPSClient does not support 
fileTransferMode (eg DEFLATE).

https://github.com/apache/commons-net/pull/90/files#diff-b4292a5bd3e39f502d24bce1eb934384a951a120080c870cdc68c0585a78c6e9[
 
|https://github.com/apache/commons-net/pull/90/files#diff-b4292a5bd3e39f502d24bce1eb934384a951a120080c870cdc68c0585a78c6e9]

 

The FTPSClient code used to delegate to FTPClient _openDataConnection_

[https://github.com/apache/commons-net/blob/b5038eff135dff54e2ee2d09b94ec7d8937cb09b/src/main/java/org/apache/commons/net/ftp/FTPClient.java#L696]

This method supports `wrapOnDeflate` while openDataSecureConnection does not.

I'm not sure if FTPS supports DEFLATE transfer mode but while implementing an 
Apache Pekko workaround for the NET-718, I spotted the diff.

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NUMBERS-206) Selection API

2024-06-03 Thread Alex Herbert (Jira)


[ 
https://issues.apache.org/jira/browse/NUMBERS-206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17851609#comment-17851609
 ] 

Alex Herbert commented on NUMBERS-206:
--

I have created and tested many implementations for selection. Benchmarks were 
performed using JMH on input double array data of various lengths. Typically 
the data was created using the Bentley and McIlroy (BM) test suite:
{noformat}
Bentley and McIlroy (1993)
Engineering a sort function.
Software: Practice and Experience, Vol. 23(11), 1249–1265.
{noformat}
The data consists of various distributions of length n, using a seed of m. Each 
power of 2 in [1, 2n) is used for m.
||Distribution||Description||
|Sawtooth|Ascending data from 0 to m, that repeats|
|Random|Uniform random data in [0, m]|
|Stagger|Interlaced ascending sequences generated using (i * m + i) % n|
|Plateau|Ascending data from 0 to m, then constant at m|
|Shuffle|Two interlaced ascending sequences. The second sequence is chosen with 
probability 1 / m.|

Each distribution is modified using:
||Modification||Description||
|Copy|Copy the data|
|Reverse|Reverse [0, n)|
|Reverse front|Reverse [0, n/2)|
|Reverse back|Reverse [n/2, n)|
|Sort|Sort the data|
|Dither|Add i % 5 to element i|

As an addition to the BM test suite, the basic distributions are modified per 
iteration using an offset. This effectively rotates the sequence by [0, n).
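
For illustration, a sketch of how two of the distributions in the table above can be generated (my own reading of the descriptions, not the benchmark code; the method names are invented):
{code:java}
// Sawtooth: ascending data that repeats, generated as i % m.
static double[] sawtooth(int n, int m) {
    final double[] a = new double[n];
    for (int i = 0; i < n; i++) {
        a[i] = i % m;
    }
    return a;
}

// Stagger: interlaced ascending sequences, generated as (i * m + i) % n.
static double[] stagger(int n, int m) {
    final double[] a = new double[n];
    for (int i = 0; i < n; i++) {
        a[i] = ((long) i * m + i) % n;   // long arithmetic avoids int overflow
    }
    return a;
}
{code}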

Other distributions were also added to the test suite. These can be manually 
specified. However these additional distributions do not change the relative 
ranking of selection algorithms. The results shown will be total runtimes 
across the BM test suite data unless otherwise stated.

The data generation allows a range to be specified for n. For example lengths 
of {1023, 1023, 1025} will span a power of 2 and trigger generation of 
distributions with m at just below and just above n; this creates in total 966 
input array sequences. Due to the use of m as powers of 2 up to 2n, larger 
lengths generate more data sequences. This prohibits use of the BM data for 
very large lengths unless subsets are used.

> Selection API
> -
>
> Key: NUMBERS-206
> URL: https://issues.apache.org/jira/browse/NUMBERS-206
> Project: Commons Numbers
>  Issue Type: New Feature
>  Components: arrays
>Reporter: Alex Herbert
>Priority: Major
>
> Create a selection API to select the k-th largest element in an array. This 
> places at k the same value that would be at k in a fully sorted array.
> {code:java}
> public final class Selection {
> public static void select(double[] a, int k);
> public static void select(double[] a, int from, int to, int k);
> public static void select(double[] a, int[] k);
> public static void select(double[] a, int from, int to, int[] k);
> // Extend to other primitive data types that are not easily sorted (e.g. 
> long, float, int)
> {code}
> Note: This API will support multiple points (int[] k) for use in quantile 
> estimation of array data by interpolation of neighbouring values (see 
> STATISTICS-85).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NUMBERS-206) Selection API

2024-06-03 Thread Alex Herbert (Jira)
Alex Herbert created NUMBERS-206:


 Summary: Selection API
 Key: NUMBERS-206
 URL: https://issues.apache.org/jira/browse/NUMBERS-206
 Project: Commons Numbers
  Issue Type: New Feature
  Components: arrays
Reporter: Alex Herbert


Create a selection API to select the k-th largest element in an array. This 
places at k the same value that would be at k in a fully sorted array.
{code:java}
public final class Selection {
public static void select(double[] a, int k);
public static void select(double[] a, int from, int to, int k);
public static void select(double[] a, int[] k);
public static void select(double[] a, int from, int to, int[] k);
// Extend to other primitive data types that are not easily sorted (e.g. 
long, float, int)
{code}

Note: This API will support multiple points (int[] k) for use in quantile 
estimation of array data by interpolation of neighbouring values (see 
STATISTICS-85).
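
A hedged usage sketch of the proposed API (behaviour as described above; not taken from an implementation):
{code:java}
double[] a = {5.0, 3.0, 9.0, 1.0, 7.0};
// After the call, index 2 holds the value it would hold in the fully sorted
// array {1.0, 3.0, 5.0, 7.0, 9.0}, i.e. 5.0; the other elements are only
// partitioned around it, not sorted.
Selection.select(a, 2);
{code}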



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (JCS-235) Thread leakage in JCS 3.1 with primary and secondary server configuration

2024-06-03 Thread Thomas Vandahl (Jira)


 [ 
https://issues.apache.org/jira/browse/JCS-235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Vandahl resolved JCS-235.

Fix Version/s: jcs-3.2.1
   Resolution: Cannot Reproduce

> Thread leakage in JCS 3.1 with primary and secondary server configuration
> -
>
> Key: JCS-235
> URL: https://issues.apache.org/jira/browse/JCS-235
> Project: Commons JCS
>  Issue Type: Bug
>  Components: RMI Remote Cache
>Affects Versions: jcs-3.1
>Reporter: Amol D
>Priority: Major
> Fix For: jcs-3.2.1
>
> Attachments: JCS_3.1_HeapDump.log, client-remoet-cache.ccf, 
> primaryserver-remote-cache.ccf, secondaryserver-remote-cache.ccf
>
>
> We are using Apache JCS as a primary cache server in our application. The 
> version we are using is apache-commons-jcs3 3.1. On production environment we 
> have observed that this specific version is having a thread leakage issue 
> where on the production servers, the thread count is increasing rapidly and 
> reaching 2 threads, and the system crashes due to the OutOfMemory error. 
> It is observed while degrading the JCS version from 3.1 to 3.0. The problem 
> is no longer reproducible. With the JCS 3.1 version we have also tried 
> implementing thread pooling, but it did not solve the problem . 
> Steps to reproduce -
> 1) JCS configured to have primary and failover server
> Please refer cache.ccf configurations attached
> Check thread count via command
> ps -o pid,comm,user,thcount -p 
> 2) Restart Primary server
> After certain usage by JCS client check the thread count via below command
> ps -o pid,comm,user,thcount -p 
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (JCS-240) Remote cache appears incompatible with JDK 8u241 and above

2024-06-03 Thread Thomas Vandahl (Jira)


 [ 
https://issues.apache.org/jira/browse/JCS-240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Vandahl resolved JCS-240.

Fix Version/s: jcs-3.2.1
   Resolution: Fixed

Fixed in release 3.2.1

> Remote cache appears incompatible with JDK 8u241 and above
> --
>
> Key: JCS-240
> URL: https://issues.apache.org/jira/browse/JCS-240
> Project: Commons JCS
>  Issue Type: Bug
>  Components: RMI Remote Cache
>Affects Versions: jcs-3.2
>Reporter: Greg Parmiter
>Assignee: Thomas Vandahl
>Priority: Critical
> Fix For: jcs-3.2.1
>
>
> After setting up remote RMI cache, with the central cache service running in 
> the same app server/JVM (JDK version 11) as ~4 clients, there were 
> intermittent put and remove failures, as well as issues similar to JCS-237, 
> JCS-238, and JCS-239.
> While investigating the intermittent put/remove issues, the remote server log 
> showed the following-
> {code:java}
> Error while running event from Queue: PutEvent for key: key1 value: null. 
> Retrying...
> java.rmi.RemoteException: Method is not Remote: interface 
> org.apache.commons.jcs3.engine.behavior.ICacheListener::public abstract void 
> org.apache.commons.jcs3.engine.behavior.ICacheListener.handlePut(org.apache.commons.jcs3.engine.behavior.ICacheElement)
>  throws java.io.IOException
>         at 
> java.rmi.server.RemoteObjectInvocationHandler.invokeRemoteMethod(RemoteObjectInvocationHandler.java:214)
>  ~[?:?]
>         at 
> java.rmi.server.RemoteObjectInvocationHandler.invoke(RemoteObjectInvocationHandler.java:162)
>  ~[?:?]
>         at com.sun.proxy.$Proxy2195.handlePut(Unknown Source) ~[?:?]
>         at 
> org.apache.commons.jcs3.engine.AbstractCacheEventQueue$PutEvent.doRun(AbstractCacheEventQueue.java:277)
>  ~[commons-jcs3-core-3.2.jar:3.2]
>         at 
> org.apache.commons.jcs3.engine.AbstractCacheEventQueue$AbstractCacheEvent.run(AbstractCacheEventQueue.java:216)
>  ~[commons-jcs3-core-3.2.jar:3.2]
>         at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  ~[?:?]
>         at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  ~[?:?]
>         at java.lang.Thread.run(Thread.java:829) ~[?:?]{code}
> This pointed to an incorrect listener being registered from the client(s), 
> but when debugging, I saw that the clients were registering the correct 
> listener type, which implements the Remote interface-
> {code:java}
> listener =
>  RemoteCacheListener:
>  AbstractRemoteCacheListener:
>  RemoteHost = localhost:1101
>  ListenerId = 4 {code}
> I understand there's a class hierarchy in place where the RemoveCacheListener 
> implements the IRemoteCacheListener interface, which is a sub-interface of 
> Remote, but therein lies the issue. When the event queue is being processed 
> at runtime and the ICacheListener is being proxied to the appropriate 
> concrete class, the JVM does not recognize the sub-interface and thus, the 
> Remote interface is not seen. *The appears to occur in JDK 11 JVMs but not 
> JDK 8. In other words, JCS 3.2 remote cache does not appear to work properly 
> in JDK 11 (and higher?) runtimes.*
> I "resolved" this by implementing the Remote interface in the ICacheListener, 
> and this actually resolved JCS-237, JCS-238, and JCS-239. That said, it's 
> likely not the optimal solution here. Another option-
> Create another abstract class, AbstractRemoteCacheEventQueue, that passes the 
> IRemoteCacheListener rather than the base ICacheListener to the applicable 
> listener. This would obviously have to be added into the appropriate place(s) 
> in the hierarchy (i.e., RemoteCacheListener, etc.).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (IO-831) Add getInputStream() for 'https' & 'http' in URIOrigin

2024-06-02 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/IO-831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory resolved IO-831.

Fix Version/s: 2.16.2
   Resolution: Fixed

> Add getInputStream() for 'https' & 'http' in URIOrigin
> --
>
> Key: IO-831
> URL: https://issues.apache.org/jira/browse/IO-831
> Project: Commons IO
>  Issue Type: Bug
>Reporter: Elliotte Rusty Harold
>Priority: Major
> Fix For: 2.16.2
>
>
> I think file URLs might work but http/https URLs, much more common, don't. 
> I'm not yet sure if this can be fixed without changing the API.
> @Test
> public void testReadFromURL() throws URISyntaxException, IOException {
> final URIOrigin origin = new URIOrigin(new 
> URI("https://www.yahoo.com";));
> try (final InputStream in = origin.getInputStream()) {
> assertNotEquals(-1, in.read());
> }
> }
> java.nio.file.FileSystemNotFoundException: Provider "https" not installed
>   at java.nio.file.Paths.get(Paths.java:147)
>   at 
> org.apache.commons.io.build.AbstractOrigin$URIOrigin.getPath(AbstractOrigin.java:402)
>   at 
> org.apache.commons.io.build.AbstractOrigin.getInputStream(AbstractOrigin.java:540)
>   at 
> org.apache.commons.io.build.URIOriginTest.testReadFromURL(URIOriginTest.java:47)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at java.util.ArrayList.forEach(ArrayList.java:1257)
>   at java.util.ArrayList.forEach(ArrayList.java:1257)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IO-831) Add getInputStream() for 'https' & 'http' in URIOrigin

2024-06-02 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/IO-831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory updated IO-831:
---
Summary: Add getInputStream() for 'https' & 'http' in URIOrigin  (was: http 
URI origins don't work)

> Add getInputStream() for 'https' & 'http' in URIOrigin
> --
>
> Key: IO-831
> URL: https://issues.apache.org/jira/browse/IO-831
> Project: Commons IO
>  Issue Type: Bug
>Reporter: Elliotte Rusty Harold
>Priority: Major
>
> I think file URLs might work but http/https URLs, much more common, don't. 
> I'm not yet sure if this can be fixed without changing the API.
> @Test
> public void testReadFromURL() throws URISyntaxException, IOException {
> final URIOrigin origin = new URIOrigin(new 
> URI("https://www.yahoo.com";));
> try (final InputStream in = origin.getInputStream()) {
> assertNotEquals(-1, in.read());
> }
> }
> java.nio.file.FileSystemNotFoundException: Provider "https" not installed
>   at java.nio.file.Paths.get(Paths.java:147)
>   at 
> org.apache.commons.io.build.AbstractOrigin$URIOrigin.getPath(AbstractOrigin.java:402)
>   at 
> org.apache.commons.io.build.AbstractOrigin.getInputStream(AbstractOrigin.java:540)
>   at 
> org.apache.commons.io.build.URIOriginTest.testReadFromURL(URIOriginTest.java:47)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at java.util.ArrayList.forEach(ArrayList.java:1257)
>   at java.util.ArrayList.forEach(ArrayList.java:1257)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (CONFIGURATION-847) Property with an empty string value are not processed in the current main (2.11.0-snapshot)

2024-05-31 Thread Andrea Bollini (Jira)
Andrea Bollini created CONFIGURATION-847:


 Summary: Property with an empty string value are not processed in 
the current main (2.11.0-snapshot)
 Key: CONFIGURATION-847
 URL: https://issues.apache.org/jira/browse/CONFIGURATION-847
 Project: Commons Configuration
  Issue Type: Bug
Affects Versions: Nightly Builds
Reporter: Andrea Bollini
 Fix For: 2.11.0


I hit a side effect of the 
https://issues.apache.org/jira/browse/CONFIGURATION-846 recently solved.

Assuming that we have a property file as configuration source like this:

{{test.empty.property =}}

and that we try to inject such a property into a Spring bean:

{{@Value("${test.empty.property}")}}
{{private String emptyValue;}}

we will get an exception like: BeanDefinitionStoreException: Invalid bean 
definition ... Could not resolve placeholder
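
For reference, a minimal sketch of the behaviour the report relies on (the file name test.properties is assumed; the property key is taken from the report):
{code:java}
import java.io.File;

import org.apache.commons.configuration2.PropertiesConfiguration;
import org.apache.commons.configuration2.builder.fluent.Configurations;
import org.apache.commons.configuration2.ex.ConfigurationException;

public class EmptyPropertyCheck {
    public static void main(String[] args) throws ConfigurationException {
        // test.properties contains the single line: test.empty.property =
        PropertiesConfiguration config = new Configurations().properties(new File("test.properties"));
        System.out.println(config.containsKey("test.empty.property"));           // expected: true
        System.out.println("[" + config.getString("test.empty.property") + "]"); // expected: []
    }
}
{code}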



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (CLI-335) Defining Default Properties documentation has errors.

2024-05-31 Thread Claude Warren (Jira)


[ 
https://issues.apache.org/jira/browse/CLI-335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17851169#comment-17851169
 ] 

Claude Warren commented on CLI-335:
---

I plan on doing so in the next few weeks.  I am tied up next week with 
community over code and then I'll need to spend some time at my $ job to catch 
up there.   If someone else does not catch this before the end of the month I 
should be able to submit a pull by then.

> Defining Default Properties documentation has errors.
> -
>
> Key: CLI-335
> URL: https://issues.apache.org/jira/browse/CLI-335
> Project: Commons CLI
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.8.0
>Reporter: Claude Warren
>Priority: Major
>
>   https://commons.apache.org/proper/commons-cli/properties.html  specifically 
> links to the deprecated OptionBuilder class.  It should reference the 
> Option.Builder (note the dot) class.
> In addition there are methods defined in Option.Builder that are not 
> described in the properties document.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (CLI-335) Defining Default Properties documentation has errors.

2024-05-31 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/CLI-335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17851167#comment-17851167
 ] 

Gary D. Gregory commented on CLI-335:
-

[~claude] 

Thank you for finding this hole in our documentation.

Would you please provide a PR?

> Defining Default Properties documentation has errors.
> -
>
> Key: CLI-335
> URL: https://issues.apache.org/jira/browse/CLI-335
> Project: Commons CLI
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.8.0
>Reporter: Claude Warren
>Priority: Major
>
>   https://commons.apache.org/proper/commons-cli/properties.html  specifically 
> links to the deprecated OptionBuilder class.  It should reference the 
> Option.Builder (note the dot) class.
> In addition there are methods defined in Option.Builder that are not 
> described in the properties document.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NET-709) IMAP Memory considerations with large ‘FETCH’ sizes.

2024-05-31 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/NET-709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory updated NET-709:

Fix Version/s: 3.11.1
   (was: 3.11.0)

> IMAP Memory considerations with large ‘FETCH’ sizes.
> 
>
> Key: NET-709
> URL: https://issues.apache.org/jira/browse/NET-709
> Project: Commons Net
>  Issue Type: Improvement
>  Components: IMAP
>Affects Versions: 3.8.0
>Reporter: Anders
>Priority: Minor
>  Labels: IMAP, buffer, chunking, large, literal, memory, partial
> Fix For: 3.11.1
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> h2. *IMAP Memory considerations with large ‘FETCH’ sizes.*
>  
> The following comments concern classes in the 
> [org.apache.common.net.imap|https://commons.apache.org/proper/commons-net/apidocs/org/apache/commons/net/imap/package-summary.html]
>  package.
>  
> Consider the following imap ‘fetch’ exchange between a client (>) and server 
> (<):
> {{> A654 FETCH 1:2 (BODY[TEXT])}}
> {{{}< * 1 FETCH (BODY[TEXT] {*}{{*}{*}8000{*}{*}}{*}\r\n{}}}{{{}…{}}}
> {{< * 2 FETCH …}}
> {{< A654 OK FETCH completed}}
>  
> The first untagged response (* 1 FETCH …) contains a literal \{8000} or 
> 80MB.
>  
> After reviewing the 
> [source|https://gitbox.apache.org/repos/asf?p=commons-net.git;a=blob;f=src/main/java/org/apache/commons/net/imap/IMAP.java;h=d97f1073d8b97545d0a063c6832fe55c116166e2;hb=HEAD#l298],
>  it is my understanding, the entire 80MB sequence of data will be read into 
> Java memory even when using  
> ‘[IMAPChunkListener|https://commons.apache.org/proper/commons-net/apidocs/org/apache/commons/net/imap/IMAP.IMAPChunkListener.html]’.
>  According the the documentation: 
>  
> {quote}Implement this interface and register it via 
> [IMAP.setChunkListener(IMAPChunkListener)|https://commons.apache.org/proper/commons-net/apidocs/org/apache/commons/net/imap/IMAP.html#setChunkListener-org.apache.commons.net.imap.IMAP.IMAPChunkListener-]
>  in order to get access to multi-line partial command responses. Useful when 
> processing large FETCH responses.
> {quote}
>  
> It is apparent the partial fetch response is read in full (80MB) before 
> invoking the ‘IMAPChunkListener’ and then discarding the read lines (freeing 
> up memory).
>  
> Back to the example:
> > A654 FETCH 1:2 (BODY[TEXT])
> < * 1 FETCH (BODY[TEXT] \{8000}\r\n
> *{color:#ff}…. <— read in full into memory then discarded after calling 
> IMAPChunkListener{color}*
> < * 2 FETCH (BODY[TEXT] \{250}\r\n
> {color:#ff}*…. <— read in full into memory then discarded after calling 
> IMAPChunkListener*{color}
> < A654 OK FETCH completed
>  
> Above, you can see the chunk listener is good for each individual partial 
> fetch response but does not prevent a large partial from being loaded into 
> memory.
>  
> Let’s review the 
> [code|https://gitbox.apache.org/repos/asf?p=commons-net.git;a=blob;f=src/main/java/org/apache/commons/net/imap/IMAP.java;h=d97f1073d8b97545d0a063c6832fe55c116166e2;hb=HEAD#l298]:
>  
> [ 
> 296|https://gitbox.apache.org/repos/asf?p=commons-net.git;a=blob;f=src/main/java/org/apache/commons/net/imap/IMAP.java;h=d97f1073d8b97545d0a063c6832fe55c116166e2;hb=HEAD#l296]
>                  int literalCount = IMAPReply.literalCount(line);
> {color:#ff}Above counts the size of the literal, in our case 8000 or 
> 80MB (for the first partial fetch response).{color}
>  
>  
> [ 
> 297|https://gitbox.apache.org/repos/asf?p=commons-net.git;a=blob;f=src/main/java/org/apache/commons/net/imap/IMAP.java;h=d97f1073d8b97545d0a063c6832fe55c116166e2;hb=HEAD#l297]
>                  final boolean isMultiLine = literalCount >= 0;
> [ 
> 298|https://gitbox.apache.org/repos/asf?p=commons-net.git;a=blob;f=src/main/java/org/apache/commons/net/imap/IMAP.java;h=d97f1073d8b97545d0a063c6832fe55c116166e2;hb=HEAD#l298]
>                  while (literalCount >= 0) {
> [ 
> 299|https://gitbox.apache.org/repos/asf?p=commons-net.git;a=blob;f=src/main/java/org/apache/commons/net/imap/IMAP.java;h=d97f1073d8b97545d0a063c6832fe55c116166e2;hb=HEAD#l299]
>                      line=_reader.readLine();
> [ 
> 300|https://gitbox.apache.org/repos/asf?p=commons-net.git;a=blob;f=src/main/java/org/apache/commons/net/imap/IMAP.java;h=d97f1073d8b97545d0a063c6832fe55c116166e2;hb=HEAD#l300]
>                      if (line == null)  
> {                                  throw new EOFException("Connection closed 
> without indication.");   }

[jira] [Comment Edited] (DAEMON-460) High CPU usage in prunsrv.exe since Daemon 1.3.3

2024-05-31 Thread Mark Linley (Jira)


[ 
https://issues.apache.org/jira/browse/DAEMON-460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17850467#comment-17850467
 ] 

Mark Linley edited comment on DAEMON-460 at 5/31/24 1:43 PM:
-

Hi

Our Java application is managed as a Windows service by Apache Commons Daemon. 
Upgrading to v1.3.4 showed what others have been seeing. One of the CPU cores 
constantly sits at close to 100% CPU utilization.

I was able to successfully configure Visual Studio 2022 to do remote debugging 
of prunsrv.exe v1.3.4.

I analyzed the CPU profile in Visual Studio once the debugger was connected to 
the remote prunsrv.exe process. I could see one of the threads of prunsrv.exe 
consuming almost 100% of the CPU. I looked at the top function, based on CPU 
usage, and it was prunsrv.c::apxJavaWait. With a breakpoint activated, my 
attention was drawn to this code in prunsrv.c::serviceMain :

!image-2024-05-29-15-56-35-585.png!

I could see in my debug session that the wait of 2 seconds is definitely not 
happening, resulting in the loop iterating as fast as the CPU core will allow 
it to, which likely explains the 100% CPU core utilization issue. Pull request 
64, mentioned above, did add this loop so it could be related to the problem we 
are seeing.

Stepping into apxHandleWait you arrive at the method prunsrv.c::apxJavaWait 
that receives the timeout value of 2000 milliseconds but the timeout value is 
effectively never used because the code keeps returning here:

!image-2024-05-29-15-57-37-665.png!

Here is the CPU usage breakdown by C function from the Visual Studio 2022 
diagnostic tool. The VM has 4 cores allocated, so you can see that one core is 
maxed out:

!image-2024-05-31-10-00-10-916.png!

 

Looking at the first code snippet above, it does seem that you need to have the 
stop timeout configured, in my case "--StopTimeout=30 ", to reproduce the issue.

If someone more experienced in C or one of the maintainers could comment, I'd 
appreciate it. I'm a Java developer :)

Thanks!

Mark


was (Author: plasm0r):
Hi

Our Java application is managed as a Windows service by Apache Commons Daemon. 
Upgrading to v1.3.4 showed what others have been seeing. One of the CPU cores 
constantly sits at close to 100% CPU utilization.

I was able to successfully configure Visual Studio 2022 to do remote debugging 
of prunsrv.exe v1.3.4.

I analyzed the CPU profile in Visual Studio once the debugger was connected to 
the remote prunsrv.exe process. I could see one of the threads of prunsrv.exe 
consuming almost 100% of the CPU. I looked at the top function, based on CPU 
usage, and it was prunsrv.c::apxJavaWait. With a breakpoint activated, my 
attention was drawn to this code in prunsrv.c::serviceMain :

!image-2024-05-29-15-56-35-585.png!

I could see in my debug session that the wait of 2 seconds is definitely not 
happening, resulting in the loop iterating as fast as the CPU core will allow 
it to, which likely explains the 100% CPU core utilization issue. Pull request 
64, mentioned above, did add this loop so it could be related to the problem we 
are seeing.

Stepping into apxHandleWait you arrive at the method prunsrv.c::apxJavaWait 
that receives the timeout value of 2000 milliseconds but the timeout value is 
effectively never used because the code keeps returning here:

!image-2024-05-29-15-57-37-665.png!

Here is the CPU usage breakdown by C function from the Visual Studio 2022 
diagnostic tool. The VM has 4 cores allocated, so you can see that one core is 
maxed out:

!image-2024-05-31-10-00-10-916.png!

 

If someone more experienced in C or one of the maintainers could comment, I'd 
appreciate it. I'm a Java developer :)

Thanks!

Mark

> High CPU usage in prunsrv.exe since Daemon 1.3.3
> 
>
> Key: DAEMON-460
>     URL: https://issues.apache.org/jira/browse/DAEMON-460
> Project: Commons Daemon
>  Issue Type: Bug
>  Components: prunsrv
>Affects Versions: 1.3.3
>Reporter: Japie vd Linde
>Priority: Major
> Attachments: EspRun-Service-Log.2023-06-05.log, 
> image-2023-05-31-09-31-21-485.png, image-2023-06-05-13-38-38-435.png, 
> image-2024-05-29-15-56-35-585.png, image-2024-05-29-15-57-37-665.png, 
> image-2024-05-31-10-00-10-916.png
>
>
> When using the --StopTimeout=30 parameter on service using prunsrv the CPU 
> usage is reported as very high on Windows. Rolling back to older prunsrv 
> seems to resolve the problem. 
> !image-2023-05-31-09-31-21-485.png!
> What could be the possible causes for this problem?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (DAEMON-460) High CPU usage in prunsrv.exe since Daemon 1.3.3

2024-05-31 Thread Mark Linley (Jira)


[ 
https://issues.apache.org/jira/browse/DAEMON-460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17850467#comment-17850467
 ] 

Mark Linley edited comment on DAEMON-460 at 5/31/24 1:04 PM:
-

Hi

Our Java application is managed as a Windows service by Apache Commons Daemon. 
Upgrading to v1.3.4 showed what others have been seeing. One of the CPU cores 
constantly sits at close to 100% CPU utilization.

I was able to successfully configure Visual Studio 2022 to do remote debugging 
of prunsrv.exe v1.3.4.

I analyzed the CPU profile in Visual Studio once the debugger was connected to 
the remote prunsrv.exe process. I could see one of the threads of prunsrv.exe 
consuming almost 100% of the CPU. I looked at the top function, based on CPU 
usage, and it was prunsrv.c::apxJavaWait. With a breakpoint activated, my 
attention was drawn to this code in prunsrv.c::serviceMain :

!image-2024-05-29-15-56-35-585.png!

I could see in my debug session that the wait of 2 seconds is definitely not 
happening, resulting in the loop iterating as fast as the CPU core will allow 
it to, which likely explains the 100% CPU core utilization issue. Pull request 
64, mentioned above, did add this loop so it could be related to the problem we 
are seeing.

Stepping into apxHandleWait you arrive at the method prunsrv.c::apxJavaWait 
that receives the timeout value of 2000 milliseconds but the timeout value is 
effectively never used because the code keeps returning here:

!image-2024-05-29-15-57-37-665.png!

Here is the CPU usage breakdown by C function from the Visual Studio 2022 
diagnostic tool. The VM has 4 cores allocated, so you can see that one core is 
maxed out:

!image-2024-05-31-10-00-10-916.png!

 

If someone more experienced in C or one of the maintainers could comment, I'd 
appreciate it. I'm a Java developer :)

Thanks!

Mark


was (Author: plasm0r):
Hi

Our Java application is managed as a Windows service by Apache Commons Daemon. 
Upgrading to v1.3.4 showed what others have been seeing. One of the CPU cores 
constantly sits at close to 100% CPU utilization.

I was able to successfully configure Visual Studio 2022 to do remote debugging 
of prunsrv.exe v1.3.4.

I analyzed the CPU profile in Visual Studio once the debugger was connected to 
the remote prunsrv.exe process. I could see one of the threads of prunsrv.exe 
consuming almost 100% of the CPU. I looked at the top function, based on CPU 
usage, and it was prunsrv.c::apxJavaWait. With a breakpoint activated, my 
attention was drawn to this code in prunsrv.c::serviceMain :

!image-2024-05-29-15-56-35-585.png!

I could see in my debug session that the wait of 2 seconds is definitely not 
happening, resulting in the loop iterating as fast as the CPU core will allow 
it to, which likely explains the 100% CPU core utilization issue. Pull request 
64, mentioned above, did add this loop so it could be related to the problem we 
are seeing.

Stepping into apxHandleWait you arrive at the method prunsrv.c::apxJavaWait 
that receives the timeout value of 2000 milliseconds but the timeout value is 
effectively never used because the code keeps returning here:

!image-2024-05-29-15-57-37-665.png!

Here is the CPU usage breakdown by C function from the Visual Studio 2022 
diagnostic tool, the VM has 4 cores allocated, so you can see that one core is 
maxed out:

!image-2024-05-31-10-00-10-916.png!

 

If someone more experienced in C or one of the maintainers could comment, I'd 
appreciate it. I'm a Java developer :)

Thanks!

Mark

> High CPU usage in prunsrv.exe since Daemon 1.3.3
> 
>
> Key: DAEMON-460
> URL: https://issues.apache.org/jira/browse/DAEMON-460
> Project: Commons Daemon
>  Issue Type: Bug
>  Components: prunsrv
>Affects Versions: 1.3.3
>Reporter: Japie vd Linde
>Priority: Major
> Attachments: EspRun-Service-Log.2023-06-05.log, 
> image-2023-05-31-09-31-21-485.png, image-2023-06-05-13-38-38-435.png, 
> image-2024-05-29-15-56-35-585.png, image-2024-05-29-15-57-37-665.png, 
> image-2024-05-31-10-00-10-916.png
>
>
> When using the --StopTimeout=30 parameter on service using prunsrv the CPU 
> usage is reported as very high on Windows. Rolling back to older prunsrv 
> seems to resolve the problem. 
> !image-2023-05-31-09-31-21-485.png!
> What could be the possible causes for this problem?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (DAEMON-460) High CPU usage in prunsrv.exe since Daemon 1.3.3

2024-05-31 Thread Mark Linley (Jira)


[ 
https://issues.apache.org/jira/browse/DAEMON-460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17850467#comment-17850467
 ] 

Mark Linley edited comment on DAEMON-460 at 5/31/24 1:01 PM:
-

Hi

Our Java application is managed as a Windows service by Apache Commons Daemon. 
Upgrading to v1.3.4 showed what others have been seeing. One of the CPU cores 
constantly sits at close to 100% CPU utilization.

I was able to successfully configure Visual Studio 2022 to do remote debugging 
of prunsrv.exe v1.3.4.

I analyzed the CPU profile in Visual Studio once the debugger was connected to 
the remote prunsrv.exe process. I could see one of the threads of prunsrv.exe 
consuming almost 100% of the CPU. I looked at the top function, based on CPU 
usage, and it was prunsrv.c::apxJavaWait. With a breakpoint activated, my 
attention was drawn to this code in prunsrv.c::serviceMain :

!image-2024-05-29-15-56-35-585.png!

I could see in my debug session that the wait of 2 seconds is definitely not 
happening, resulting in the loop iterating as fast as the CPU core will allow 
it to, which likely explains the 100% CPU core utilization issue. Pull request 
64, mentioned above, did add this loop so it could be related to the problem we 
are seeing.

Stepping into apxHandleWait you arrive at the method prunsrv.c::apxJavaWait 
that receives the timeout value of 2000 milliseconds but the timeout value is 
effectively never used because the code keeps returning here:

!image-2024-05-29-15-57-37-665.png!

Here is the CPU usage breakdown by C function from the Visual Studio 2022 
diagnostic tool, the VM has 4 cores allocated, so you can see that one core is 
maxed out:

!image-2024-05-31-10-00-10-916.png!

 

If someone more experienced in C or one of the maintainers could comment, I'd 
appreciate it. I'm a Java developer :)

Thanks!

Mark


was (Author: plasm0r):
Hi

Our Java application is managed as a Windows service by Apache Commons Daemon. 
Upgrading to v1.3.4 showed what others have been seeing. One of the CPU cores 
constantly sits at close to 100% CPU utilization.

I was able to successfully configure Visual Studio 2022 to do remote debugging 
of prunsrv.exe v1.3.4.

I analyzed the CPU profile in Visual Studio once the debugger was connected to 
the remote prunsrv.exe process. I could see one of the threads of prunsrv.exe 
consuming almost 100% of the CPU. I looked at the top function, based on CPU 
usage, and it was prunsrv.c::apxJavaWait. With a breakpoint activated, my 
attention was drawn to this code in prunsrv.c::serviceMain :

!image-2024-05-29-15-56-35-585.png!

I could see in my debug session that the wait of 2 seconds is definitely not 
happening, resulting in the loop iterating as fast as the CPU core will allow 
it to, which likely explains the 100% CPU core utilization issue. Pull request 
64, mentioned above, did add this loop so it could be related to the problem we 
are seeing.

Stepping into apxHandleWait you arrive at the method prunsrv.c::apxJavaWait 
that receives the timeout value of 2000 milliseconds but the timeout value is 
effectively never used because the code keeps returning here:

!image-2024-05-29-15-57-37-665.png!

If someone more experienced in C or one of the maintainers could comment, I'd 
appreciate it. I'm a Java developer :)

Thanks!

Mark

> High CPU usage in prunsrv.exe since Daemon 1.3.3
> 
>
> Key: DAEMON-460
> URL: https://issues.apache.org/jira/browse/DAEMON-460
> Project: Commons Daemon
>  Issue Type: Bug
>  Components: prunsrv
>Affects Versions: 1.3.3
>Reporter: Japie vd Linde
>Priority: Major
> Attachments: EspRun-Service-Log.2023-06-05.log, 
> image-2023-05-31-09-31-21-485.png, image-2023-06-05-13-38-38-435.png, 
> image-2024-05-29-15-56-35-585.png, image-2024-05-29-15-57-37-665.png, 
> image-2024-05-31-10-00-10-916.png
>
>
> When using the --StopTimeout=30 parameter on service using prunsrv the CPU 
> usage is reported as very high on Windows. Rolling back to older prunsrv 
> seems to resolve the problem. 
> !image-2023-05-31-09-31-21-485.png!
> What could be the possible causes for this problem?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (COMPRESS-675) Regression in pack200's Archive class -- underlying InputStream is now closed

2024-05-30 Thread Tim Allison (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17850690#comment-17850690
 ] 

Tim Allison commented on COMPRESS-675:
--

Where tomorrow in open source time = two weeks by most calendars...

We upgraded to the latest release in Tika, and the problematic file is now 
handled correctly. We haven't run the full regression tests yet, but I'll let 
you know if I find anything.

Thank you!

> Regression in pack200's Archive class -- underlying InputStream is now closed 
> --
>
> Key: COMPRESS-675
>     URL: https://issues.apache.org/jira/browse/COMPRESS-675
> Project: Commons Compress
>  Issue Type: Bug
>Affects Versions: 1.26.0, 1.26.1
>Reporter: Tim Allison
>Assignee: Gary D. Gregory
>Priority: Major
> Fix For: 1.26.2
>
>
> On TIKA-4221, in our recent regression tests, we noticed a change in the 
> behavior of Pack200's Archive class. In 1.26.0, the unwrapping of the 
> FilterInputStreams 
> (https://github.com/apache/commons-compress/blob/68cd2e7fb488b4ad8a9fdc81cae97ae6e8248ea5/src/main/java/org/apache/commons/compress/harmony/unpack200/Pack200UnpackerAdapter.java#L66)
>  effectively disables CloseShieldInputStreams, which means that the 
> underlying stream is closed after the parse.
> This causes problems when a Pack200 file is inside of an ArchiveInputStream.
> Not sure of the best solution. There's a triggering file on the Tika issue. 
> We can implement a crude workaround until this is fixed in commons-compress.
> Thank you!
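
For context on what the disabled shield normally buys you, here is a minimal 
sketch. It assumes commons-io's CloseShieldInputStream.wrap (available in recent 
commons-io versions); CloseShieldSketch, handleEntry and parseEntry are made-up 
stand-ins, not the Pack200 Archive API. The point is that a close shield keeps an 
enclosing archive stream usable even when a nested parser closes its input.

{code:java}
import java.io.IOException;
import java.io.InputStream;
import org.apache.commons.io.input.CloseShieldInputStream;

// Illustrative only: parseEntry() stands in for any component that closes the
// stream it is given (as the unwrapped Pack200 path effectively does).
public final class CloseShieldSketch {

    static void handleEntry(InputStream archiveEntryStream) throws IOException {
        // The shield absorbs close(), so archiveEntryStream itself stays open
        // and the enclosing ArchiveInputStream can move on to the next entry.
        parseEntry(CloseShieldInputStream.wrap(archiveEntryStream));
    }

    private static void parseEntry(InputStream in) throws IOException {
        in.close(); // placeholder for a parser that closes its input
    }
}
{code}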



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (CONFIGURATION-846) Unable to load multivalued configurations into Spring using ConfigurationPropertySource

2024-05-30 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/CONFIGURATION-846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory resolved CONFIGURATION-846.
---
Fix Version/s: 2.10.2
   Resolution: Fixed

> Unable to load multivalued configurations into Spring using 
> ConfigurationPropertySource
> ---
>
> Key: CONFIGURATION-846
> URL: https://issues.apache.org/jira/browse/CONFIGURATION-846
> Project: Commons Configuration
>  Issue Type: Bug
>Affects Versions: 2.10.0, 2.10.1
>Reporter: Tim Donohue
>Priority: Minor
> Fix For: 2.10.2
>
>
> We've run into an odd bug when using Commons Configuration v2 + Spring Boot 
> which I _believe_ is caused by changes in the PR 
> [https://github.com/apache/commons-configuration/pull/309] to address 
> https://issues.apache.org/jira/browse/CONFIGURATION-834.   
> During a routine upgrade from Commons Configuration v2.9.0 to v2.10.1, we 
> discovered that our multivalued configurations (i.e. an array or list of 
> values) were only loading the *first value* into Spring.  In other words, it 
> seems to no longer be possible to load multivalued configurations into Spring 
> Beans via something like this:
> {{@Value("${some.multivalued.prop}")}}
> {{String[] myMultivaluedVariable;}}
> I could be wrong, but I _believe_ it may be caused by the [change from  
> `getProperty()` to `getString()` in PR 
> 309|https://github.com/apache/commons-configuration/pull/309/files#diff-2f481434a16d50ce9df3af48f9e72fc8872050b0e8d1614fcd7420a8779db283R52],
>  because `getString()` is [documented to only return the *first value* in a 
> list of 
> values|https://commons.apache.org/proper/commons-configuration/userguide/howto_basicfeatures.html#List_handling]
> {quote}Of interest is also the last line of the example fragment. Here the 
> `getString()` method is called for a property that has multiple values. This 
> call will return the first value of the list.
> {quote}
> I don't know the proper solution to this issue, but I can confirm that 
> v2.9.0 handles multivalued configurations properly while both v2.10.0 and 
> v2.10.1 do not (in both of those versions we see only the first value 
> loaded into Spring for multivalued configurations).
> For our purposes, we are looking to create a custom 
> ConfigurationPropertySource to work around this issue in our codebase.  
> However, ideally, it'd be better to ensure the default 
> ConfigurationPropertySource is still able to handle multivalued 
> configurations.
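
As a quick illustration of the getString() vs. getProperty() behaviour described 
above, here is a minimal sketch using BaseConfiguration (the class name 
MultivaluedSketch is made up, and this is not the reporter's Spring setup; the 
property key is the one from the report):

{code:java}
import java.util.Arrays;
import org.apache.commons.configuration2.BaseConfiguration;

// Adding the same key twice creates a multivalued property.
public final class MultivaluedSketch {
    public static void main(String[] args) {
        BaseConfiguration config = new BaseConfiguration();
        config.addProperty("some.multivalued.prop", "a");
        config.addProperty("some.multivalued.prop", "b");

        // getString() returns only the first value of the list...
        System.out.println(config.getString("some.multivalued.prop"));   // a
        // ...while getProperty()/getStringArray() expose all values.
        System.out.println(config.getProperty("some.multivalued.prop")); // [a, b]
        System.out.println(Arrays.toString(
                config.getStringArray("some.multivalued.prop")));        // [a, b]
    }
}
{code}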



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (COLLECTIONS-855) Update the EnhancedDoubleHasher to correct the cube component of the hash

2024-05-29 Thread Claude Warren (Jira)


[ 
https://issues.apache.org/jira/browse/COLLECTIONS-855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17850493#comment-17850493
 ] 

Claude Warren commented on COLLECTIONS-855:
---

I agree. I was just pointing out that it sort of falls into that space where
we have to think about whether the change would break something in the wild. Do
we have a marker for such things? An annotation or something like that to
inform other developers going forward?




> Update the EnhancedDoubleHasher to correct the cube component of the hash
> -
>
> Key: COLLECTIONS-855
> URL: https://issues.apache.org/jira/browse/COLLECTIONS-855
> Project: Commons Collections
>  Issue Type: Bug
>  Components: Bloomfilter
>Affects Versions: 4.5.0-M1
>Reporter: Alex Herbert
>Priority: Blocker
>
> The EnhancedDoubleHasher currently computes the hash with the cube component 
> lagging by 1:
> {noformat}
> hash[i] = ( h1(x) - i*h2(x) - ((i-1)^3 - (i-1))/6 ) wrapped in [0, 
> bits){noformat}
> Correct this to the intended:
> {noformat}
> hash[i] = ( h1(x) - i*h2(x) - (i*i*i - i)/6 ) wrapped in [0, bits){noformat}
> This is a simple change in the current controlling loop from:
> {code:java}
> for (int i = 0; i < k; i++) { {code}
> to:
> {code:java}
> for (int i = 1; i <= k; i++) { {code}
>  
> Issue notified by Juan Manuel Gimeno Illa on the Commons dev mailing list 
> (see [https://lists.apache.org/thread/wjmwxzozrtf41ko9r0g7pzrrg11o923o]).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (COLLECTIONS-855) Update the EnhancedDoubleHasher to correct the cube component of the hash

2024-05-29 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/COLLECTIONS-855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17850490#comment-17850490
 ] 

Gary D. Gregory commented on COLLECTIONS-855:
-

There is no compatibility to worry about:

The code was part of the first milestone release, so we can change it.

We are not even talking about a binary or source compatibility issue.

IMO, we should do what is best for the long term.

> Update the EnhancedDoubleHasher to correct the cube component of the hash
> -
>
> Key: COLLECTIONS-855
> URL: https://issues.apache.org/jira/browse/COLLECTIONS-855
> Project: Commons Collections
>  Issue Type: Bug
>  Components: Bloomfilter
>Affects Versions: 4.5.0-M1
>Reporter: Alex Herbert
>Priority: Blocker
>
> The EnhancedDoubleHasher currently computes the hash with the cube component 
> lagging by 1:
> {noformat}
> hash[i] = ( h1(x) - i*h2(x) - ((i-1)^3 - (i-1))/6 ) wrapped in [0, 
> bits){noformat}
> Correct this to the intended:
> {noformat}
> hash[i] = ( h1(x) - i*h2(x) - (i*i*i - i)/6 ) wrapped in [0, bits){noformat}
> This is a simple change in the current controlling loop from:
> {code:java}
> for (int i = 0; i < k; i++) { {code}
> to:
> {code:java}
> for (int i = 1; i <= k; i++) { {code}
>  
> Issue notified by Juan Manuel Gimeno Illa on the Commons dev mailing list 
> (see [https://lists.apache.org/thread/wjmwxzozrtf41ko9r0g7pzrrg11o923o]).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (DAEMON-460) High CPU usage in prunsrv.exe since Daemon 1.3.3

2024-05-29 Thread Mark Linley (Jira)


[ 
https://issues.apache.org/jira/browse/DAEMON-460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17850467#comment-17850467
 ] 

Mark Linley edited comment on DAEMON-460 at 5/29/24 7:20 PM:
-

Hi

Our Java application is managed as a Windows service by Apache Commons Daemon. 
Upgrading to v1.3.4 showed what others have been seeing. One of the CPU cores 
constantly sits at close to 100% CPU utilization.

I was able to successfully configure Visual Studio 2022 to do remote debugging 
of prunsrv.exe v1.3.4.

I analyzed the CPU profile in Visual Studio once the debugger was connected to 
the remote prunsrv.exe process. I could see one of the threads of prunsrv.exe 
consuming almost 100% of the CPU. I looked at the top function, based on CPU 
usage, and it was prunsrv.c::apxJavaWait. With a breakpoint activated, my 
attention was drawn to this code in prunsrv.c::serviceMain :

!image-2024-05-29-15-56-35-585.png!

I could see in my debug session that the wait of 2 seconds is definitely not 
happening, resulting in the loop iterating as fast as the CPU core will allow 
it to, which likely explains the 100% CPU core utilization issue. Pull request 
64, mentioned above, did add this loop so it could be related to the problem we 
are seeing.

Stepping into apxHandleWait you arrive at the method prunsrv.c::apxJavaWait 
that receives the timeout value of 2000 milliseconds but the timeout value is 
effectively never used because the code keeps returning here:

!image-2024-05-29-15-57-37-665.png!

If someone more experienced in C or one of the maintainers could comment, I'd 
appreciate it. I'm a Java developer :)

Thanks!

Mark


was (Author: plasm0r):
Hi

Our Java application is managed as a Windows service by Apache Commons Daemon. 
Upgrading to v1.3.4 showed what others have been seeing. One of the CPU cores 
constantly sits at close to 100% CPU utilization.

I was able to successfully configure Visual Studio 2022 to do remote debugging 
of prunsrv.exe v1.3.4.

I analyzed the CPU profile in Visual Studio once the debugger was connected to 
the remote prunsrv.exe process. I could see one of the threads of prunsrv.exe 
consuming almost 100% of the CPU. I looked at the top function, based on CPU 
usage, and it was prunsrv.c::apxJavaWait. With a breakpoint activated, my 
attention was drawn to this code in prunsrv.c::serviceMain :

!image-2024-05-29-15-56-35-585.png!

I could see in my debug session that the wait of 2 seconds is definitely not 
happening, resulting in the loop iterating as fast as the CPU core will allow 
it to, which likely explains the 100% CPU core utilization issue. Pull request 
64, mentioned above, did add this loop so it could be related to the problem we 
are seeing.

Stepping into apxHandleWait you arrive at the method prunsrv.c::apxJavaWait 
that received the timeout value of 2000 milliseconds but the timeout value is 
effectively never used because the code keeps returning here:

!image-2024-05-29-15-57-37-665.png!

If someone more experienced in C or one of the maintainers could comment, I'd 
appreciate it. I'm a Java developer :)

Thanks!

Mark

> High CPU usage in prunsrv.exe since Daemon 1.3.3
> 
>
> Key: DAEMON-460
> URL: https://issues.apache.org/jira/browse/DAEMON-460
> Project: Commons Daemon
>  Issue Type: Bug
>  Components: prunsrv
>Affects Versions: 1.3.3
>Reporter: Japie vd Linde
>Priority: Major
> Attachments: EspRun-Service-Log.2023-06-05.log, 
> image-2023-05-31-09-31-21-485.png, image-2023-06-05-13-38-38-435.png, 
> image-2024-05-29-15-56-35-585.png, image-2024-05-29-15-57-37-665.png
>
>
> When using the --StopTimeout=30 parameter on service using prunsrv the CPU 
> usage is reported as very high on Windows. Rolling back to older prunsrv 
> seems to resolve the problem. 
> !image-2023-05-31-09-31-21-485.png!
> What could be the possible causes for this problem?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (DAEMON-460) High CPU usage in prunsrv.exe since Daemon 1.3.3

2024-05-29 Thread Mark Linley (Jira)


[ 
https://issues.apache.org/jira/browse/DAEMON-460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17850467#comment-17850467
 ] 

Mark Linley edited comment on DAEMON-460 at 5/29/24 7:09 PM:
-

Hi

Our Java application is managed as a Windows service by Apache Commons Daemon. 
Upgrading to v1.3.4 showed what others have been seeing. One of the CPU cores 
constantly sits at close to 100% CPU utilization.

I was able to successfully configure Visual Studio 2022 to do remote debugging 
of prunsrv.exe v1.3.4.

I analyzed the CPU profile in Visual Studio once the debugger was connected to 
the remote prunsrv.exe process. I could see one of the threads of prunsrv.exe 
consuming almost 100% of the CPU. I looked at the top function, based on CPU 
usage, and it was prunsrv.c::apxJavaWait. With a breakpoint activated, my 
attention was drawn to this code in prunsrv.c::serviceMain :

!image-2024-05-29-15-56-35-585.png!

I could see in my debug session that the wait of 2 seconds is definitely not 
happening, resulting in the loop iterating as fast as the CPU core will allow 
it to, which likely explains the 100% CPU core utilization issue. Pull request 
64, mentioned above, did add this loop so it could be related to the problem we 
are seeing.

Stepping into apxHandleWait you arrive at the method prunsrv.c::apxJavaWait 
that received the timeout value of 2000 milliseconds but the timeout value is 
effectively never used because the code keeps returning here:

!image-2024-05-29-15-57-37-665.png!

If someone more experienced in C or one of the maintainers could comment, I'd 
appreciate it. I'm a Java developer :)

Thanks!

Mark


was (Author: plasm0r):
Hi

Our Java application is managed as a Windows service by Apache Commons Daemon. 
Upgrading to v1.3.4 showed what others have been seeing. One of the CPU cores 
constantly sits at close to 100% CPU utilization.

I was able to successfully configure Visual Studio 2022 to do remote debugging 
of prunsrv.exe v1.3.4.

I recorded the CPU profile in Visual Studio once the debugger was connected to 
the remote prunsrv.exe process. I could see one of the threads of prunsrv.exe 
consuming almost 100% of the CPU. I looked at the top function, based on CPU 
usage, and it was prunsrv.c::apxJavaWait. With a breakpoint activated, my 
attention was drawn to this code in prunsrv.c::serviceMain :

!image-2024-05-29-15-56-35-585.png!

I could see in my debug session that the wait of 2 seconds is definitely not 
happening, resulting in the loop iterating as fast as the CPU core will allow 
it to, which likely explains the 100% CPU core utilization issue. Pull request 
64, mentioned above, did add this loop so it could be related to the problem we 
are seeing.

Stepping into apxHandleWait you arrive at the method prunsrv.c::apxJavaWait 
that received the timeout value of 2000 milliseconds but the timeout value is 
effectively never used because the code keeps returning here:

!image-2024-05-29-15-57-37-665.png!

If someone more experienced in C or one of the maintainers could comment, I'd 
appreciate it. I'm a Java developer :)

Thanks!

Mark

> High CPU usage in prunsrv.exe since Daemon 1.3.3
> 
>
> Key: DAEMON-460
> URL: https://issues.apache.org/jira/browse/DAEMON-460
> Project: Commons Daemon
>  Issue Type: Bug
>  Components: prunsrv
>Affects Versions: 1.3.3
>Reporter: Japie vd Linde
>Priority: Major
> Attachments: EspRun-Service-Log.2023-06-05.log, 
> image-2023-05-31-09-31-21-485.png, image-2023-06-05-13-38-38-435.png, 
> image-2024-05-29-15-56-35-585.png, image-2024-05-29-15-57-37-665.png
>
>
> When using the --StopTimeout=30 parameter on service using prunsrv the CPU 
> usage is reported as very high on Windows. Rolling back to older prunsrv 
> seems to resolve the problem. 
> !image-2023-05-31-09-31-21-485.png!
> What could be the possible causes for this problem?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (DAEMON-460) High CPU usage in prunsrv.exe since Daemon 1.3.3

2024-05-29 Thread Mark Linley (Jira)


[ 
https://issues.apache.org/jira/browse/DAEMON-460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17850467#comment-17850467
 ] 

Mark Linley edited comment on DAEMON-460 at 5/29/24 7:08 PM:
-

Hi

Our Java application is managed as a Windows service by Apache Commons Daemon. 
Upgrading to v1.3.4 showed what others have been seeing. One of the CPU cores 
constantly sits at close to 100% CPU utilization.

I was able to successfully configure Visual Studio 2022 to do remote debugging 
of prunsrv.exe v1.3.4.

I recorded the CPU profile in Visual Studio once the debugger was connected to 
the remote prunsrv.exe process. I could see one of the threads of prunsrv.exe 
consuming almost 100% of the CPU. I looked at the top function, based on CPU 
usage, and it was prunsrv.c::apxJavaWait. With a breakpoint activated, my 
attention was drawn to this code in prunsrv.c::serviceMain :

!image-2024-05-29-15-56-35-585.png!

I could see in my debug session that the wait of 2 seconds is definitely not 
happening, resulting in the loop iterating as fast as the CPU core will allow 
it to, which likely explains the 100% CPU core utilization issue. Pull request 
64, mentioned above, did add this loop so it could be related to the problem we 
are seeing.

Stepping into apxHandleWait you arrive at the method prunsrv.c::apxJavaWait 
that received the timeout value of 2000 milliseconds but the timeout value is 
effectively never used because the code keeps returning here:

!image-2024-05-29-15-57-37-665.png!

If someone more experienced in C or one of the maintainers could comment, I'd 
appreciate it. I'm a Java developer :)

Thanks!

Mark


was (Author: plasm0r):
Hi

Our Java application is managed as a Windows service by Apache Commons Daemon. 
Upgrading to v1.3.4 showed what others have been seeing. One of the CPU cores 
constantly sits at close to 100% CPU utilization.

I was able to successfully configure Visual Studio 2022 to do remote debugging 
of prunsrv.exe v1.3.4.

I recorded the CPU profile in Visual Studio once the debugger was connected to 
the remote prunsrv.exe process. I could see one of the threads of prunsrv.exe 
consuming almost 100% of the CPU. I looked at the top function, based on CPU 
usage, and it was prunsrv.c::apxJavaWait. With a breakpoint activated, my 
attention was drawn to this code in prunsrv.c::serviceMain :

!image-2024-05-29-15-56-35-585.png!

I could see in my debug session that the wait of 2 seconds is definitely not 
happening, resulting in the loop iterating as fast as the CPU core will allow 
it to, which likely explains the 100% CPU core utilization issue. Pull request 
64, mentioned above, did add this loop so it could be related to the problem we 
are seeing.

Stepping down into the callstack in apxHandleWait you arrive at the method 
prunsrv.c::apxJavaWait that received the timeout value of 2000 milliseconds but 
the timeout value is effectively never used because the code keeps returning 
here:

!image-2024-05-29-15-57-37-665.png!

If someone more experienced in C or one of the maintainers could comment, I'd 
appreciate it. I'm a Java developer :)

Thanks!

Mark

> High CPU usage in prunsrv.exe since Daemon 1.3.3
> 
>
> Key: DAEMON-460
> URL: https://issues.apache.org/jira/browse/DAEMON-460
> Project: Commons Daemon
>  Issue Type: Bug
>  Components: prunsrv
>Affects Versions: 1.3.3
>Reporter: Japie vd Linde
>Priority: Major
> Attachments: EspRun-Service-Log.2023-06-05.log, 
> image-2023-05-31-09-31-21-485.png, image-2023-06-05-13-38-38-435.png, 
> image-2024-05-29-15-56-35-585.png, image-2024-05-29-15-57-37-665.png
>
>
> When using the --StopTimeout=30 parameter on service using prunsrv the CPU 
> usage is reported as very high on Windows. Rolling back to older prunsrv 
> seems to resolve the problem. 
> !image-2023-05-31-09-31-21-485.png!
> What could be the possible causes for this problem?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (DAEMON-460) High CPU usage in prunsrv.exe since Daemon 1.3.3

2024-05-29 Thread Mark Linley (Jira)


[ 
https://issues.apache.org/jira/browse/DAEMON-460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17850467#comment-17850467
 ] 

Mark Linley edited comment on DAEMON-460 at 5/29/24 7:08 PM:
-

Hi

Our Java application is managed as a Windows service by Apache Commons Daemon. 
Upgrading to v1.3.4 showed what others have been seeing. One of the CPU cores 
constantly sits at close to 100% CPU utilization.

I was able to successfully configure Visual Studio 2022 to do remote debugging 
of prunsrv.exe v1.3.4.

I recorded the CPU profile in Visual Studio once the debugger was connected to 
the remote prunsrv.exe process. I could see one of the threads of prunsrv.exe 
consuming almost 100% of the CPU. I looked at the top function, based on CPU 
usage, and it was prunsrv.c::apxJavaWait. With a breakpoint activated, my 
attention was drawn to this code in prunsrv.c::serviceMain :

!image-2024-05-29-15-56-35-585.png!

I could see in my debug session that the wait of 2 seconds is definitely not 
happening, resulting in the loop iterating as fast as the CPU core will allow 
it to, which likely explains the 100% CPU core utilization issue. Pull request 
64, mentioned above, did add this loop so it could be related to the problem we 
are seeing.

Stepping down into the callstack in apxHandleWait you arrive at the method 
prunsrv.c::apxJavaWait that received the timeout value of 2000 milliseconds but 
the timeout value is effectively never used because the code keeps returning 
here:

!image-2024-05-29-15-57-37-665.png!

If someone more experienced in C or one of the maintainers could comment, I'd 
appreciate it. I'm a Java developer :)

Thanks!

Mark


was (Author: plasm0r):
Hi

Our Java application is managed as a Windows service by Apache Commons Daemon. 
Upgrading to v1.3.4 showed what others have been seeing. One of the CPU cores 
constantly sits at close to 100% CPU utilization.

I was able to successfully configure Visual Studio 2022 to do remote debugging 
of prunsrv.exe v1.3.4.

I recorded the CPU profile in Visual Studio once the debugger was connected to 
the remote prunsrv.exe process. I could see one of the threads of prunsrv.exe 
consuming almost 100% of the CPU. I looked at the top function, based on CPU 
usage, and it was prunsrv.c::apxJavaWait. With a breakpoint activated, my 
attention was drawn to this code in prunsrv.c::serviceMain :

!image-2024-05-29-15-56-35-585.png!

I could see in my debug session that the wait of 2 seconds is definitely not 
happening, resulting in the loop iterating as fast as the CPU core will allow 
it to, which likely explains the 100% CPU core utilization issue. Pull request 
64, mentioned above, did add this loop so it could be related to the problem we 
are seeing.

Stepping down into the callstack in apxHandlerWait you arrive at the method 
prunsrv.c::apxJavaWait that received the timeout value of 2000 milliseconds but 
the timeout value is effectively never used because the code keeps returning 
here:

!image-2024-05-29-15-57-37-665.png!

If someone more experienced in C or one of the maintainers could comment, I'd 
appreciate it. I'm a Java developer :)

Thanks!

Mark

> High CPU usage in prunsrv.exe since Daemon 1.3.3
> 
>
> Key: DAEMON-460
> URL: https://issues.apache.org/jira/browse/DAEMON-460
> Project: Commons Daemon
>  Issue Type: Bug
>  Components: prunsrv
>Affects Versions: 1.3.3
>Reporter: Japie vd Linde
>Priority: Major
> Attachments: EspRun-Service-Log.2023-06-05.log, 
> image-2023-05-31-09-31-21-485.png, image-2023-06-05-13-38-38-435.png, 
> image-2024-05-29-15-56-35-585.png, image-2024-05-29-15-57-37-665.png
>
>
> When using the --StopTimeout=30 parameter on service using prunsrv the CPU 
> usage is reported as very high on Windows. Rolling back to older prunsrv 
> seems to resolve the problem. 
> !image-2023-05-31-09-31-21-485.png!
> What could be the possible causes for this problem?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (DAEMON-460) High CPU usage in prunsrv.exe since Daemon 1.3.3

2024-05-29 Thread Mark Linley (Jira)


[ 
https://issues.apache.org/jira/browse/DAEMON-460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17850467#comment-17850467
 ] 

Mark Linley edited comment on DAEMON-460 at 5/29/24 7:04 PM:
-

Hi

Our Java application is managed as a Windows service by Apache Commons Daemon. 
Upgrading to v1.3.4 showed what others have been seeing. One of the CPU cores 
constantly sits at close to 100% CPU utilization.

I was able to successfully configure Visual Studio 2022 to do remote debugging 
of prunsrv.exe v1.3.4.

I recorded the CPU profile in Visual Studio once the debugger was connected to 
the remote prunsrv.exe process. I could see one of the threads of prunsrv.exe 
consuming almost 100% of the CPU. I looked at the top function, based on CPU 
usage, and it was prunsrv.c::apxJavaWait. With a breakpoint activated, my 
attention was drawn to this code in prunsrv.c::serviceMain :

!image-2024-05-29-15-56-35-585.png!

I could see in my debug session that the wait of 2 seconds is definitely not 
happening, resulting in the loop iterating as fast as the CPU core will allow 
it to, which likely explains the 100% CPU core utilization issue. Pull request 
64, mentioned above, did add this loop so it could be related to the problem we 
are seeing.

Stepping down into the callstack in apxHandlerWait you arrive at the method 
prunsrv.c::apxJavaWait that received the timeout value of 2000 milliseconds but 
the timeout value is effectively never used because the code keeps returning 
here:

!image-2024-05-29-15-57-37-665.png!

If someone more experienced in C or one of the maintainers could comment, I'd 
appreciate it. I'm a Java developer :)

Thanks!

Mark


was (Author: plasm0r):
Hi

Our Java application is managed as a Windows service by Apache Commons Daemon. 
Upgrading to v1.3.4 showed what others have been seeing. One of the CPU cores 
constantly sits at close to 100% CPU utilization.

I was able to successfully configure Visual Studio 2022 to do remote debugging 
of prunsrv.exe v1.3.4.

I recorded the CPU profile in Visual Studio once the debugger was connected to 
the remote prunsrv.exe process. I could see one of the threads of prunsrv.exe 
consuming almost 100% of the CPU. I looked at the top function, based on CPU 
usage, and it was prunsrv.c::apxJavaWait. With a breakpoint activated, my 
attention was drawn to this code in prunsrv.c::serviceMain :

!image-2024-05-29-15-56-35-585.png!

I could see in my debug session that the wait of 2 seconds is definitely not 
happening, resulting in the loop iterating as fast as the CPU core will allow 
it to, which likely explains the 100% CPU utilization issue. Pull request 64, 
mentioned above, did add this loop so it could be related to the problem we are 
seeing.

Stepping down into the callstack in apxHandlerWait you arrive at the method 
prunsrv.c::apxJavaWait that received the timeout value of 2000 milliseconds but 
the timeout value is effectively never used because the code keeps returning 
here:

!image-2024-05-29-15-57-37-665.png!

If someone more experienced in C or one of the maintainers could comment, I'd 
appreciate it. I'm a Java developer :)

Thanks!

Mark

> High CPU usage in prunsrv.exe since Daemon 1.3.3
> 
>
> Key: DAEMON-460
> URL: https://issues.apache.org/jira/browse/DAEMON-460
> Project: Commons Daemon
>  Issue Type: Bug
>  Components: prunsrv
>Affects Versions: 1.3.3
>Reporter: Japie vd Linde
>Priority: Major
> Attachments: EspRun-Service-Log.2023-06-05.log, 
> image-2023-05-31-09-31-21-485.png, image-2023-06-05-13-38-38-435.png, 
> image-2024-05-29-15-56-35-585.png, image-2024-05-29-15-57-37-665.png
>
>
> When using the --StopTimeout=30 parameter on service using prunsrv the CPU 
> usage is reported as very high on Windows. Rolling back to older prunsrv 
> seems to resolve the problem. 
> !image-2023-05-31-09-31-21-485.png!
> What could be the possible causes for this problem?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (DAEMON-460) High CPU usage in prunsrv.exe since Daemon 1.3.3

2024-05-29 Thread Mark Linley (Jira)


[ 
https://issues.apache.org/jira/browse/DAEMON-460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17850467#comment-17850467
 ] 

Mark Linley commented on DAEMON-460:


Hi

Our Java application is managed as a Windows service by Apache Commons Daemon. 
Upgrading to v1.3.4 showed what others have been seeing. One of the CPU cores 
constantly sits at close to 100% CPU utilization.

I was able to successfully configure Visual Studio 2022 to do remote debugging 
of prunsrv.exe v1.3.4.

I recorded the CPU profile in Visual Studio once the debugger was connected to 
the remote prunsrv.exe process. I could see one of the threads of prunsrv.exe 
consuming almost 100% of the CPU. I looked at the top function, based on CPU 
usage, and it was prunsrv.c::apxJavaWait. With a breakpoint activated, my 
attention was drawn to this code in prunsrv.c::serviceMain :

!image-2024-05-29-15-56-35-585.png!

I could see in my debug session that the wait of 2 seconds is definitely not 
happening, resulting in the loop iterating as fast as the CPU core will allow 
it to, which likely explains the 100% CPU utilization issue. Pull request 64, 
mentioned above, did add this loop so it could be related to the problem we are 
seeing.

Stepping down into the callstack in apxHandlerWait you arrive at the method 
prunsrv.c::apxJavaWait that received the timeout value of 2000 milliseconds but 
the timeout value is effectively never used because the code keeps returning 
here:

!image-2024-05-29-15-57-37-665.png!

If someone more experienced in C or one of the maintainers could comment, I'd 
appreciate it. I'm a Java developer :)

Thanks!

Mark

> High CPU usage in prunsrv.exe since Daemon 1.3.3
> 
>
> Key: DAEMON-460
> URL: https://issues.apache.org/jira/browse/DAEMON-460
> Project: Commons Daemon
>  Issue Type: Bug
>  Components: prunsrv
>Affects Versions: 1.3.3
>Reporter: Japie vd Linde
>Priority: Major
> Attachments: EspRun-Service-Log.2023-06-05.log, 
> image-2023-05-31-09-31-21-485.png, image-2023-06-05-13-38-38-435.png, 
> image-2024-05-29-15-56-35-585.png, image-2024-05-29-15-57-37-665.png
>
>
> When using the --StopTimeout=30 parameter on service using prunsrv the CPU 
> usage is reported as very high on Windows. Rolling back to older prunsrv 
> seems to resolve the problem. 
> !image-2023-05-31-09-31-21-485.png!
> What could be the possible causes for this problem?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (DAEMON-460) High CPU usage in prunsrv.exe since Daemon 1.3.3

2024-05-29 Thread Mark Linley (Jira)


 [ 
https://issues.apache.org/jira/browse/DAEMON-460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Linley updated DAEMON-460:
---
Attachment: image-2024-05-29-15-57-37-665.png

> High CPU usage in prunsrv.exe since Daemon 1.3.3
> 
>
> Key: DAEMON-460
> URL: https://issues.apache.org/jira/browse/DAEMON-460
> Project: Commons Daemon
>  Issue Type: Bug
>  Components: prunsrv
>Affects Versions: 1.3.3
>Reporter: Japie vd Linde
>Priority: Major
> Attachments: EspRun-Service-Log.2023-06-05.log, 
> image-2023-05-31-09-31-21-485.png, image-2023-06-05-13-38-38-435.png, 
> image-2024-05-29-15-56-35-585.png, image-2024-05-29-15-57-37-665.png
>
>
> When using the --StopTimeout=30 parameter on service using prunsrv the CPU 
> usage is reported as very high on Windows. Rolling back to older prunsrv 
> seems to resolve the problem. 
> !image-2023-05-31-09-31-21-485.png!
> What could be the possible causes for this problem?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (DAEMON-460) High CPU usage in prunsrv.exe since Daemon 1.3.3

2024-05-29 Thread Mark Linley (Jira)


 [ 
https://issues.apache.org/jira/browse/DAEMON-460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Linley updated DAEMON-460:
---
Attachment: image-2024-05-29-15-56-35-585.png

> High CPU usage in prunsrv.exe since Daemon 1.3.3
> 
>
> Key: DAEMON-460
> URL: https://issues.apache.org/jira/browse/DAEMON-460
> Project: Commons Daemon
>  Issue Type: Bug
>  Components: prunsrv
>Affects Versions: 1.3.3
>Reporter: Japie vd Linde
>Priority: Major
> Attachments: EspRun-Service-Log.2023-06-05.log, 
> image-2023-05-31-09-31-21-485.png, image-2023-06-05-13-38-38-435.png, 
> image-2024-05-29-15-56-35-585.png
>
>
> When using the --StopTimeout=30 parameter on service using prunsrv the CPU 
> usage is reported as very high on Windows. Rolling back to older prunsrv 
> seems to resolve the problem. 
> !image-2023-05-31-09-31-21-485.png!
> What could be the possible causes for this problem?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (LANG-1733) Add null-safe Consumers.accept() and Functions.apply()

2024-05-29 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/LANG-1733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory resolved LANG-1733.
---
Fix Version/s: 3.15.0
   Resolution: Fixed

> Add null-safe Consumers.accept() and Functions.apply()
> --
>
> Key: LANG-1733
> URL: https://issues.apache.org/jira/browse/LANG-1733
> Project: Commons Lang
>  Issue Type: New Feature
>Reporter: Jongjin Bae
>Priority: Major
> Fix For: 3.15.0
>
>
> I have a new suggestion about null handling.
> I usually check whether an object is null before using it, to avoid an NPE.
> It is pretty obvious, but it is quite cumbersome and adds some overhead.
> So I want to introduce the following null-safe methods in the ObjectUtils class 
> to make it easy to handle null without using an if/else statement, the 
> Optional class, etc.
> {code:java}
> public static <T, R> R applyIfNotNull(final T object, final Function<T, R> function) {
>     return object != null ? function.apply(object) : null;
> }
> public static <T> void acceptIfNotNull(final T object, final Consumer<T> consumer) {
>     if (object != null) {
>         consumer.accept(object);
>     }
> }
> {code}
> What do you think about it?
> If it looks good, I will implement this feature.
>  
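
A short usage sketch of the helpers proposed above (the method bodies are copied 
from the description; NullSafeSketch and the example values are made up, and the 
feature ultimately shipped under Consumers.accept()/Functions.apply() per the 
summary, so treat the names as illustrative):

{code:java}
import java.util.function.Consumer;
import java.util.function.Function;

// Self-contained copy of the proposed helpers plus a usage example.
public final class NullSafeSketch {

    static <T, R> R applyIfNotNull(final T object, final Function<T, R> function) {
        return object != null ? function.apply(object) : null;
    }

    static <T> void acceptIfNotNull(final T object, final Consumer<T> consumer) {
        if (object != null) {
            consumer.accept(object);
        }
    }

    public static void main(String[] args) {
        String maybeNull = null;
        Integer length = applyIfNotNull(maybeNull, String::length); // null, no NPE
        acceptIfNotNull(maybeNull, System.out::println);            // no-op for null
        System.out.println(length);                                 // prints "null"
    }
}
{code}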



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (LANG-1733) Add null-safe Consumers.accept() and Functions.apply()

2024-05-29 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/LANG-1733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory updated LANG-1733:
--
Summary: Add null-safe Consumers.accept() and Functions.apply()  (was: 
`null` handling feature in ObjectUtils)

> Add null-safe Consumers.accept() and Functions.apply()
> --
>
> Key: LANG-1733
> URL: https://issues.apache.org/jira/browse/LANG-1733
> Project: Commons Lang
>  Issue Type: New Feature
>Reporter: Jongjin Bae
>Priority: Major
>
> I have a new suggestion about null handling.
> I usually check whether an object is null before using it, to avoid an NPE.
> It is pretty obvious, but it is quite cumbersome and adds some overhead.
> So I want to introduce the following null-safe methods in the ObjectUtils class 
> to make it easy to handle null without using an if/else statement, the 
> Optional class, etc.
> {code:java}
> public static <T, R> R applyIfNotNull(final T object, final Function<T, R> function) {
>     return object != null ? function.apply(object) : null;
> }
> public static <T> void acceptIfNotNull(final T object, final Consumer<T> consumer) {
>     if (object != null) {
>         consumer.accept(object);
>     }
> }
> {code}
> What do you think about it?
> If it looks good, I will implement this feature.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (COLLECTIONS-855) Update the EnhancedDoubleHasher to correct the cube component of the hash

2024-05-29 Thread Claude Warren (Jira)


[ 
https://issues.apache.org/jira/browse/COLLECTIONS-855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17850427#comment-17850427
 ] 

Claude Warren commented on COLLECTIONS-855:
---

Users are expected to rely on the output.  They are expected to use it to
calculate hashes for the filters, so any stored filters will not match if
there is a change.  If this code had been released, this change would be
like the breaking change in the murmur3 hash in Commons Codec a while back.






> Update the EnhancedDoubleHasher to correct the cube component of the hash
> -
>
> Key: COLLECTIONS-855
> URL: https://issues.apache.org/jira/browse/COLLECTIONS-855
> Project: Commons Collections
>  Issue Type: Bug
>  Components: Bloomfilter
>Affects Versions: 4.5.0-M1
>Reporter: Alex Herbert
>Priority: Blocker
>
> The EnhancedDoubleHasher currently computes the hash with the cube component 
> lagging by 1:
> {noformat}
> hash[i] = ( h1(x) - i*h2(x) - ((i-1)^3 - (i-1))/6 ) wrapped in [0, 
> bits){noformat}
> Correct this to the intended:
> {noformat}
> hash[i] = ( h1(x) - i*h2(x) - (i*i*i - i)/6 ) wrapped in [0, bits){noformat}
> This is a simple change in the current controlling loop from:
> {code:java}
> for (int i = 0; i < k; i++) { {code}
> to:
> {code:java}
> for (int i = 1; i <= k; i++) { {code}
>  
> Issue notified by Juan Manuel Gimeno Illa on the Commons dev mailing list 
> (see [https://lists.apache.org/thread/wjmwxzozrtf41ko9r0g7pzrrg11o923o]).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (COLLECTIONS-855) Update the EnhancedDoubleHasher to correct the cube component of the hash

2024-05-29 Thread Alex Herbert (Jira)


[ 
https://issues.apache.org/jira/browse/COLLECTIONS-855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17850349#comment-17850349
 ] 

Alex Herbert commented on COLLECTIONS-855:
--

It is not a breaking API change. It is a functional change. This may break code 
that relies on the output sequence. But this should be treated as a black box. 
Users should not be relying on the output, just like they should not rely on 
the hash code of an object being a specific value.

I already made the change locally. It only breaks the EnhancedDoubleHasher 
tests due to their hard-coded expected sequence.

IIUC there are two ways to do it. The change with the extra test outside the 
loop will probably not have coverage:

Change the loop to:
{code:java}
// Old:
// for (int i = 0; i < k; i++) {
for (int i = 1; i <= k; i++) {
    if (!consumer.test(index)) {
        return false;
    }
    // Update index and handle wrapping
    index -= inc;
    index = index < 0 ? index + bits : index;

    // Incorporate the counter into the increment to create a
    // tetrahedral number additional term, and handle wrapping.
    inc -= i;
    inc = inc < 0 ? inc + bits : inc;
}
{code}

Change the loop to only compute an update if it is to be consumed:

{code:java}
if (!consumer.test(index)) {
    return false;
}
for (int i = 1; i < k; i++) {
    // Update index and handle wrapping
    index -= inc;
    index = index < 0 ? index + bits : index;

    // Incorporate the counter into the increment to create a
    // tetrahedral number additional term, and handle wrapping.
    inc -= i;
    inc = inc < 0 ? inc + bits : inc;

    if (!consumer.test(index)) {
        return false;
    }
}
{code}
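
For cross-checking either variant, a closed-form reference taken straight from 
the formula in the description may help. This is a sketch only, not the library 
code: h1, h2, bits and k stand for the hasher's initial hash, increment, filter 
size and number of hash functions, the method name is made up, and which i 
values are actually consumed is determined by the loop above.

{code:java}
// Closed form of the intended sequence: (h1 - i*h2 - (i^3 - i)/6) wrapped to [0, bits).
static long expectedIndex(long h1, long h2, long bits, int i) {
    long tetra = ((long) i * i * i - i) / 6; // exact: i^3 - i is always divisible by 6
    return Math.floorMod(h1 - i * h2 - tetra, bits);
}
{code}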


> Update the EnhancedDoubleHasher to correct the cube component of the hash
> -
>
> Key: COLLECTIONS-855
> URL: https://issues.apache.org/jira/browse/COLLECTIONS-855
> Project: Commons Collections
>  Issue Type: Bug
>  Components: Bloomfilter
>Affects Versions: 4.5.0-M1
>Reporter: Alex Herbert
>Priority: Blocker
>
> The EnhancedDoubleHasher currently computes the hash with the cube component 
> lagging by 1:
> {noformat}
> hash[i] = ( h1(x) - i*h2(x) - ((i-1)^3 - (i-1))/6 ) wrapped in [0, 
> bits){noformat}
> Correct this to the intended:
> {noformat}
> hash[i] = ( h1(x) - i*h2(x) - (i*i*i - i)/6 ) wrapped in [0, bits){noformat}
> This is a simple change in the current controlling loop from:
> {code:java}
> for (int i = 0; i < k; i++) { {code}
> to:
> {code:java}
> for (int i = 1; i <= k; i++) { {code}
>  
> Issue notified by Juan Manuel Gimeno Illa on the Commons dev mailing list 
> (see [https://lists.apache.org/thread/wjmwxzozrtf41ko9r0g7pzrrg11o923o]).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (COLLECTIONS-855) Update the EnhancedDoubleHasher to correct the cube component of the hash

2024-05-29 Thread Claude Warren (Jira)


[ 
https://issues.apache.org/jira/browse/COLLECTIONS-855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17850343#comment-17850343
 ] 

Claude Warren commented on COLLECTIONS-855:
---

With C/C EU next week I am a bit slammed, but I wanted to make sure this was
on the radar before we cut the next candidate.

If it is not resolved by the end of next week, I will take a look at it.

Claude




> Update the EnhancedDoubleHasher to correct the cube component of the hash
> -
>
> Key: COLLECTIONS-855
> URL: https://issues.apache.org/jira/browse/COLLECTIONS-855
> Project: Commons Collections
>  Issue Type: Bug
>  Components: Bloomfilter
>Affects Versions: 4.5.0-M1
>Reporter: Alex Herbert
>Priority: Blocker
>
> The EnhancedDoubleHasher currently computes the hash with the cube component 
> lagging by 1:
> {noformat}
> hash[i] = ( h1(x) - i*h2(x) - ((i-1)^3 - (i-1))/6 ) wrapped in [0, 
> bits){noformat}
> Correct this to the intended:
> {noformat}
> hash[i] = ( h1(x) - i*h2(x) - (i*i*i - i)/6 ) wrapped in [0, bits){noformat}
> This is a simple change in the current controlling loop from:
> {code:java}
> for (int i = 0; i < k; i++) { {code}
> to:
> {code:java}
> for (int i = 1; i <= k; i++) { {code}
>  
> Issue notified by Juan Manuel Gimeno Illa on the Commons dev mailing list 
> (see [https://lists.apache.org/thread/wjmwxzozrtf41ko9r0g7pzrrg11o923o]).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (COLLECTIONS-855) Update the EnhancedDoubleHasher to correct the cube component of the hash

2024-05-29 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/COLLECTIONS-855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17850338#comment-17850338
 ] 

Gary D. Gregory commented on COLLECTIONS-855:
-

PRs welcome :)

I'd like to go for an M2 release next, which I can cut anytime. 

> Update the EnhancedDoubleHasher to correct the cube component of the hash
> -
>
> Key: COLLECTIONS-855
> URL: https://issues.apache.org/jira/browse/COLLECTIONS-855
> Project: Commons Collections
>  Issue Type: Bug
>  Components: Bloomfilter
>Affects Versions: 4.5.0-M1
>Reporter: Alex Herbert
>Priority: Blocker
>
> The EnhancedDoubleHasher currently computes the hash with the cube component 
> lagging by 1:
> {noformat}
> hash[i] = ( h1(x) - i*h2(x) - ((i-1)^3 - (i-1))/6 ) wrapped in [0, 
> bits){noformat}
> Correct this to the intended:
> {noformat}
> hash[i] = ( h1(x) - i*h2(x) - (i*i*i - i)/6 ) wrapped in [0, bits){noformat}
> This is a simple change in the current controlling loop from:
> {code:java}
> for (int i = 0; i < k; i++) { {code}
> to:
> {code:java}
> for (int i = 1; i <= k; i++) { {code}
>  
> Issue notified by Juan Manuel Gimeno Illa on the Commons dev mailing list 
> (see [https://lists.apache.org/thread/wjmwxzozrtf41ko9r0g7pzrrg11o923o]).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (COLLECTIONS-855) Update the EnhancedDoubleHasher to correct the cube component of the hash

2024-05-29 Thread Claude Warren (Jira)


 [ 
https://issues.apache.org/jira/browse/COLLECTIONS-855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Claude Warren updated COLLECTIONS-855:
--
Component/s: Bloomfilter

> Update the EnhancedDoubleHasher to correct the cube component of the hash
> -
>
> Key: COLLECTIONS-855
> URL: https://issues.apache.org/jira/browse/COLLECTIONS-855
> Project: Commons Collections
>  Issue Type: Bug
>  Components: Bloomfilter
>Affects Versions: 4.5.0-M1
>Reporter: Alex Herbert
>Priority: Blocker
>
> The EnhancedDoubleHasher currently computes the hash with the cube component 
> lagging by 1:
> {noformat}
> hash[i] = ( h1(x) - i*h2(x) - ((i-1)^3 - (i-1))/6 ) wrapped in [0, 
> bits){noformat}
> Correct this to the intended:
> {noformat}
> hash[i] = ( h1(x) - i*h2(x) - (i*i*i - i)/6 ) wrapped in [0, bits){noformat}
> This is a simple change in the current controlling loop from:
> {code:java}
> for (int i = 0; i < k; i++) { {code}
> to:
> {code:java}
> for (int i = 1; i <= k; i++) { {code}
>  
> Issue notified by Juan Manuel Gimeno Illa on the Commons dev mailing list 
> (see [https://lists.apache.org/thread/wjmwxzozrtf41ko9r0g7pzrrg11o923o]).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (COLLECTIONS-855) Update the EnhancedDoubleHasher to correct the cube component of the hash

2024-05-29 Thread Claude Warren (Jira)


 [ 
https://issues.apache.org/jira/browse/COLLECTIONS-855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Claude Warren updated COLLECTIONS-855:
--
Priority: Blocker  (was: Trivial)

This is a breaking API change and should be fixed before 4.5.0-M2 or release.

The effect is that the values generated from the 4.5.0-M1 EnhancedDoubleHasher 
will not be the same as values generated after this fix.

> Update the EnhancedDoubleHasher to correct the cube component of the hash
> -
>
> Key: COLLECTIONS-855
> URL: https://issues.apache.org/jira/browse/COLLECTIONS-855
> Project: Commons Collections
>  Issue Type: Bug
>Affects Versions: 4.5.0-M1
>Reporter: Alex Herbert
>Priority: Blocker
>
> The EnhancedDoubleHasher currently computes the hash with the cube component 
> lagging by 1:
> {noformat}
> hash[i] = ( h1(x) - i*h2(x) - ((i-1)^3 - (i-1))/6 ) wrapped in [0, 
> bits){noformat}
> Correct this to the intended:
> {noformat}
> hash[i] = ( h1(x) - i*h2(x) - (i*i*i - i)/6 ) wrapped in [0, bits){noformat}
> This is a simple change in the current controlling loop from:
> {code:java}
> for (int i = 0; i < k; i++) { {code}
> to:
> {code:java}
> for (int i = 1; i <= k; i++) { {code}
>  
> Issue notified by Juan Manuel Gimeno Illa on the Commons dev mailing list 
> (see [https://lists.apache.org/thread/wjmwxzozrtf41ko9r0g7pzrrg11o923o]).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (CLI-335) Defining Default Properties documentation has errors.

2024-05-29 Thread Claude Warren (Jira)
Claude Warren created CLI-335:
-

 Summary: Defining Default Properties documentation has errors.
 Key: CLI-335
 URL: https://issues.apache.org/jira/browse/CLI-335
 Project: Commons CLI
  Issue Type: Bug
  Components: Documentation
Affects Versions: 1.8.0
Reporter: Claude Warren


  https://commons.apache.org/proper/commons-cli/properties.html  specifically 
links to the deprecated OptionBuilder class.  It should reference the 
Option.Builder (note the dot) class.

In addition, there are methods defined in Option.Builder that are not described 
in the properties document.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (COMPRESS-680) Do CVE-2024-25710 and CVE-2024-26308 involve 7z decompression?

2024-05-28 Thread Radar wen (Jira)
Radar wen created COMPRESS-680:
--

 Summary: Do CVE-2024-25710 and CVE-2024-26308 involve 7z 
decompression?
 Key: COMPRESS-680
 URL: https://issues.apache.org/jira/browse/COMPRESS-680
 Project: Commons Compress
  Issue Type: Bug
  Components: Archivers
Affects Versions: 1.21
Reporter: Radar wen


I cannot upgrade to the latest version due to historical issues.
Could you tell me whether CVE-2024-25710 and CVE-2024-26308 involve the 
SevenZArchiveEntry and SevenZFile classes used for 7z decompression?

org.apache.commons.compress.archivers.sevenz.SevenZArchiveEntry;
org.apache.commons.compress.archivers.sevenz.SevenZFile;



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (VFS-853) Due to double weak references the file listeners are not executed

2024-05-28 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/VFS-853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory resolved VFS-853.
-
Resolution: Fixed

PR merged.
TY [~b.eckenfels]!

> Due to double weak references the file listeners are not executed
> 
>
> Key: VFS-853
> URL: https://issues.apache.org/jira/browse/VFS-853
> Project: Commons VFS
>  Issue Type: Bug
>Reporter: Bernd Eckenfels
>Assignee: Bernd Eckenfels
>Priority: Major
> Fix For: 2.10.0
>
>
> On DelegatedFileObjects the listener is registered via a WeakReference 
> listener. The original code that did this had an (erroneous) duplication of 
> listeners. This leads to the problem that the "middle" listener is never 
> strongly referenced and is therefore immediately collected, which in turn 
> leads to the removal of the "outer" listener.
> I have added a test case that reproduces the problem and does not fail when 
> the duplication is removed.
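
To illustrate the mechanism in the description, here is a generic Java sketch 
(not VFS code; the Listener interface and class names are made up): when nothing 
holds a strong reference to the intermediate listener, the WeakReference is 
cleared by GC and events silently stop reaching the real listener.

{code:java}
import java.lang.ref.WeakReference;

// Minimal sketch of a weakly-referenced listener chain. If nothing holds a
// strong reference to "target", GC may clear the WeakReference and onEvent()
// becomes a silent no-op -- the failure mode described in this issue.
final class WeakListenerSketch {

    interface Listener {
        void onEvent(String event);
    }

    static final class WeakListener implements Listener {
        private final WeakReference<Listener> target;

        WeakListener(Listener target) {
            this.target = new WeakReference<>(target);
        }

        @Override
        public void onEvent(String event) {
            Listener listener = target.get();
            if (listener != null) {   // null once the referent has been collected
                listener.onEvent(event);
            }
        }
    }
}
{code}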



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (CLI-329) Support "Deprecated" CLI Options

2024-05-28 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/CLI-329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory resolved CLI-329.
-
Resolution: Fixed

> Support "Deprecated" CLI Options
> 
>
> Key: CLI-329
> URL: https://issues.apache.org/jira/browse/CLI-329
> Project: Commons CLI
>  Issue Type: New Feature
>Reporter: Eric Pugh
>Assignee: Gary D. Gregory
>Priority: Major
> Fix For: 1.7.0
>
>
> Per [https://lists.apache.org/thread/zj63psowkjvox3v3pr4zl7mdjtddk9zd] it 
> would be nice if as your CLI evolves you could mark a command line option as 
> deprecated.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (VFS-854) SoftRefFilesCache logs password when FS is closed

2024-05-28 Thread Bernd Eckenfels (Jira)


 [ 
https://issues.apache.org/jira/browse/VFS-854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bernd Eckenfels resolved VFS-854.
-
Resolution: Fixed

> SoftRefFilesCache logs password when FS is closed
> -
>
> Key: VFS-854
> URL: https://issues.apache.org/jira/browse/VFS-854
> Project: Commons VFS
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Andrey Turbanov
>Assignee: Bernd Eckenfels
>Priority: Major
> Fix For: 2.10.0
>
>
> We have DEBUG logging enabled for VFS to diagnose integration problems with 
> external connections.
> Unfortunately, this leads to clear-text passwords being logged if basic auth 
> is used with the SFTP/HTTP4S connectors:
> {noformat}
> D 240526 060013.993 [ScheduledIpfSynchronizer_Worker-1] SoftRefFilesCache - 
> Close FileSystem: http4s://mylogin:mypassw...@my.company.com:8443/
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (VFS-854) SoftRefFilesCache logs password when FS is closed

2024-05-28 Thread Bernd Eckenfels (Jira)


 [ 
https://issues.apache.org/jira/browse/VFS-854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bernd Eckenfels updated VFS-854:

Fix Version/s: 2.10.0

> SoftRefFilesCache logs password when FS is closed
> -
>
> Key: VFS-854
> URL: https://issues.apache.org/jira/browse/VFS-854
> Project: Commons VFS
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Andrey Turbanov
>Assignee: Bernd Eckenfels
>Priority: Major
> Fix For: 2.10.0
>
>
> We have DEBUG logging enabled for VFS to diagnose integration problems with 
> external connections.
> Unfortunately this logs the clear-text password if basic auth is used with the 
> SFTP/HTTP4S connectors
> {noformat}
> D 240526 060013.993 [ScheduledIpfSynchronizer_Worker-1] SoftRefFilesCache - 
> Close FileSystem: http4s://mylogin:mypassw...@my.company.com:8443/
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (VFS-854) SoftRefFilesCache logs password when FS is closed

2024-05-28 Thread Bernd Eckenfels (Jira)


 [ 
https://issues.apache.org/jira/browse/VFS-854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bernd Eckenfels updated VFS-854:

Assignee: Bernd Eckenfels

> SoftRefFilesCache logs password when FS is closed
> -
>
> Key: VFS-854
> URL: https://issues.apache.org/jira/browse/VFS-854
> Project: Commons VFS
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Andrey Turbanov
>Assignee: Bernd Eckenfels
>Priority: Major
>
> We have DEBUG logging enabled for VFS to diagnose integration problems with 
> external connections.
> Unfortunately this logs the clear-text password if basic auth is used with the 
> SFTP/HTTP4S connectors
> {noformat}
> D 240526 060013.993 [ScheduledIpfSynchronizer_Worker-1] SoftRefFilesCache - 
> Close FileSystem: http4s://mylogin:mypassw...@my.company.com:8443/
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (VFS-854) SoftRefFilesCache logs password when FS is closed

2024-05-28 Thread Bernd Eckenfels (Jira)


[ 
https://issues.apache.org/jira/browse/VFS-854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17850087#comment-17850087
 ] 

Bernd Eckenfels edited comment on VFS-854 at 5/28/24 4:30 PM:
--

> You should maybe provide the creds separately

Agreed, however I propose that this debug log be changed to 
getRootName().getFriendlyURL() instead. Same for the throw in SftpFileSystem 
getChannel.

Unfortunately we can't easily change that for AbstractFileObject.toString() 
anymore.


was (Author: b.eckenfels):
> You should maybe provide the creds separately

Agreed, however I propose that this debug log is changed to 
getRootName().getFriendlyURL() instead. Same for the throw in SftpFileSystem 
getChannel.

> SoftRefFilesCache logs password when FS is closed
> -
>
> Key: VFS-854
> URL: https://issues.apache.org/jira/browse/VFS-854
> Project: Commons VFS
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Andrey Turbanov
>Priority: Major
>
> We have DEBUG logging enabled for VFS to diagnose integration problems with 
> external connections.
> Unfortunately this logs the clear-text password if basic auth is used with the 
> SFTP/HTTP4S connectors
> {noformat}
> D 240526 060013.993 [ScheduledIpfSynchronizer_Worker-1] SoftRefFilesCache - 
> Close FileSystem: http4s://mylogin:mypassw...@my.company.com:8443/
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (CLI-334) Fix Javadoc pathing

2024-05-28 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/CLI-334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17850086#comment-17850086
 ] 

Gary D. Gregory commented on CLI-334:
-

PR merged. TY [~epugh].

> Fix Javadoc pathing
> ---
>
> Key: CLI-334
> URL: https://issues.apache.org/jira/browse/CLI-334
> Project: Commons CLI
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.8.0
>Reporter: Eric Pugh
>Priority: Minor
> Fix For: 1.8.1
>
>
> I found some urls on the site to the javadocs that aren't quite right...



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (CLI-334) Fix Javadoc pathing

2024-05-28 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/CLI-334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory resolved CLI-334.
-
Fix Version/s: 1.8.1
   Resolution: Fixed

> Fix Javadoc pathing
> ---
>
> Key: CLI-334
> URL: https://issues.apache.org/jira/browse/CLI-334
> Project: Commons CLI
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.8.0
>Reporter: Eric Pugh
>Priority: Minor
> Fix For: 1.8.1
>
>
> I found some urls on the site to the javadocs that aren't quite right...



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (VFS-854) SoftRefFilesCache logs password when FS is closed

2024-05-28 Thread Bernd Eckenfels (Jira)


[ 
https://issues.apache.org/jira/browse/VFS-854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17850087#comment-17850087
 ] 

Bernd Eckenfels commented on VFS-854:
-

> You should maybe provide the creds separately

Agreed, however I propose that this debug log be changed to 
getRootName().getFriendlyURL() instead. Same for the throw in SftpFileSystem 
getChannel.
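
For context, a minimal sketch of the proposed kind of log value, assuming the 
password-masking accessor on FileName is getFriendlyURI() as in released VFS 
versions (the name above may refer to the same method):
{code:java}
import org.apache.commons.vfs2.FileObject;
import org.apache.commons.vfs2.FileSystemException;
import org.apache.commons.vfs2.VFS;

public class FriendlyUriSketch {
    public static void main(String[] args) throws FileSystemException {
        // ram:// carries no credentials; for a root URI with user info such as
        // http4s://user:secret@host/, the "friendly" form masks the password,
        // which is what the proposal would log on "Close FileSystem".
        FileObject file = VFS.getManager().resolveFile("ram://demo/file.txt");
        System.out.println(file.getFileSystem().getRootName().getURI());
        System.out.println(file.getFileSystem().getRootName().getFriendlyURI());
    }
}
{code}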

> SoftRefFilesCache logs password when FS is closed
> -
>
> Key: VFS-854
> URL: https://issues.apache.org/jira/browse/VFS-854
> Project: Commons VFS
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Andrey Turbanov
>Priority: Major
>
> We have DEBUG logging enabled for VFS to diagnose integration problems with 
> external connections.
> Unfortunately this logs the clear-text password if basic auth is used with the 
> SFTP/HTTP4S connectors
> {noformat}
> D 240526 060013.993 [ScheduledIpfSynchronizer_Worker-1] SoftRefFilesCache - 
> Close FileSystem: http4s://mylogin:mypassw...@my.company.com:8443/
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (CLI-334) Fix Javadoc pathing

2024-05-28 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/CLI-334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory updated CLI-334:

Summary: Fix Javadoc pathing  (was: Some bad javadoc links)

> Fix Javadoc pathing
> ---
>
> Key: CLI-334
> URL: https://issues.apache.org/jira/browse/CLI-334
> Project: Commons CLI
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.8.0
>Reporter: Eric Pugh
>Priority: Minor
>
> I found some urls on the site to the javadocs that aren't quite right...



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (VFS-854) SoftRefFilesCache logs password when FS is closed

2024-05-28 Thread Michael Osipov (Jira)


[ 
https://issues.apache.org/jira/browse/VFS-854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17850081#comment-17850081
 ] 

Michael Osipov commented on VFS-854:


Well, it does log the URL, not the password. You should maybe provide the creds 
separately. Using auth info is deprecated in recent RFCs.

> SoftRefFilesCache logs password when FS is closed
> -
>
> Key: VFS-854
> URL: https://issues.apache.org/jira/browse/VFS-854
> Project: Commons VFS
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Andrey Turbanov
>Priority: Major
>
> We have DEBUG logging enabled for VFS to diagnose integration problems with 
> external connections.
> Unfortunately this logs the clear-text password if basic auth is used with the 
> SFTP/HTTP4S connectors
> {noformat}
> D 240526 060013.993 [ScheduledIpfSynchronizer_Worker-1] SoftRefFilesCache - 
> Close FileSystem: http4s://mylogin:mypassw...@my.company.com:8443/
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (CLI-334) Some bad javadoc links

2024-05-28 Thread Eric Pugh (Jira)
Eric Pugh created CLI-334:
-

 Summary: Some bad javadoc links
 Key: CLI-334
 URL: https://issues.apache.org/jira/browse/CLI-334
 Project: Commons CLI
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 1.8.0
Reporter: Eric Pugh


I found some urls on the site to the javadocs that aren't quite right...



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (VFS-854) SoftRefFilesCache logs password when FS is closed

2024-05-28 Thread Andrey Turbanov (Jira)
Andrey Turbanov created VFS-854:
---

 Summary: SoftRefFilesCache logs password when FS is closed
 Key: VFS-854
 URL: https://issues.apache.org/jira/browse/VFS-854
 Project: Commons VFS
  Issue Type: Bug
Affects Versions: 2.9.0
Reporter: Andrey Turbanov


We have DEBUG logging enabled for VFS to diagnose integration problems with 
external connections.
Unfortunately this logs the clear-text password if basic auth is used with the 
SFTP/HTTP4S connectors
{noformat}
D 240526 060013.993 [ScheduledIpfSynchronizer_Worker-1] SoftRefFilesCache - 
Close FileSystem: http4s://mylogin:mypassw...@my.company.com:8443/
{noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (VFS-853) Due to double weak references the file listeners are not executed

2024-05-27 Thread Bernd Eckenfels (Jira)
Bernd Eckenfels created VFS-853:
---

 Summary: Due to double weak references the file listeners are not 
executed
 Key: VFS-853
 URL: https://issues.apache.org/jira/browse/VFS-853
 Project: Commons VFS
  Issue Type: Bug
Reporter: Bernd Eckenfels
Assignee: Bernd Eckenfels
 Fix For: 2.10.0


On DelegatedFileObjects the Listener is registered with a WeakReference 
listener. The original code which did that has an (erroneous) duplication of 
listeners. This leads to the problem that the "middle" listener is never 
strongly referenced and is therefore immediately collected, which in turn leads 
to removal of the "outer" listener.

I have added a test case which reproduces the problem and does not fail when 
the duplication is removed.
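
As an aside, a self-contained sketch (plain Java, not the VFS code) of why an event 
stops being delivered once the delegate in the middle is only weakly reachable:
{code:java}
import java.lang.ref.WeakReference;

public class DoubleWeakSketch {
    interface Listener { void onEvent(String event); }

    // Forwards to a listener that is held only through a WeakReference.
    static class WeakDelegate implements Listener {
        private final WeakReference<Listener> target;
        WeakDelegate(Listener target) { this.target = new WeakReference<>(target); }
        public void onEvent(String event) {
            Listener l = target.get();
            if (l != null) {
                l.onEvent(event);
            } // else: the target was collected and the event is silently dropped
        }
    }

    public static void main(String[] args) {
        Listener outer = e -> System.out.println("got " + e); // strongly held by the caller
        Listener middle = new WeakDelegate(outer);             // wraps the real listener...
        Listener registered = new WeakDelegate(middle);        // ...and is itself only weakly referenced
        middle = null;                                         // nothing strong keeps the middle alive
        System.gc();                                           // hint only; collection timing is not guaranteed
        registered.onEvent("change");                          // likely prints nothing: the chain is broken
    }
}
{code}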



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NET-730) Cannot connect to FTP server with HTTP proxy

2024-05-27 Thread Johannes Thalmair (Jira)


[ 
https://issues.apache.org/jira/browse/NET-730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17849656#comment-17849656
 ] 

Johannes Thalmair commented on NET-730:
---

I tested with the snapshot and it works again. Thank you.

> Cannot connect to FTP server with HTTP proxy
> 
>
> Key: NET-730
> URL: https://issues.apache.org/jira/browse/NET-730
> Project: Commons Net
>  Issue Type: Bug
>  Components: FTP
>Affects Versions: 3.10.0
>Reporter: Johannes Thalmair
>Assignee: Gary D. Gregory
>Priority: Major
> Fix For: 3.11.0
>
>
> After updating from Commons Net 3.9.0 to 3.10.0, I can no longer connect to 
> an FTP server through an HTTP proxy that requires authorization. Sadly I do not 
> have direct access to that server and don't know which proxy is running 
> there. An attempt to connect just blocks for 5 minutes and then fails with an 
> IOException: No response from proxy
>     at org.apache.commons.net.ftp.FTPHTTPClient.tunnelHandshake(FTPHTTPClient.java:209)
>     at org.apache.commons.net.ftp.FTPHTTPClient.connect(FTPHTTPClient.java:173)
>  
> I'm using org.apache.commons.net.ftp.FTPHTTPClient for connecting and have 
> already done some debugging. The change that causes my problem is the switch 
> from the deprecated 
> org.apache.commons.net.util.Base64.{{encodeToString(byte[])}} to 
> java.util.Base64.getEncoder().{{encodeToString(byte[])}} to encode the 
> Proxy-Authorization header in FTPHTTPClient.tunnelHandshake() (see 
> [https://github.com/apache/commons-net/commit/396bade29ad98d20a2c039ac561db56b63018b39]).
> The old encoding method added a CRLF ("\r\n") to the end of the String, 
> while the new one does not. This specific proxy seems to expect it; I don't 
> know if others do, too.
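
To make the difference concrete, a standalone sketch (not the commons-net code) of 
the header line with and without the trailing CRLF described above; "user:password" 
is a placeholder:
{code:java}
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ProxyAuthHeaderSketch {
    public static void main(String[] args) {
        String credentials = "user:password";
        String token = Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
        // What java.util.Base64 produces: the value only, no trailing line break.
        String newStyle = "Proxy-Authorization: Basic " + token;
        // What the report says the deprecated encoder effectively produced: the same
        // value followed by CRLF, which this particular proxy appears to expect.
        String oldStyle = "Proxy-Authorization: Basic " + token + "\r\n";
        System.out.println(newStyle.endsWith("\r\n")); // false
        System.out.println(oldStyle.endsWith("\r\n")); // true
    }
}
{code}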



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (NET-730) Cannot connect to FTP server with HTTP proxy

2024-05-26 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/NET-730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory resolved NET-730.
-
Fix Version/s: 3.11.0
   Resolution: Fixed

[~jthalmai]

Fixed in git master and snapshot builds in 
https://repository.apache.org/content/repositories/snapshots/commons-net/commons-net/3.11.0-SNAPSHOT/

Please verify with your use case and let me know.



> Cannot connect to FTP server with HTTP proxy
> 
>
> Key: NET-730
> URL: https://issues.apache.org/jira/browse/NET-730
> Project: Commons Net
>  Issue Type: Bug
>  Components: FTP
>Affects Versions: 3.10.0
>Reporter: Johannes Thalmair
>Assignee: Gary D. Gregory
>Priority: Major
> Fix For: 3.11.0
>
>
> After updating from Commons Net 3.9.0 to 3.10.0, I can no longer connect to 
> an FTP server through an HTTP proxy that requires authorization. Sadly I do not 
> have direct access to that server and don't know which proxy is running 
> there. An attempt to connect just blocks for 5 minutes and then fails with an 
> IOException: No response from proxy
>     at org.apache.commons.net.ftp.FTPHTTPClient.tunnelHandshake(FTPHTTPClient.java:209)
>     at org.apache.commons.net.ftp.FTPHTTPClient.connect(FTPHTTPClient.java:173)
>  
> I'm using org.apache.commons.net.ftp.FTPHTTPClient for connecting and have 
> already done some debugging. The change that causes my problem is the switch 
> from the deprecated 
> org.apache.commons.net.util.Base64.{{encodeToString(byte[])}} to 
> java.util.Base64.getEncoder().{{encodeToString(byte[])}} to encode the 
> Proxy-Authorization header in FTPHTTPClient.tunnelHandshake() (see 
> [https://github.com/apache/commons-net/commit/396bade29ad98d20a2c039ac561db56b63018b39]).
> The old encoding method added a CRLF ("\r\n") to the end of the String, 
> while the new one does not. This specific proxy seems to expect it; I don't 
> know if others do, too.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NET-730) Cannot connect to FTP server with HTTP proxy

2024-05-26 Thread Gary D. Gregory (Jira)


 [ 
https://issues.apache.org/jira/browse/NET-730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary D. Gregory updated NET-730:

Assignee: Gary D. Gregory

> Cannot connect to FTP server with HTTP proxy
> 
>
> Key: NET-730
> URL: https://issues.apache.org/jira/browse/NET-730
> Project: Commons Net
>  Issue Type: Bug
>  Components: FTP
>Affects Versions: 3.10.0
>Reporter: Johannes Thalmair
>Assignee: Gary D. Gregory
>Priority: Major
>
> After updating from Commons Net 3.9.0 to 3.10.0, I can no longer connect to 
> an FTP server through an HTTP proxy that requires authorization. Sadly I do not 
> have direct access to that server and don't know which proxy is running 
> there. An attempt to connect just blocks for 5 minutes and then fails with an 
> IOException: No response from proxy
>     at org.apache.commons.net.ftp.FTPHTTPClient.tunnelHandshake(FTPHTTPClient.java:209)
>     at org.apache.commons.net.ftp.FTPHTTPClient.connect(FTPHTTPClient.java:173)
>  
> I'm using org.apache.commons.net.ftp.FTPHTTPClient for connecting and have 
> already done some debugging. The change that causes my problem is the switch 
> from the deprecated 
> org.apache.commons.net.util.Base64.{{encodeToString(byte[])}} to 
> java.util.Base64.getEncoder().{{encodeToString(byte[])}} to encode the 
> Proxy-Authorization header in FTPHTTPClient.tunnelHandshake() (see 
> [https://github.com/apache/commons-net/commit/396bade29ad98d20a2c039ac561db56b63018b39]).
> The old encoding method added a CRLF ("\r\n") to the end of the String, 
> while the new one does not. This specific proxy seems to expect it; I don't 
> know if others do, too.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (TEXT-234) Improve StrBuilder documentation for new line text

2024-05-26 Thread TobiasKiecker (Jira)


 [ 
https://issues.apache.org/jira/browse/TEXT-234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

TobiasKiecker closed TEXT-234.
--
Resolution: Fixed

> Improve StrBuilder documentation for new line text
> --
>
> Key: TEXT-234
> URL: https://issues.apache.org/jira/browse/TEXT-234
> Project: Commons Text
>  Issue Type: Improvement
>Affects Versions: 1.12.0
>Reporter: TobiasKiecker
>Priority: Minor
>  Labels: documentation
>
> The method _setNewLineText_ in both _StrBuilder_ and _TextStringBuilder_ has 
> ambiguous documentation. If someone were to extend the class and override 
> _appendNewLine_, null would not be handled anymore.
> The docstring of _setNewLineText_ implies that THIS function does the null 
> handling, while in truth it is done in _appendNewLine_.
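
For reference, a minimal sketch of the behaviour in question, based on my reading of 
TextStringBuilder (the null fallback happens in appendNewLine(), not in 
setNewLineText()):
{code:java}
import org.apache.commons.text.TextStringBuilder;

public class NewLineTextSketch {
    public static void main(String[] args) {
        TextStringBuilder sb = new TextStringBuilder();
        sb.setNewLineText(null);  // stores null as-is; no substitution happens here
        sb.appendNewLine();       // appendNewLine() falls back to the system line separator
        System.out.println(sb.toString().equals(System.lineSeparator())); // expected: true
    }
}
{code}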



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (TEXT-234) Improve StrBuilder documentation for new line text

2024-05-26 Thread TobiasKiecker (Jira)


[ 
https://issues.apache.org/jira/browse/TEXT-234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17849514#comment-17849514
 ] 

TobiasKiecker commented on TEXT-234:


Yes, I think so.  I will close this and make a new one if we find anything 
else.  Thanks a lot.

> Improve StrBuilder documentation for new line text
> --
>
> Key: TEXT-234
> URL: https://issues.apache.org/jira/browse/TEXT-234
> Project: Commons Text
>  Issue Type: Improvement
>Affects Versions: 1.12.0
>Reporter: TobiasKiecker
>Priority: Minor
>  Labels: documentation
>
> The method _setNewLineText_ in both _StrBuilder_ and _TextStringBuilder_ has 
> ambiguous documentation. If someone were to extend the class and override 
> _appendNewLine_, null would not be handled anymore.
> The docstring of _setNewLineText_ implies that THIS function does the null 
> handling, while in truth it is done in _appendNewLine_.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (TEXT-217) Snake case utility method: CaseUtils.toSnakeCase(....)

2024-05-25 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/TEXT-217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17849453#comment-17849453
 ] 

Gary D. Gregory commented on TEXT-217:
--

Hi [~claude]

This could be both useful and a source of endless requests and/or bugs, 
which is why I would like to see exact format definitions or links to 
definitions. For example, for me, camel case starts with a lowercase letter, 
like a Java method name.

I have many projects at work that use custom converters, but I am not sure how 
this would fit in because I would prefer to see real tests rather than what is 
currently in PR 552. It also does not seem to match what I think of as camel case.

This is what I have custom converters for at work in different products:

The input can be any of XML Schema, WSDL (SOAP), Swagger 2, Open API 3.x, COBOL 
copybooks, database tables (table names and column names), and probably other 
specifications I can't recall. I've worked on many products! ;-)

Then, for example, I need to take an XML element name and make that a Java 
class name and/or a method name; the same for an XML attribute name. Another 
example is taking XML element and attribute names and turning those into 
Swagger 2 and Open API 3 keys. In the case of converting into XML or into Open 
API, it's not good enough for the names to be legal, they have to be "pretty", 
in the conventions of a format. In Open API, that's camel case starting with a 
lowercase letter. For XML, there are different conventions, so we pick one.



> Snake case utility method: CaseUtils.toSnakeCase()
> --
>
> Key: TEXT-217
>     URL: https://issues.apache.org/jira/browse/TEXT-217
> Project: Commons Text
>  Issue Type: New Feature
>Affects Versions: 1.9
>Reporter: Adil Iqbal
>Assignee: Claude Warren
>Priority: Major
> Fix For: 1.12.1
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Requesting a feature to convert any string to snake case, as per 
> CaseUtils.toCamelCase(...)
> *Rationale:*
> As per the OpenAPI Specification 3.0, keys should be in snake case. There is 
> currently no common utility that can be used to accomplish that task.
> Any interaction between Java and Python is hindered, since Python uses snake 
> case as a best practice.
> *Feature Set Requested:*
> All features currently included in CaseUtils.toCamelCase(...) sans 
> capitalization flag. As you know, the capitalization flag was implemented to 
> support PascalCase, which is a convention even in Java, for many situations. 
> There is no equivalent for snake case.
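
For illustration only, a hypothetical sketch of the requested conversion next to the 
existing CaseUtils.toCamelCase; the snakeCase() helper below is made up for this 
example and is not the code from PR 552:
{code:java}
import java.util.Locale;
import org.apache.commons.text.CaseUtils;

public class SnakeCaseSketch {
    // Insert '_' before each upper-case letter that follows a lower-case letter or
    // digit, then lower-case everything: "termsOfService" -> "terms_of_service".
    static String snakeCase(String input) {
        return input.replaceAll("([a-z0-9])([A-Z])", "$1_$2").toLowerCase(Locale.ROOT);
    }

    public static void main(String[] args) {
        System.out.println(CaseUtils.toCamelCase("terms of service", false, ' ')); // termsOfService
        System.out.println(snakeCase("termsOfService"));                           // terms_of_service
    }
}
{code}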



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (TEXT-217) Snake case utility method: CaseUtils.toSnakeCase(....)

2024-05-25 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/TEXT-217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17849452#comment-17849452
 ] 

Gary D. Gregory commented on TEXT-217:
--

Where does OpenAPI require snake case for keys?

I only see camel case in https://spec.openapis.org/oas/v3.1.0, for example 
"termsOfService".

> Snake case utility method: CaseUtils.toSnakeCase()
> --
>
> Key: TEXT-217
> URL: https://issues.apache.org/jira/browse/TEXT-217
> Project: Commons Text
>  Issue Type: New Feature
>Affects Versions: 1.9
>Reporter: Adil Iqbal
>Assignee: Claude Warren
>Priority: Major
> Fix For: 1.12.1
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Requesting a feature to convert any string to snake case, as per 
> CaseUtils.toCamelCase(...)
> *Rationale:*
> As per the OpenAPI Specification 3.0, keys should be in snake case. There is 
> currently no common utility that can be used to accomplish that task.
> Any interaction between Java and Python is hindered, since Python uses snake 
> case as a best practice.
> *Feature Set Requested:*
> All features currently included in CaseUtils.toCamelCase(...) sans 
> capitalization flag. As you know, the capitalization flag was implemented to 
> support PascalCase, which is a convention even in Java, for many situations. 
> There is no equivalent for snake case.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (TEXT-217) Snake case utility method: CaseUtils.toSnakeCase(....)

2024-05-25 Thread Claude Warren (Jira)


 [ 
https://issues.apache.org/jira/browse/TEXT-217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Claude Warren updated TEXT-217:
---
Fix Version/s: 1.12.1
   (was: 1.9)

> Snake case utility method: CaseUtils.toSnakeCase()
> --
>
> Key: TEXT-217
> URL: https://issues.apache.org/jira/browse/TEXT-217
> Project: Commons Text
>  Issue Type: New Feature
>Affects Versions: 1.9
>Reporter: Adil Iqbal
>Assignee: Claude Warren
>Priority: Major
> Fix For: 1.12.1
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Requesting a feature to convert any string to snake case, as per 
> CaseUtils.toCamelCase(...)
> *Rationale:*
> As per the OpenAPI Specification 3.0, keys should be in snake case. There is 
> currently no common utility that can be used to accomplish that task.
> Any interaction between Java and Python is hindered, since Python uses snake 
> case as a best practice.
> *Feature Set Requested:*
> All features currently included in CaseUtils.toCamelCase(...) sans 
> capitalization flag. As you know, the capitalization flag was implemented to 
> support PascalCase, which is a convention even in Java, for many situations. 
> There is no equivalent for snake case.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (TEXT-217) Snake case utility method: CaseUtils.toSnakeCase(....)

2024-05-25 Thread Claude Warren (Jira)


[ 
https://issues.apache.org/jira/browse/TEXT-217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17849450#comment-17849450
 ] 

Claude Warren commented on TEXT-217:


Pull request #552 solves the problem by creating a generic converter that 
should be able to handle any conversion that can be done with look-ahead 
parsing.

> Snake case utility method: CaseUtils.toSnakeCase()
> --
>
> Key: TEXT-217
> URL: https://issues.apache.org/jira/browse/TEXT-217
> Project: Commons Text
>  Issue Type: New Feature
>Affects Versions: 1.9
>Reporter: Adil Iqbal
>Assignee: Claude Warren
>Priority: Major
> Fix For: 1.9
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Requesting a feature to convert any string to snake case, as per 
> CaseUtils.toCamelCase(...)
> *Rationale:*
> As per the OpenAPI Specification 3.0, keys should be in snake case. There is 
> currently no common utility that can be used to accomplish that task.
> Any interaction between Java and Python is hindered, since Python uses snake 
> case as a best practice.
> *Feature Set Requested:*
> All features currently included in CaseUtils.toCamelCase(...) sans 
> capitalization flag. As you know, the capitalization flag was implemented to 
> support PascalCase, which is a convention even in Java, for many situations. 
> There is no equivalent for snake case.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (TEXT-217) Snake case utility method: CaseUtils.toSnakeCase(....)

2024-05-25 Thread Claude Warren (Jira)


 [ 
https://issues.apache.org/jira/browse/TEXT-217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Claude Warren updated TEXT-217:
---
Assignee: Claude Warren

> Snake case utility method: CaseUtils.toSnakeCase()
> --
>
> Key: TEXT-217
> URL: https://issues.apache.org/jira/browse/TEXT-217
> Project: Commons Text
>  Issue Type: New Feature
>Affects Versions: 1.9
>Reporter: Adil Iqbal
>Assignee: Claude Warren
>Priority: Major
> Fix For: 1.9
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Requesting a feature to convert any string to snake case, as per 
> CaseUtils.toCamelCase(...)
> *Rationale:*
> As per the OpenAPI Specification 3.0, keys should be in snake case. There is 
> currently no common utility that can be used to accomplish that task.
> Any interaction between Java and Python is hindered, since Python uses snake 
> case as a best practice.
> *Feature Set Requested:*
> All features currently included in CaseUtils.toCamelCase(...) sans 
> capitalization flag. As you know, the capitalization flag was implemented to 
> support PascalCase, which is a convention even in Java, for many situations. 
> There is no equivalent for snake case.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (TEXT-234) Improve StrBuilder documentation for new line text

2024-05-24 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/TEXT-234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17849418#comment-17849418
 ] 

Gary D. Gregory edited comment on TEXT-234 at 5/25/24 2:13 AM:
---

Hello [~tobiaskiecker]
All set?



was (Author: garydgregory):
Helo [~tobiaskiecker]
All set?


> Improve StrBuilder documentation for new line text
> --
>
> Key: TEXT-234
> URL: https://issues.apache.org/jira/browse/TEXT-234
> Project: Commons Text
>  Issue Type: Improvement
>Affects Versions: 1.12.0
>Reporter: TobiasKiecker
>Priority: Minor
>  Labels: documentation
>
> The method _setNewLineText_ in both _StrBuilder_ and _TextStringBuilder_ has 
> ambiguous documentation. If someone were to extend the class and override 
> _appendNewLine_, null would not be handled anymore.
> The docstring of _setNewLineText_ implies that THIS function does the null 
> handling, while in truth it is done in _appendNewLine_.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (TEXT-234) Improve StrBuilder documentation for new line text

2024-05-24 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/TEXT-234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17849418#comment-17849418
 ] 

Gary D. Gregory commented on TEXT-234:
--

Helo [~tobiaskiecker]
All set?


> Improve StrBuilder documentation for new line text
> --
>
> Key: TEXT-234
> URL: https://issues.apache.org/jira/browse/TEXT-234
> Project: Commons Text
>  Issue Type: Improvement
>Affects Versions: 1.12.0
>Reporter: TobiasKiecker
>Priority: Minor
>  Labels: documentation
>
> The method _setNewLineText_ in both _StrBuilder_ and _TextStringBuilder_ has 
> ambiguous documentation. If someone were to extend the class and override 
> _appendNewLine_, null would not be handled anymore.
> The docstring of _setNewLineText_ implies that THIS function does the null 
> handling, while in truth it is done in _appendNewLine_.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (JEXL-422) Add strict equality (===) and inequality (!==) operators

2024-05-24 Thread Henri Biestro (Jira)


 [ 
https://issues.apache.org/jira/browse/JEXL-422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henri Biestro resolved JEXL-422.

Resolution: Fixed

[Committed|https://github.com/apache/commons-jexl/commit/b640ba6820eb07ffc23043f118a3497f64339df5]

> Add strict equality (===) and inequality (!==) operators
> 
>
> Key: JEXL-422
> URL: https://issues.apache.org/jira/browse/JEXL-422
> Project: Commons JEXL
>  Issue Type: New Feature
>Affects Versions: 3.3
>Reporter: Henri Biestro
>Assignee: Henri Biestro
>Priority: Minor
> Fix For: 3.4
>
>
> As in Javascript,  === is a comparison operator that checks the equality of 
> two values without performing any type conversion. This means that if the 
> values being compared have different data types, === will return false.
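
A minimal usage sketch, assuming a JEXL build that already contains the operator 
(3.4+); the '==' line relies on the default arithmetic's coercion:
{code:java}
import org.apache.commons.jexl3.JexlBuilder;
import org.apache.commons.jexl3.JexlEngine;
import org.apache.commons.jexl3.MapContext;

public class StrictEqualitySketch {
    public static void main(String[] args) {
        JexlEngine jexl = new JexlBuilder().create();
        MapContext ctx = new MapContext();
        // '==' typically coerces the string to a number; '===' does not convert types.
        System.out.println(jexl.createExpression("1 == '1'").evaluate(ctx));  // true
        System.out.println(jexl.createExpression("1 === '1'").evaluate(ctx)); // false
    }
}
{code}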



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (JEXL-423) Add support for instanceof / !instanceof

2024-05-24 Thread Henri Biestro (Jira)


 [ 
https://issues.apache.org/jira/browse/JEXL-423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henri Biestro resolved JEXL-423.

Resolution: Fixed

[Committed|https://github.com/apache/commons-jexl/commit/b640ba6820eb07ffc23043f118a3497f64339df5]

> Add support for instanceof / !instanceof
> 
>
> Key: JEXL-423
> URL: https://issues.apache.org/jira/browse/JEXL-423
> Project: Commons JEXL
>  Issue Type: New Feature
>Affects Versions: 3.3
>Reporter: Henri Biestro
>Assignee: Henri Biestro
>Priority: Minor
> Fix For: 3.4
>
>
> The *instanceof* operator allows checking whether an object belongs to a 
> certain class.
> It uses Class.isInstance to perform the check. As a convenience, 
> {{!instanceof}} is supported as well, avoiding parentheses, so that
> {code:java}x !instanceof y{code} is equivalent to {code:java}!(x instanceof 
> y){code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (JEXL-396) Add explicit Java module descriptor

2024-05-24 Thread Henri Biestro (Jira)


 [ 
https://issues.apache.org/jira/browse/JEXL-396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henri Biestro resolved JEXL-396.

Resolution: Fixed

> Add explicit Java module descriptor
> ---
>
> Key: JEXL-396
> URL: https://issues.apache.org/jira/browse/JEXL-396
> Project: Commons JEXL
>  Issue Type: Improvement
>Affects Versions: 3.3
>Reporter: Andres Almiray
>Assignee: Henri Biestro
>Priority: Major
> Fix For: 3.3.1
>
>
> Follow up from a 
> [topic|https://lists.apache.org/thread/kxcwqyx026rhhx4v8q8bkbljj7lw8c32] 
> started at the mailing list.
> Henri suggested using the ModiTect plugin guarded by a profile with JDK 
> activation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (JEXL-235) Verify JexlScriptEngineFactory.{getLanguageVersion,getEngineVersion} before release

2024-05-24 Thread Henri Biestro (Jira)


[ 
https://issues.apache.org/jira/browse/JEXL-235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17849306#comment-17849306
 ] 

Henri Biestro commented on JEXL-235:


Checked [3.4|https://github.com/apache/commons-jexl/commit/b640ba6820eb07ffc23043f118a3497f64339df5].

> Verify JexlScriptEngineFactory.{getLanguageVersion,getEngineVersion} before 
> release
> ---
>
> Key: JEXL-235
> URL: https://issues.apache.org/jira/browse/JEXL-235
> Project: Commons JEXL
>  Issue Type: Task
>Affects Versions: 3.2
>Reporter: Henri Biestro
>Assignee: Henri Biestro
>Priority: Major
> Fix For: Later
>
>
> JexlScriptEngineFactory.getLanguageVersion and 
> JexlScriptEngineFactory.getEngineVersion should reflect the syntax version 
> and the engine version respectively.
> As a rule, any new operator or syntax should bump the language version, any 
> release should update the engine version that should match the jar version.
> (see JEXL-227 for discussion on the issue).
> This task must be checked for each version.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (JEXL-397) Dynamic proxy should not require specific permission

2024-05-24 Thread Henri Biestro (Jira)


 [ 
https://issues.apache.org/jira/browse/JEXL-397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henri Biestro updated JEXL-397:
---
Fix Version/s: 3.3.1
   (was: 3.4)

> Dynamic proxy should not require specific permission
> 
>
> Key: JEXL-397
> URL: https://issues.apache.org/jira/browse/JEXL-397
> Project: Commons JEXL
>  Issue Type: Bug
>Affects Versions: 3.3
>Reporter: Henri Biestro
>Assignee: Henri Biestro
>Priority: Minor
> Fix For: 3.3.1
>
>
> With the default restricted permissions, dynamic proxies can not be 
> introspected since they extend java.lang.reflect.Proxy whose package is 
> denied.
> A workaround is to explicitly allow them as in:
> {code:java}
> JexlPermissions p = new JexlPermissions.Delegate(JexlPermissions.RESTRICTED) {
>   @Override public boolean allow(Class clazz) {
> return Proxy.isProxyClass(clazz) || super.allow(clazz);
>   }
> };
> {code}
> This workaround should not be necessary.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (JEXL-396) Add explicit Java module descriptor

2024-05-24 Thread Henri Biestro (Jira)


 [ 
https://issues.apache.org/jira/browse/JEXL-396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henri Biestro updated JEXL-396:
---
Fix Version/s: 3.3.1
   (was: 3.4)

> Add explicit Java module descriptor
> ---
>
> Key: JEXL-396
> URL: https://issues.apache.org/jira/browse/JEXL-396
> Project: Commons JEXL
>  Issue Type: Improvement
>Affects Versions: 3.3
>Reporter: Andres Almiray
>Assignee: Henri Biestro
>Priority: Major
> Fix For: 3.3.1
>
>
> Follow up from a 
> [topic|https://lists.apache.org/thread/kxcwqyx026rhhx4v8q8bkbljj7lw8c32] 
> started at the mailing list.
> Henri suggested using the ModiTect plugin guarded by a profile with JDK 
> activation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (JEXL-396) Add explicit Java module descriptor

2024-05-24 Thread Henri Biestro (Jira)


 [ 
https://issues.apache.org/jira/browse/JEXL-396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henri Biestro updated JEXL-396:
---
Fix Version/s: 3.4

> Add explicit Java module descriptor
> ---
>
> Key: JEXL-396
> URL: https://issues.apache.org/jira/browse/JEXL-396
> Project: Commons JEXL
>  Issue Type: Improvement
>Affects Versions: 3.3
>Reporter: Andres Almiray
>Assignee: Henri Biestro
>Priority: Major
> Fix For: 3.4
>
>
> Follow up from a 
> [topic|https://lists.apache.org/thread/kxcwqyx026rhhx4v8q8bkbljj7lw8c32] 
> started at the mailing list.
> Henri suggested using the ModiTect plugin guarded by a profile with JDK 
> activation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (JEXL-423) Add support for instanceof / !instanceof

2024-05-24 Thread Henri Biestro (Jira)


 [ 
https://issues.apache.org/jira/browse/JEXL-423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henri Biestro updated JEXL-423:
---
Description: 
The *instanceof*  operator allows to check whether an object belongs to a 
certain class.
It is using Class.isInstance to perform the check. As a convenience, {{ 
!instanceof }} is supported as well avoiding parenthesis as in:
{code:java}x !instanceof y{code} is equivalent to  {code:java} !(x instanceof 
y){code}


  was:
The  {code:java} instanceof {code}
 operator allows to check whether an object belongs to a certain class.
It is using Class.isInstance to perform the check. As a convenience, {{ 
!instanceof }} is supported as well avoiding parenthesis as in:
{code:java}x !instanceof y{code} is equivalent to  {code:java} !(x instanceof 
y){code}



> Add support for instanceof / !instanceof
> 
>
> Key: JEXL-423
> URL: https://issues.apache.org/jira/browse/JEXL-423
> Project: Commons JEXL
>  Issue Type: New Feature
>Affects Versions: 3.3
>Reporter: Henri Biestro
>Assignee: Henri Biestro
>Priority: Minor
> Fix For: 3.3.1
>
>
> The *instanceof*  operator allows to check whether an object belongs to a 
> certain class.
> It is using Class.isInstance to perform the check. As a convenience, {{ 
> !instanceof }} is supported as well avoiding parenthesis as in:
> {code:java}x !instanceof y{code} is equivalent to  {code:java} !(x instanceof 
> y){code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (JEXL-423) Add support for instanceof / !instanceof

2024-05-24 Thread Henri Biestro (Jira)


 [ 
https://issues.apache.org/jira/browse/JEXL-423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henri Biestro updated JEXL-423:
---
Description: 
The  {code:java} instanceof {code}
 operator allows to check whether an object belongs to a certain class.
It is using Class.isInstance to perform the check. As a convenience, {{ 
!instanceof }} is supported as well avoiding parenthesis as in:
{code:java}x !instanceof y{code} is equivalent to  {code:java} !(x instanceof 
y){code}


  was:
The {{instanceof}} operator allows to check whether an object belongs to a 
certain class.
It is using Class.isInstance to perform the check. As a convenience, 
{{!instanceof}} is supported as well avoiding parenthesis as in:
{{x !instanceof y}} is equivalent to {{!(x instanceof y)}}


> Add support for instanceof / !instanceof
> 
>
> Key: JEXL-423
> URL: https://issues.apache.org/jira/browse/JEXL-423
> Project: Commons JEXL
>  Issue Type: New Feature
>Affects Versions: 3.3
>Reporter: Henri Biestro
>Assignee: Henri Biestro
>Priority: Minor
> Fix For: 3.3.1
>
>
> The  {code:java} instanceof {code}
>  operator allows to check whether an object belongs to a certain class.
> It is using Class.isInstance to perform the check. As a convenience, {{ 
> !instanceof }} is supported as well avoiding parenthesis as in:
> {code:java}x !instanceof y{code} is equivalent to  {code:java} !(x instanceof 
> y){code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (JEXL-423) Add support for instanceof / !instanceof

2024-05-24 Thread Henri Biestro (Jira)
Henri Biestro created JEXL-423:
--

 Summary: Add support for instanceof / !instanceof
 Key: JEXL-423
 URL: https://issues.apache.org/jira/browse/JEXL-423
 Project: Commons JEXL
  Issue Type: New Feature
Affects Versions: 3.3
Reporter: Henri Biestro
Assignee: Henri Biestro
 Fix For: 3.3.1


The {{instanceof}} operator allows checking whether an object belongs to a 
certain class.
It uses Class.isInstance to perform the check. As a convenience, 
{{!instanceof}} is supported as well, avoiding parentheses, so that
{{x !instanceof y}} is equivalent to {{!(x instanceof y)}}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (JEXL-422) Add strict equality (===) and inequality (!==) operators

2024-05-24 Thread Henri Biestro (Jira)
Henri Biestro created JEXL-422:
--

 Summary: Add strict equality (===) and inequality (!==) operators
 Key: JEXL-422
 URL: https://issues.apache.org/jira/browse/JEXL-422
 Project: Commons JEXL
  Issue Type: New Feature
Affects Versions: 3.3
Reporter: Henri Biestro
Assignee: Henri Biestro
 Fix For: 3.3.1


As in Javascript,  === is a comparison operator that checks the equality of two 
values without performing any type conversion. This means that if the values 
being compared have different data types, === will return false.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IO-808) FileUtils.moveFile, copyFile and others can throw undocumented IllegalArgumentException

2024-05-24 Thread Elliotte Rusty Harold (Jira)


[ 
https://issues.apache.org/jira/browse/IO-808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17849258#comment-17849258
 ] 

Elliotte Rusty Harold commented on IO-808:
--

A commons library is not the place to litigate language design questions. Nor 
is it a Kotlin library; it's a Java one, so I don't really see what the point 
of the Stack Overflow thread is.

For good developer experience, it's essential that libraries follow the design, 
idioms, and patterns of the language they're written for, in this case Java. 
Breaking with established and deliberate Java practice on anything — not just 
exceptions, but object construction, multithreading, access protection, package 
naming, and a hundred other things — is a fatal and all too common flaw among 
library maintainers who think they know better than the developers who designed 
the language. Maybe they do, but it doesn't matter. Moving away from what 
developers have learned to expect in a language leads directly to bugs and 
developer pain.

Java libraries either use the Java language and its standard libraries, 
including IOException and IllegalArgumentException, the way they were designed 
to be used, or the library becomes hard to use and bug prone. 

On exceptions in particular, see Section 10 of Effective Java, 3rd edition, 
particularly Item 72: Favor the use of standard exceptions.

> FileUtils.moveFile, copyFile and others can throw undocumented 
> IllegalArgumentException
> --
>
> Key: IO-808
> URL: https://issues.apache.org/jira/browse/IO-808
> Project: Commons IO
>  Issue Type: Bug
>  Components: Utilities
>Affects Versions: 2.12.0
> Environment: Windows 10
>Reporter: Phil D
>Priority: Major
> Attachments: MakyAckyBreaky.java, TestMoveFileIAE.java
>
>
> Several of the functions in FileUtils, such as moveFile and copyFile, throw an 
> undocumented IllegalArgumentException.
> If the desire is to maintain backwards compatibility with the 1.4 branch for 
> these functions, then the 2.12 (and 2.13) versions are throwing 
> IllegalArgumentException in cases  where 1.4 is not.  In fact, it seems like 
> 1.4 was coded to specifically avoid IllegalArgumentException and throws 
> IOExceptions instead.
> There are several different cases where this is possible.  In the most basic, 
> I've attached TestMoveFileIAE, where this can be reproduced by simply running:
> {code:bash}
> mkdir one
> java -cp  TestMoveFileIAE one two
> Exception in thread "main" java.lang.IllegalArgumentException: Parameter 
> 'srcFile' is not a file: one
> at org.apache.commons.io.FileUtils.requireFile(FileUtils.java:2824)
> at org.apache.commons.io.FileUtils.moveFile(FileUtils.java:2395)
> at org.apache.commons.io.FileUtils.moveFile(FileUtils.java:2374)
> at TestMoveFileIAE.main(TestMoveFileIAE.java:13)
> {code}
> In a less likely scenario (which is how I found this issue, because it 
> happened on a production system): if the srcFile is removed at a certain 
> point during moveFile() execution, then IllegalArgumentException is thrown:
> https://github.com/apache/commons-io/blob/master/src/main/java/org/apache/commons/io/FileUtils.java#L2392
> {code:java}
> 2392public static void moveFile(final File srcFile, final File destFile, 
> final CopyOption... copyOptions) throws IOException {
> 2393validateMoveParameters(srcFile, destFile); // checks srcFile.exists()
>   ///  srcFile deleted here!!!
> 2394requireFile(srcFile, "srcFile");   // checks srcFile.isFile() 
> and throws IAE
> 2395requireAbsent(destFile, "destFile");
>   ///  srcFile could also be deleted here 
> 2396... // renameTo or copyFile() which also calls requireCopyFile() and 
> requireFile()
> {code}
> This pattern of calling validateMoveParameters() and requireFile() will throw 
> IllegalArgumentException whenever the srcFile is removed between 
> validateMoveParameters() and requireFile(), or between requireFileCopy() and 
> requireFile().
> Ideally, the 2.x versions of FileUtils would be backwards compatible with 1.x 
> and would throw IOException (or one of its derivatives) rather than 
> IllegalArgumentException. IAE is an unchecked exception and can cause 
> unexpected issues.
> I would also suggest that unit tests be created to ensure that these 
> functions behave as expected in error conditions.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (CLI-321) Add and use a Converter interface and implementations without using BeanUtils

2024-05-23 Thread Claude Warren (Jira)


 [ 
https://issues.apache.org/jira/browse/CLI-321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Claude Warren updated CLI-321:
--
Description: The current TypeHandler implementation notes indicate that the 
BeanUtils.Converters should be used to create instances of the various types.  
This issue is to complete the implementation of TypeHandler so that it does NOT 
use the BeanUtils.Converters.  (was: The current TypeHandler implementation 
notes indicate that the BeanUtils.Converters should be used to create instances 
of the various types.  This issue is to complete the implementation of 
TypeHandler so that it uses the BeanUtils.Converters.)

> Add and use a Converter interface and implementations without using BeanUtils 
> --
>
> Key: CLI-321
> URL: https://issues.apache.org/jira/browse/CLI-321
> Project: Commons CLI
>  Issue Type: Improvement
>  Components: Parser
>Affects Versions: 1.6.0
>Reporter: Claude Warren
>Assignee: Claude Warren
>Priority: Minor
> Fix For: 1.7.0
>
>
> The current TypeHandler implementation notes indicate that the 
> BeanUtils.Converters should be used to create instances of the various types. 
>  This issue is to complete the implementation of TypeHandler so that it does 
> NOT use the BeanUtils.Converters.
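
For illustration, a minimal sketch assuming the Converter added for this issue is a 
functional interface shaped roughly like T apply(String) throws E; the exact 
signature may differ from the released code:
{code:java}
import org.apache.commons.cli.Converter;

public class ConverterSketch {
    public static void main(String[] args) {
        // A converter is just a function from the raw option string to a typed value,
        // with no BeanUtils dependency involved.
        Converter<Integer, NumberFormatException> toInt = Integer::valueOf;
        System.out.println(toInt.apply("42") + 1); // 43
    }
}
{code}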



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

