[jira] [Created] (MATH-1495) Calling NaturalRanking#rank() on an array of all NaNs throws a misleading ArrayIndexOutOfBoundsException when the NaNStrategy is REMOVED

2019-08-08 Thread Akash Srivastava (JIRA)
Akash Srivastava created MATH-1495:
--

 Summary: Calling NaturalRanking#rank() on an array of all NaNs 
throws a misleading ArrayIndexOutOfBoundsException when the NaNStrategy is 
REMOVED
 Key: MATH-1495
 URL: https://issues.apache.org/jira/browse/MATH-1495
 Project: Commons Math
  Issue Type: Bug
Reporter: Akash Srivastava


Consider the following code:
{code:java}
import org.apache.commons.math3.stat.ranking.NaNStrategy;
import org.apache.commons.math3.stat.ranking.NaturalRanking;
import org.apache.commons.math3.stat.ranking.TiesStrategy;
class tryit {
    private NaturalRanking naturalRanking;

    public double[] bug() {
        naturalRanking = new NaturalRanking(NaNStrategy.REMOVED, TiesStrategy.AVERAGE);
        return naturalRanking.rank(new double[] {Double.NaN, Double.NaN});
    }

    public static void main(String[] args) {
        tryit a = new tryit();
        double[] res = a.bug();
        System.out.println(res[0] + "," + res[1]);
    }
}
{code}
Compiled with: javac -cp /usr/share/java/commons-math3-3.6.1.jar tryit.java

Executed with: java -cp /usr/share/java/commons-math3-3.6.1.jar:. tryit

 

Output:
{code:java}
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 0
at 
org.apache.commons.math3.stat.ranking.NaturalRanking.rank(NaturalRanking.java:231)
at tryit.bug(tryit.java:9)
at tryit.main(tryit.java:14)
{code}
 

Currently, calling NaturalRanking#rank() on an array of all NaNs throws a 
misleading ArrayIndexOutOfBoundsException when the NaNStrategy is REMOVED. I am 
unsure what outcome the user should expect in code like the test case provided 
above. Can you shed some light on this? I am happy to write a pull request once 
I know which fix would be best. I think throwing an IllegalArgumentException or 
returning an empty array would be more appropriate in this case.
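
For example, a fix along the lines of the first option could look roughly like 
this (just a sketch; the real change would live inside NaturalRanking#rank after 
the REMOVED strategy has filtered out the NaNs, and the helper name here is made 
up):
{code:java}
// Hypothetical guard applied to the data that remains after NaN removal:
private static double[] rankAfterNaNRemoval(double[] filtered) {
    if (filtered.length == 0) {
        // Option 1: fail fast with a clear message instead of the current
        // ArrayIndexOutOfBoundsException.
        throw new IllegalArgumentException(
                "Input contains only NaN values and NaNStrategy is REMOVED");
        // Option 2 would instead be: return new double[0];
    }
    // ... normal ranking of the remaining values continues here ...
    return filtered;
}
{code}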



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (EXEC-110) Two issues in the tutorial of commons-exec

2019-08-08 Thread Lei Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/EXEC-110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Wang updated EXEC-110:
--
Description: 
I found two obvious issues in the "Unblock Your Execution" section of the 
commons-exec tutorial.

In the provided sample code, "commandLine.setSubstitutionMap(map);" uses the 
variable "commandLine", which differs from the "cmdLine" declared above.
{code:java}
CommandLine cmdLine = new CommandLine("AcroRd32.exe");
cmdLine.addArgument("/p");
cmdLine.addArgument("/h");
cmdLine.addArgument("${file}");
HashMap map = new HashMap();
map.put("file", new File("invoice.pdf"));
commandLine.setSubstitutionMap(map);
{code}
 

the code "int exitValue = resultHandler.waitFor();" also has a mistake. because 
with the api definition, the method of "waitFor()" have the "void" return type, 
maybe you want to invoke "getExitValue()" which will return int type.
{code:java}
int exitValue = resultHandler.waitFor();
{code}
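
For clarity, this is roughly how I would expect the corrected snippets to read 
(assuming the DefaultExecuteResultHandler instance named "resultHandler" from 
the same tutorial section):
{code:java}
CommandLine cmdLine = new CommandLine("AcroRd32.exe");
cmdLine.addArgument("/p");
cmdLine.addArgument("/h");
cmdLine.addArgument("${file}");
HashMap map = new HashMap();
map.put("file", new File("invoice.pdf"));
cmdLine.setSubstitutionMap(map);               // same variable name as declared above

// ...
resultHandler.waitFor();                       // waitFor() returns void
int exitValue = resultHandler.getExitValue();  // the exit code comes from getExitValue()
{code}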
 

Here are the links for your reference: 
 [http://commons.apache.org/proper/commons-exec/tutorial.html]
 [http://commons.apache.org/proper/commons-exec/apidocs/index.html]

  was:
I found two obvious issues in the "Unblock Your Execution" section of the 
commons-exec tutorial.

In the provided sample code, "commandLine.setSubstitutionMap(map);" uses the 
variable "commandLine", which differs from the "cmdLine" declared above.

The line "int exitValue = resultHandler.waitFor();" is also mistaken: according 
to the API, "waitFor()" has a "void" return type; you probably want to invoke 
"getExitValue()", which returns an int.

Here are the links for your reference: 
http://commons.apache.org/proper/commons-exec/tutorial.html
http://commons.apache.org/proper/commons-exec/apidocs/index.html


> Two issues in the tutorial of commons-exec
> --
>
> Key: EXEC-110
> URL: https://issues.apache.org/jira/browse/EXEC-110
> Project: Commons Exec
>  Issue Type: Bug
>Affects Versions: 1.3
>Reporter: Lei Wang
>Priority: Major
>
> I found two obvious issues in the "Unblock Your Execution" section of the 
> commons-exec tutorial.
> In the provided sample code, "commandLine.setSubstitutionMap(map);" uses the 
> variable "commandLine", which differs from the "cmdLine" declared above.
> {code:java}
> CommandLine cmdLine = new CommandLine("AcroRd32.exe");
> cmdLine.addArgument("/p");
> cmdLine.addArgument("/h");
> cmdLine.addArgument("${file}");
> HashMap map = new HashMap();
> map.put("file", new File("invoice.pdf"));
> commandLine.setSubstitutionMap(map);
> {code}
>  
> the code "int exitValue = resultHandler.waitFor();" also has a mistake. 
> because with the api definition, the method of "waitFor()" have the "void" 
> return type, maybe you want to invoke "getExitValue()" which will return int 
> type.
> {code:java}
> int exitValue = resultHandler.waitFor();
> {code}
>  
> Here are the links for your reference: 
>  [http://commons.apache.org/proper/commons-exec/tutorial.html]
>  [http://commons.apache.org/proper/commons-exec/apidocs/index.html]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (EXEC-110) Two issues in the tutorial of commons-exec

2019-08-08 Thread Lei Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/EXEC-110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Wang updated EXEC-110:
--
Description: 
I found two obvious issues in the "Unblock Your Execution" section of the 
commons-exec tutorial.

In the provided sample code, "commandLine.setSubstitutionMap(map);" uses the 
variable "commandLine", which differs from the "cmdLine" declared above.

The line "int exitValue = resultHandler.waitFor();" is also mistaken: according 
to the API, "waitFor()" has a "void" return type; you probably want to invoke 
"getExitValue()", which returns an int.

Here are the links for your reference: 
http://commons.apache.org/proper/commons-exec/tutorial.html
http://commons.apache.org/proper/commons-exec/apidocs/index.html

  was:
I found two obvious issues in the "Unblock Your Execution" section of the 
commons-exec tutorial.
In the provided sample code, "commandLine.setSubstitutionMap(map);" uses the 
variable "commandLine", which differs from the "cmdLine" declared above.
The line "int exitValue = resultHandler.waitFor();" is also mistaken: according 
to the API, "waitFor()" has a "void" return type; you probably want to invoke 
"getExitValue()", which returns an int.
Here are the links for your reference: 
http://commons.apache.org/proper/commons-exec/tutorial.html
http://commons.apache.org/proper/commons-exec/apidocs/index.html


> Two issues in the tutorial of commons-exec
> --
>
> Key: EXEC-110
> URL: https://issues.apache.org/jira/browse/EXEC-110
> Project: Commons Exec
>  Issue Type: Bug
>Affects Versions: 1.3
>Reporter: Lei Wang
>Priority: Major
>
> I found two obvious issues in the "Unblock Your Execution" section of the 
> commons-exec tutorial.
> In the provided sample code, "commandLine.setSubstitutionMap(map);" uses the 
> variable "commandLine", which differs from the "cmdLine" declared above.
> The line "int exitValue = resultHandler.waitFor();" is also mistaken: 
> according to the API, "waitFor()" has a "void" return type; you probably want 
> to invoke "getExitValue()", which returns an int.
> Here are the links for your reference: 
> http://commons.apache.org/proper/commons-exec/tutorial.html
> http://commons.apache.org/proper/commons-exec/apidocs/index.html



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (EXEC-110) Two issues in the tutorial of commons-exec

2019-08-08 Thread Lei Wang (JIRA)
Lei Wang created EXEC-110:
-

 Summary: Two issues in the tutorial of commons-exec
 Key: EXEC-110
 URL: https://issues.apache.org/jira/browse/EXEC-110
 Project: Commons Exec
  Issue Type: Bug
Affects Versions: 1.3
Reporter: Lei Wang


I found two obvious issues in the "Unblock Your Execution" section of the 
commons-exec tutorial.
In the provided sample code, "commandLine.setSubstitutionMap(map);" uses the 
variable "commandLine", which differs from the "cmdLine" declared above.
The line "int exitValue = resultHandler.waitFor();" is also mistaken: according 
to the API, "waitFor()" has a "void" return type; you probably want to invoke 
"getExitValue()", which returns an int.
Here are the links for your reference: 
http://commons.apache.org/proper/commons-exec/tutorial.html
http://commons.apache.org/proper/commons-exec/apidocs/index.html



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Comment Edited] (EMAIL-189) if Korean language is included in attachment file name, attachment file name is broken.

2019-08-08 Thread Lee Jun Gyun (JIRA)


[ 
https://issues.apache.org/jira/browse/EMAIL-189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903504#comment-16903504
 ] 

Lee Jun Gyun edited comment on EMAIL-189 at 8/9/19 2:21 AM:


When Korean characters are included in an attachment file name, the file name 
is broken. If commons-email.jar does not encode the file name, the file name is 
fine; I think that is because javax.mail-api.jar already handles the file name 
properly.

This is a guess without having looked closely at the JavaMail source, but I 
think the problem occurs when javax.mail-api.jar processes a file name that 
commons-email.jar has already encoded.

I posted a similar question on Stack Overflow, which may be a good reference:
 [https://stackoverflow.com/questions/57388873/why-attachment-file-name-broken-when-using-the-apache-email-library/57422337]

I think it would be nice to improve it with something like this:
{code:java}
public MultiPartEmail attach(
        final DataSource ds,
        String name,
        final String description,
        final String disposition,
        boolean isEncode)
    throws EmailException
{
    if (EmailUtils.isEmpty(name))
    {
        name = ds.getName();
    }
    final BodyPart bodyPart = createBodyPart();
    try
    {
        bodyPart.setDisposition(disposition);
        if (isEncode)
        {
            bodyPart.setFileName(MimeUtility.encodeText(name));
        }
        else
        {
            bodyPart.setFileName(name);
        }
        bodyPart.setDescription(description);
        bodyPart.setDataHandler(new DataHandler(ds));

        getContainer().addBodyPart(bodyPart);

        ...
        ...
{code}
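
Callers that hit the broken-name problem could then opt out of the extra 
encoding, for example (hypothetical usage; the isEncode parameter is the one 
proposed above):
{code:java}
// Uses org.apache.commons.mail.* plus javax.activation.FileDataSource.
MultiPartEmail email = new MultiPartEmail();
// ... host, from/to addresses and subject configured as usual ...

DataSource source = new FileDataSource(new File("한글testfile.txt"));
// Pass isEncode = false so commons-email leaves the name for javax.mail-api to handle.
email.attach(source, "한글testfile.txt", "Korean file name test",
        EmailAttachment.ATTACHMENT, false);
{code}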


was (Author: simple):
When Korean characters are included in an attachment file name, the file name 
is broken. If commons-email.jar does not encode the file name, the file name is 
fine; I think that is because javax.mail-api.jar already handles the file name 
properly.

This is a guess without having looked closely at the JavaMail source, but I 
think the problem occurs when javax.mail-api.jar processes a file name that 
commons-email.jar has already encoded.

I posted a similar question on Stack Overflow, which may be a good reference:
 [https://stackoverflow.com/questions/57388873/why-attachment-file-name-broken-when-using-the-apache-email-library/57422337#57422337]

I think it would be nice to improve it with something like this:
{code:java}
public MultiPartEmail attach(
        final DataSource ds,
        String name,
        final String description,
        final String disposition,
        boolean isEncode)
    throws EmailException
{
    if (EmailUtils.isEmpty(name))
    {
        name = ds.getName();
    }
    final BodyPart bodyPart = createBodyPart();
    try
    {
        bodyPart.setDisposition(disposition);
        if (isEncode)
        {
            bodyPart.setFileName(MimeUtility.encodeText(name));
        }
        else
        {
            bodyPart.setFileName(name);
        }
        bodyPart.setDescription(description);
        bodyPart.setDataHandler(new DataHandler(ds));

        getContainer().addBodyPart(bodyPart);

        ...
        ...
{code}

> if Korean language is included in attachment file name, attachment file name 
> is broken.
> ---
>
> Key: EMAIL-189
> URL: https://issues.apache.org/jira/browse/EMAIL-189
> Project: Commons Email
>  Issue Type: Bug
>Reporter: Lee Jun Gyun
>Priority: Major
> Fix For: 1.5
>
>
>  
> if Korean language is included in attachment file name, attachment file name 
> is broken.
> ex) 
> [Attachment name]
> *attachment name :* 
> 한글testfile한글한글한글한글testfile한글한글한글한글testfile한글한글한글한글testfile한글한글한글한글testfile한글한글한글한글testfile한글한글한글.txt
> *actual received attachment name in gmail :* 
> =_UTF-8_Q_=ED=95=9C=EA=B8=80testfile=ED=95=9C=EA=B8=80_= 
> =_UTF-8_Q_=ED=95=9C=EA=B8=80=ED=95=9C=EA=B8=80=ED=95=9C=EA=B8=80testfil_= 
> =_UTF-8_Q_e=ED=95=9C=EA=B8=80=ED=95=9C=EA=B8=80=E___ 
> ___filename_3=__D=95=9.txt_=
> [Debug mode]
> Content-Type: text/plain; charset=UTF-8; 
>   name*0="=?UTF-8?Q?=ED=95=9C=EA=B8=80testfile=ED=95=9C=EA=B8=80?=
>  =?U"; 
>   name*1="TF-8?Q?=ED=95=9C=EA=B8=80=ED=95=9C=EA=B8=80=ED=95=9C=EA=B8=8"; 
>   name*2="0testfil?=
>  =?UTF-8?Q?e=ED=95=9C=EA=B8=80=ED=95=9C=EA=B8=80=E"; 
>   

[jira] [Commented] (EMAIL-189) if Korean language is included in attachment file name, attachment file name is broken.

2019-08-08 Thread Lee Jun Gyun (JIRA)


[ 
https://issues.apache.org/jira/browse/EMAIL-189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903504#comment-16903504
 ] 

Lee Jun Gyun commented on EMAIL-189:


When Korean characters are included in an attachment file name, the file name 
is broken. If commons-email.jar does not encode the file name, the file name is 
fine; I think that is because javax.mail-api.jar already handles the file name 
properly.

This is a guess without having looked closely at the JavaMail source, but I 
think the problem occurs when javax.mail-api.jar processes a file name that 
commons-email.jar has already encoded.

I posted a similar question on Stack Overflow, which may be a good reference:
 [https://stackoverflow.com/questions/57388873/why-attachment-file-name-broken-when-using-the-apache-email-library/57422337#57422337]

I think it would be nice to improve it with something like this:
{code:java}
public MultiPartEmail attach(
        final DataSource ds,
        String name,
        final String description,
        final String disposition,
        boolean isEncode)
    throws EmailException
{
    if (EmailUtils.isEmpty(name))
    {
        name = ds.getName();
    }
    final BodyPart bodyPart = createBodyPart();
    try
    {
        bodyPart.setDisposition(disposition);
        if (isEncode)
        {
            bodyPart.setFileName(MimeUtility.encodeText(name));
        }
        else
        {
            bodyPart.setFileName(name);
        }
        bodyPart.setDescription(description);
        bodyPart.setDataHandler(new DataHandler(ds));

        getContainer().addBodyPart(bodyPart);

        ...
        ...
{code}

> if Korean language is included in attachment file name, attachment file name 
> is broken.
> ---
>
> Key: EMAIL-189
> URL: https://issues.apache.org/jira/browse/EMAIL-189
> Project: Commons Email
>  Issue Type: Bug
>Reporter: Lee Jun Gyun
>Priority: Major
> Fix For: 1.5
>
>
>  
> if Korean language is included in attachment file name, attachment file name 
> is broken.
> ex) 
> [Attachment name]
> *attachment name :* 
> 한글testfile한글한글한글한글testfile한글한글한글한글testfile한글한글한글한글testfile한글한글한글한글testfile한글한글한글한글testfile한글한글한글.txt
> *actual received attachment name in gmail :* 
> =_UTF-8_Q_=ED=95=9C=EA=B8=80testfile=ED=95=9C=EA=B8=80_= 
> =_UTF-8_Q_=ED=95=9C=EA=B8=80=ED=95=9C=EA=B8=80=ED=95=9C=EA=B8=80testfil_= 
> =_UTF-8_Q_e=ED=95=9C=EA=B8=80=ED=95=9C=EA=B8=80=E___ 
> ___filename_3=__D=95=9.txt_=
> [Debug mode]
> Content-Type: text/plain; charset=UTF-8; 
>   name*0="=?UTF-8?Q?=ED=95=9C=EA=B8=80testfile=ED=95=9C=EA=B8=80?=
>  =?U"; 
>   name*1="TF-8?Q?=ED=95=9C=EA=B8=80=ED=95=9C=EA=B8=80=ED=95=9C=EA=B8=8"; 
>   name*2="0testfil?=
>  =?UTF-8?Q?e=ED=95=9C=EA=B8=80=ED=95=9C=EA=B8=80=E"; 
>   name*3="D=95=9C?= =?UTF-8?Q?=EA=B8=80=ED=95=9C=EA=B8=80tes?=
>  =?UTF-8"; 
>   name*4="?Q?tfile=ED=95=9C?=
>  =?UTF-8?Q?=EA=B8=80=ED=95=9C=EA=B8=80=ED"; 
>   name*5="=95=9C=EA=B8=80=ED=95=9C=EA=B8=80?=
>  =?UTF-8?Q?testfile=ED=95"; 
>   name*6="=9C=EA=B8=80=ED=95=9C=EA=B8=80?=
>  =?UTF-8?Q?=ED=95=9C=EA=B8=8"; 
>   name*7="0=ED=95=9C=EA=B8=80testfile=ED=95=9C?=
>  =?UTF-8?Q?=EA=B8=80=E"; 
>   name*8="D=95=9C=EA=B8=80=ED=95=9C=EA=B8=80=ED=95=9C?=
>  =?UTF-8?Q?=EA="; 
>   name*9="B8=80testf?=
>  =?UTF-8?Q?ile=ED=95=9C=EA=B8=80=ED=95=9C=EA=B8="; 
>   name*10="80=ED=95=9C=EA=B8=80.txt?="
> Content-Transfer-Encoding: base64
> Content-Disposition: attachment; 
>   filename*0="=?UTF-8?Q?=ED=95=9C=EA=B8=80testfile=ED=95=9C=EA=B8=80?=
>  =?U"; 
>   
> filename*1="TF-8?Q?=ED=95=9C=EA=B8=80=ED=95=9C=EA=B8=80=ED=95=9C=EA=B8=8"; 
>   filename*2="0testfil?=
>  =?UTF-8?Q?e=ED=95=9C=EA=B8=80=ED=95=9C=EA=B8=80=E"; 
>   filename*3="D=95=9C?= =?UTF-8?Q?=EA=B8=80=ED=95=9C=EA=B8=80tes?=
>  =?UTF-8"; 
>   filename*4="?Q?tfile=ED=95=9C?=
>  =?UTF-8?Q?=EA=B8=80=ED=95=9C=EA=B8=80=ED"; 
>   filename*5="=95=9C=EA=B8=80=ED=95=9C=EA=B8=80?=
>  =?UTF-8?Q?testfile=ED=95"; 
>   filename*6="=9C=EA=B8=80=ED=95=9C=EA=B8=80?=
>  =?UTF-8?Q?=ED=95=9C=EA=B8=8"; 
>   filename*7="0=ED=95=9C=EA=B8=80testfile=ED=95=9C?=
>  =?UTF-8?Q?=EA=B8=80=E"; 
>   filename*8="D=95=9C=EA=B8=80=ED=95=9C=EA=B8=80=ED=95=9C?=
>  =?UTF-8?Q?=EA="; 
>   filename*9="B8=80testf?=
>  =?UTF-8?Q?ile=ED=95=9C=EA=B8=80=ED=95=9C=EA=B8="; 
>   filename*10="80=ED=95=9C=EA=B8=80.txt?="
> When I analyzed the cause, it works normally if MimeUtility.encodeText is not 
> called.
> ex)
> [not working]
> MultiPartEmail Line 467 bodyPart.setF

[jira] [Closed] (IO-614) Add classes TaggedWriter, ClosedWriter and BrokenWriter. #86

2019-08-08 Thread Gary Gregory (JIRA)


 [ 
https://issues.apache.org/jira/browse/IO-614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Gregory closed IO-614.
---
   Resolution: Fixed
 Assignee: Gary Gregory
Fix Version/s: 2.7

In git master.

> Add classes TaggedWriter, ClosedWriter and BrokenWriter. #86
> 
>
> Key: IO-614
> URL: https://issues.apache.org/jira/browse/IO-614
> Project: Commons IO
>  Issue Type: New Feature
>Reporter: Gary Gregory
>Assignee: Gary Gregory
>Priority: Major
> Fix For: 2.7
>
>
> Add classes TaggedWriter, ClosedWriter and BrokenWriter. #86



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (IO-614) Add classes TaggedWriter, ClosedWriter and BrokenWriter. #86

2019-08-08 Thread Gary Gregory (JIRA)
Gary Gregory created IO-614:
---

 Summary: Add classes TaggedWriter, ClosedWriter and BrokenWriter. 
#86
 Key: IO-614
 URL: https://issues.apache.org/jira/browse/IO-614
 Project: Commons IO
  Issue Type: New Feature
Reporter: Gary Gregory


Add classes TaggedWriter, ClosedWriter and BrokenWriter. #86



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [commons-io] garydgregory merged pull request #86: Added TaggedWriter, ClosedWriter and BrokenWriter.

2019-08-08 Thread GitBox
garydgregory merged pull request #86: Added TaggedWriter, ClosedWriter and 
BrokenWriter.
URL: https://github.com/apache/commons-io/pull/86
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (NUMBERS-99) Fraction.add(int) and Fraction.subtract(int) ignore risk of integer overflow

2019-08-08 Thread Heinrich Bohne (JIRA)


[ 
https://issues.apache.org/jira/browse/NUMBERS-99?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903381#comment-16903381
 ] 

Heinrich Bohne commented on NUMBERS-99:
---

You're right, this issue is still open. However, the problem is that I found a 
gaping hole in the public contract of the {{Fraction}} class: Since the 
positivity of the denominator is an implementation detail, the exact 
circumstances under which the factory method {{of(int, int)}} will throw an 
{{ArithmeticException}} remain a mystery to the user, which means that this 
factory method is essentially unpredictable when only going by the 
specification and not the implementation.

I thought that this might be solvable by simply _not_ requiring the stored 
denominator to be positive, but this would probably break a lot of 
functionality whose implementation would have to be adjusted, so this should be 
discussed on the dev mailing list. But since the resolution of this bug would 
also depend on what course of action will be taken with regard to the issue I 
just mentioned, I thought it might be better to leave this issue open, for now.

> Fraction.add(int) and Fraction.subtract(int) ignore risk of integer overflow
> 
>
> Key: NUMBERS-99
> URL: https://issues.apache.org/jira/browse/NUMBERS-99
> Project: Commons Numbers
>  Issue Type: Bug
>  Components: fraction
>Reporter: Heinrich Bohne
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The methods {{add(int)}} and {{subtract(int)}} in the class 
> {{org.apache.commons.numbers.fraction.Fraction}} do not take into account the 
> risk of an integer overflow. For example, (2​^31^ - 1)/2 + 1 = (2​^31^ + 
> 1)/2, so the numerator overflows an {{int}}, but when calculated with 
> {{Fraction.add(int)}}, the method still returns normally.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Resolved] (NUMBERS-120) Major loss of precision in BigFraction.doubleValue() and BigFraction.floatValue()

2019-08-08 Thread Heinrich Bohne (JIRA)


 [ 
https://issues.apache.org/jira/browse/NUMBERS-120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heinrich Bohne resolved NUMBERS-120.

   Resolution: Fixed
Fix Version/s: 1.0

> Major loss of precision in BigFraction.doubleValue() and 
> BigFraction.floatValue()
> -
>
> Key: NUMBERS-120
> URL: https://issues.apache.org/jira/browse/NUMBERS-120
> Project: Commons Numbers
>  Issue Type: Bug
>  Components: fraction
>Affects Versions: 1.0
>Reporter: Heinrich Bohne
>Priority: Minor
> Fix For: 1.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The method {{BigFraction.doubleValue()}} calculates the double value of 
> fractions with numerators or denominators that, when converted to a 
> {{double}}, round up to {{Double.POSITIVE_INFINITY}}, by right-shifting both 
> the numerator and denominator synchronously until both numbers fit into 1023 
> bits. Apart from the fact that the maximum number of bits an integer 
> representable as a finite {{double}} can have is 1024 (an unbiased exponent 
> of 1023, which is the largest possible unbiased exponent of a {{double}} 
> number, means 1. ⋅ 2^1023^, which amounts to 1024 bits), this way of 
> converting the fraction to a {{double}} is incredibly wasteful with precision 
> if the numerator and denominator have a different bit length, because the 
> smaller of the two numbers will be truncated beyond what is necessary to 
> represent it as a finite {{double}}. Here is an extreme example:
> The smallest integer that rounds up to {{Double.POSITIVE_INFINITY}} when 
> converted to a {{double}} is 2^1024^ - 2^970^. This is because 
> {{Double.MAX_VALUE}} as an integer is a 1024-bit number with the most 
> significant 53 bits set to 1 and all other 971 bits set to 0. If the 970 
> least significant bits are changed in any way, the number will still round 
> down to {{Double.MAX_VALUE}} as long as the 971st bit remains 0, but as soon 
> as the 971st bit is set to 1, the number will round up to 
> {{Double.POSITIVE_INFINITY}}.
> The smallest possible denominator greater than 1 where a single right-shift 
> will cause a loss of precision is 3. 2^1024^ - 2^970^ is divisible by 3, so 
> in order to create an irreducible fraction, let's add 1 to it:
> (2^1024^ - 2^970^ + 1) / 3 ≈ 5.992310449541053 ⋅ 10^307^ (which can be 
> verified with {{BigDecimal}}, or, more easily, with [this online 
> tool|https://www.wolframalpha.com/input/?i=(2%5E1024+-+2%5E970+%2B+1)+%2F+3]). 
> However, the current implementation of BigFraction.doubleValue() returns 
> 8.98846567431158 ⋅ 10^307^, which differs from the correct result by a 
> relative error of 50%! The same problem applies to the method 
> {{BigFraction.floatValue()}}.
> This can be prevented by truncating the numerator and denominator separately, 
> so that for each of the two numbers, the maximum possible precision is 
> retained.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Resolved] (NUMBERS-132) ArithmeticUtils.gcd(int, int) can be simplified by performing the gcd algorithm on negative numbers

2019-08-08 Thread Heinrich Bohne (JIRA)


 [ 
https://issues.apache.org/jira/browse/NUMBERS-132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heinrich Bohne resolved NUMBERS-132.

   Resolution: Fixed
Fix Version/s: 1.0

> ArithmeticUtils.gcd(int, int) can be simplified by performing the gcd 
> algorithm on negative numbers
> ---
>
> Key: NUMBERS-132
> URL: https://issues.apache.org/jira/browse/NUMBERS-132
> Project: Commons Numbers
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.0
>Reporter: Heinrich Bohne
>Priority: Minor
> Fix For: 1.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The method {{ArithmeticUtils.gcd(int, int)}} currently handles the special 
> case of the non-negatable {{Integer.MIN_VALUE}} by converting the arguments 
> to {{long}}s if one of them is {{Integer.MIN_VALUE}} and performing two 
> iterations of the regular euclidean algorithm before handing the resulting 
> values over to a helper method that performs the binary gcd algorithm.
> However, the tactic used by {{gcd(long, long)}} is much more elegant: It just 
> converts positive arguments to their negative counterparts, thereby avoiding 
> the risk of overflow completely without having to make exceptions for special 
> cases and resorting to other data types.
> The method {{gcd(int, int)}} would likely be much more compact if it also 
> were to apply this technique.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (NUMBERS-123) "BigFraction(double)" is unnecessary

2019-08-08 Thread Heinrich Bohne (JIRA)


[ 
https://issues.apache.org/jira/browse/NUMBERS-123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903364#comment-16903364
 ] 

Heinrich Bohne commented on NUMBERS-123:


Sorry for the confusion, the "no" was referring to my perceived need to discuss 
this matter further, so if this is the determining factor, then "yes", the 
issue can be marked as resolved :)

> "BigFraction(double)" is unnecessary
> 
>
> Key: NUMBERS-123
> URL: https://issues.apache.org/jira/browse/NUMBERS-123
> Project: Commons Numbers
>  Issue Type: Improvement
>  Components: fraction
>Reporter: Gilles
>Assignee: Gilles
>Priority: Trivial
> Fix For: 1.0
>
> Attachments: NUMBERS-123__Javadoc.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Constructor {{BigFraction(double value)}} is only called from the 
> {{from(double value)}} method.
>  Actually, this constructor is misleading as it is indeed primarily a 
> conversion from which appropriate {{numerator}} and {{denominator}} fields 
> are computed; those could be set by
>  the "direct" constructor {{BigFraction(BigInteger num, BigInteger den)}}.
> Moreover, the private field {{ZERO}} goes through this conversion code 
> whereas it could constructed "directly", e.g. using {{of(0)}}. Similarly for 
> field {{ONE}}.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (NUMBERS-123) "BigFraction(double)" is unnecessary

2019-08-08 Thread Gilles (JIRA)


[ 
https://issues.apache.org/jira/browse/NUMBERS-123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903361#comment-16903361
 ] 

Gilles commented on NUMBERS-123:


bq. then no

As in "This issue must stay open"?

bq. open a separate JIRA ticket

Thus, as in "This issue can be resolved"?

> "BigFraction(double)" is unnecessary
> 
>
> Key: NUMBERS-123
> URL: https://issues.apache.org/jira/browse/NUMBERS-123
> Project: Commons Numbers
>  Issue Type: Improvement
>  Components: fraction
>Reporter: Gilles
>Assignee: Gilles
>Priority: Trivial
> Fix For: 1.0
>
> Attachments: NUMBERS-123__Javadoc.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Constructor {{BigFraction(double value)}} is only called from the 
> {{from(double value)}} method.
>  Actually, this constructor is misleading as it is indeed primarily a 
> conversion from which appropriate {{numerator}} and {{denominator}} fields 
> are computed; those could be set by
>  the "direct" constructor {{BigFraction(BigInteger num, BigInteger den)}}.
> Moreover, the private field {{ZERO}} goes through this conversion code 
> whereas it could constructed "directly", e.g. using {{of(0)}}. Similarly for 
> field {{ONE}}.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (NUMBERS-133) Speed up Primes.nextPrime(int)

2019-08-08 Thread Heinrich Bohne (JIRA)


[ 
https://issues.apache.org/jira/browse/NUMBERS-133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903358#comment-16903358
 ] 

Heinrich Bohne commented on NUMBERS-133:


I suppose you mean the congruence operators ≡ and ≢? Yes, sure. Should MathJax 
also be used in the documentation of {{private}} or package-private elements? 
If that's the case, I'll need to update the documentation in 
[PR #66|https://github.com/apache/commons-numbers/pull/66], where I used HTML 
tags instead of MathJax. The reason I didn't use MathJax in {{private}} or 
package-private documentation is that, as far as I understand, IDEs generally 
don't render MathJax, so that documentation is unlikely to be read in a format 
where it is displayed correctly. But I can convert the documentation to 
MathJax, no problem.
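
For reference, this is roughly what the two styles look like side by side 
(illustrative only; the method and wording below are placeholders, not the 
actual Javadoc from PR #66):
{code:java}
/**
 * HTML form (renders in IDEs):
 * Returns the smallest candidate c with c &gt; n and c &equiv; &plusmn;1 (mod 6).
 *
 * MathJax form (renders on the generated Javadoc site, which as far as I know
 * is where the MathJax script is loaded):
 * Returns the smallest candidate \(c\) with \(c \gt n\) and \(c \equiv \pm 1 \pmod{6}\).
 */
private static int nextCandidate(int n) {
    // Placeholder body; only the two Javadoc styles above matter here.
    return n;
}
{code}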

> Speed up Primes.nextPrime(int)
> --
>
> Key: NUMBERS-133
> URL: https://issues.apache.org/jira/browse/NUMBERS-133
> Project: Commons Numbers
>  Issue Type: Improvement
>  Components: primes
>Affects Versions: 1.0
>Reporter: Heinrich Bohne
>Priority: Minor
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The method {{Primes.nextPrime(int)}} can use the same algorithm to skip 
> multiples of certain primes as {{SmallPrimes.boundedTrialDivision(int, int, 
> List)}} uses, instead of hard-coding the alternating increment of 
> the trial candidate into a loop.
> Also, if the argument of the method is smaller than or equal to the 512th 
> prime number, the method can just infer the next higher prime number directly 
> from the array {{SmallPrimes.PRIMES}} without performing any calculations.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (NUMBERS-123) "BigFraction(double)" is unnecessary

2019-08-08 Thread Heinrich Bohne (JIRA)


[ 
https://issues.apache.org/jira/browse/NUMBERS-123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903341#comment-16903341
 ] 

Heinrich Bohne commented on NUMBERS-123:


If you mean whether I want to discuss the matter of constructor vs. factory 
method further, then no, I realize there are certain advantages that factory 
methods have over constructors. Should we, at some point in the future, find 
that the most pressing issue in this project at that time is that the 
validation and the reduction of the numerator and denominator is performed 
(whether implicitly or explicitly) both in the factory methods _and_ in the 
constructor, we can always open a separate JIRA ticket for that :D

> "BigFraction(double)" is unnecessary
> 
>
> Key: NUMBERS-123
> URL: https://issues.apache.org/jira/browse/NUMBERS-123
> Project: Commons Numbers
>  Issue Type: Improvement
>  Components: fraction
>Reporter: Gilles
>Assignee: Gilles
>Priority: Trivial
> Fix For: 1.0
>
> Attachments: NUMBERS-123__Javadoc.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Constructor {{BigFraction(double value)}} is only called from the 
> {{from(double value)}} method.
>  Actually, this constructor is misleading as it is indeed primarily a 
> conversion from which appropriate {{numerator}} and {{denominator}} fields 
> are computed; those could be set by
>  the "direct" constructor {{BigFraction(BigInteger num, BigInteger den)}}.
> Moreover, the private field {{ZERO}} goes through this conversion code 
> whereas it could constructed "directly", e.g. using {{of(0)}}. Similarly for 
> field {{ONE}}.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [commons-beanutils] ricardovdbroek commented on issue #7: BEANUTILS-520: Mitigate CVE-2014-0114 by enabling SuppressPropertiesB…

2019-08-08 Thread GitBox
ricardovdbroek commented on issue #7: BEANUTILS-520: Mitigate CVE-2014-0114 by 
enabling SuppressPropertiesB…
URL: https://github.com/apache/commons-beanutils/pull/7#issuecomment-519625394
 
 
   I noticed this fix was merged into version 1.x and is part of the 1.9.4 tag. 
What's usually the timeline for getting a new version in maven?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Work logged] (COMPRESS-485) Reproducible Builds: keep entries order when gathering ScatterZipOutputStream content in ParallelScatterZipCreator

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-485?focusedWorklogId=291477&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291477
 ]

ASF GitHub Bot logged work on COMPRESS-485:
---

Author: ASF GitHub Bot
Created on: 08/Aug/19 17:56
Start Date: 08/Aug/19 17:56
Worklog Time Spent: 10m 
  Work Description: bodewig commented on issue #79: Pull/78  COMPRESS-485 + 
Substituting 'synchronized' with faster and fully thread-safe collections 
'ConcurrentLinkedDeque' and iterators.
URL: https://github.com/apache/commons-compress/pull/79#issuecomment-519623961
 
 
   s/non-blocking/lock-free/g :man_facepalming: 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291477)
Time Spent: 4h 20m  (was: 4h 10m)

> Reproducible Builds: keep entries order when gathering ScatterZipOutputStream 
> content in ParallelScatterZipCreator
> --
>
> Key: COMPRESS-485
> URL: https://issues.apache.org/jira/browse/COMPRESS-485
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Archivers
>Affects Versions: 1.18
>Reporter: Hervé Boutemy
>Priority: Major
> Fix For: 1.19
>
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> Currently, zip files created using ParallelScatterZipCreator have random 
> entry order.
> This causes issues when trying to do Reproducible Builds with Maven 
> (MNG-6276).
> Studying ParallelScatterZipCreator, entries are kept sorted in memory in the 
> futures list: instead of writing each full scatter in sequence, iterating 
> over the futures should make it possible to write each zip entry in its 
> original order, without changing the API or the performance of the gathering 
> process.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [commons-compress] bodewig commented on issue #79: Pull/78 COMPRESS-485 + Substituting 'synchronized' with faster and fully thread-safe collections 'ConcurrentLinkedDeque' and iterators.

2019-08-08 Thread GitBox
bodewig commented on issue #79: Pull/78  COMPRESS-485 + Substituting 
'synchronized' with faster and fully thread-safe collections 
'ConcurrentLinkedDeque' and iterators.
URL: https://github.com/apache/commons-compress/pull/79#issuecomment-519623961
 
 
   s/non-blocking/lock-free/g :man_facepalming: 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Work logged] (NUMBERS-120) Major loss of precision in BigFraction.doubleValue() and BigFraction.floatValue()

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/NUMBERS-120?focusedWorklogId=291442&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291442
 ]

ASF GitHub Bot logged work on NUMBERS-120:
--

Author: ASF GitHub Bot
Created on: 08/Aug/19 17:30
Start Date: 08/Aug/19 17:30
Worklog Time Spent: 10m 
  Work Description: asfgit commented on pull request #63: [NUMBERS-120] 
Maximize precision of doubleValue() and floatValue() in BigFraction
URL: https://github.com/apache/commons-numbers/pull/63
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291442)
Time Spent: 1h 10m  (was: 1h)

> Major loss of precision in BigFraction.doubleValue() and 
> BigFraction.floatValue()
> -
>
> Key: NUMBERS-120
> URL: https://issues.apache.org/jira/browse/NUMBERS-120
> Project: Commons Numbers
>  Issue Type: Bug
>  Components: fraction
>Affects Versions: 1.0
>Reporter: Heinrich Bohne
>Priority: Minor
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The method {{BigFraction.doubleValue()}} calculates the double value of 
> fractions with numerators or denominators that, when converted to a 
> {{double}}, round up to {{Double.POSITIVE_INFINITY}}, by right-shifting both 
> the numerator and denominator synchronously until both numbers fit into 1023 
> bits. Apart from the fact that the maximum number of bits an integer 
> representable as a finite {{double}} can have is 1024 (an unbiased exponent 
> of 1023, which is the largest possible unbiased exponent of a {{double}} 
> number, means 1. ⋅ 2^1023^, which amounts to 1024 bits), this way of 
> converting the fraction to a {{double}} is incredibly wasteful with precision 
> if the numerator and denominator have a different bit length, because the 
> smaller of the two numbers will be truncated beyond what is necessary to 
> represent it as a finite {{double}}. Here is an extreme example:
> The smallest integer that rounds up to {{Double.POSITIVE_INFINITY}} when 
> converted to a {{double}} is 2^1024^ - 2^970^. This is because 
> {{Double.MAX_VALUE}} as an integer is a 1024-bit number with the most 
> significant 53 bits set to 1 and all other 971 bits set to 0. If the 970 
> least significant bits are changed in any way, the number will still round 
> down to {{Double.MAX_VALUE}} as long as the 971st bit remains 0, but as soon 
> as the 971st bit is set to 1, the number will round up to 
> {{Double.POSITIVE_INFINITY}}.
> The smallest possible denominator greater than 1 where a single right-shift 
> will cause a loss of precision is 3. 2^1024^ - 2^970^ is divisible by 3, so 
> in order to create an irreducible fraction, let's add 1 to it:
> (2^1024^ - 2^970^ + 1) / 3 ≈ 5.992310449541053 ⋅ 10^307^ (which can be 
> verified with {{BigDecimal}}, or, more easily, with [this online 
> tool|https://www.wolframalpha.com/input/?i=(2%5E1024+-+2%5E970+%2B+1)+%2F+3]). 
> However, the current implementation of BigFraction.doubleValue() returns 
> 8.98846567431158 ⋅ 10^307^, which differs from the correct result by a 
> relative error of 50%! The same problem applies to the method 
> {{BigFraction.floatValue()}}.
> This can be prevented by truncating the numerator and denominator separately, 
> so that for each of the two numbers, the maximum possible precision is 
> retained.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (NUMBERS-132) ArithmeticUtils.gcd(int, int) can be simplified by performing the gcd algorithm on negative numbers

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/NUMBERS-132?focusedWorklogId=291441&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291441
 ]

ASF GitHub Bot logged work on NUMBERS-132:
--

Author: ASF GitHub Bot
Created on: 08/Aug/19 17:30
Start Date: 08/Aug/19 17:30
Worklog Time Spent: 10m 
  Work Description: asfgit commented on pull request #67: NUMBERS-132: 
Perform gcd algorithm on negative numbers in ArithmeticUtils.gcd(int, int)
URL: https://github.com/apache/commons-numbers/pull/67
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291441)
Time Spent: 0.5h  (was: 20m)

> ArithmeticUtils.gcd(int, int) can be simplified by performing the gcd 
> algorithm on negative numbers
> ---
>
> Key: NUMBERS-132
> URL: https://issues.apache.org/jira/browse/NUMBERS-132
> Project: Commons Numbers
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.0
>Reporter: Heinrich Bohne
>Priority: Minor
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The method {{ArithmeticUtils.gcd(int, int)}} currently handles the special 
> case of the non-negatable {{Integer.MIN_VALUE}} by converting the arguments 
> to {{long}}s if one of them is {{Integer.MIN_VALUE}} and performing two 
> iterations of the regular euclidean algorithm before handing the resulting 
> values over to a helper method that performs the binary gcd algorithm.
> However, the tactic used by {{gcd(long, long)}} is much more elegant: It just 
> converts positive arguments to their negative counterparts, thereby avoiding 
> the risk of overflow completely without having to make exceptions for special 
> cases and resorting to other data types.
> The method {{gcd(int, int)}} would likely be much more compact if it also 
> were to apply this technique.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [commons-numbers] asfgit closed pull request #63: [NUMBERS-120] Maximize precision of doubleValue() and floatValue() in BigFraction

2019-08-08 Thread GitBox
asfgit closed pull request #63: [NUMBERS-120] Maximize precision of 
doubleValue() and floatValue() in BigFraction
URL: https://github.com/apache/commons-numbers/pull/63
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [commons-numbers] asfgit closed pull request #67: NUMBERS-132: Perform gcd algorithm on negative numbers in ArithmeticUtils.gcd(int, int)

2019-08-08 Thread GitBox
asfgit closed pull request #67: NUMBERS-132: Perform gcd algorithm on negative 
numbers in ArithmeticUtils.gcd(int, int)
URL: https://github.com/apache/commons-numbers/pull/67
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (NUMBERS-103) Travis build fails every time based on usage of oraclejdk8

2019-08-08 Thread Gilles (JIRA)


[ 
https://issues.apache.org/jira/browse/NUMBERS-103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903171#comment-16903171
 ] 

Gilles commented on NUMBERS-103:


PR was merged.  Can the issue be "resolved"?

> Travis build fails every time based on usage of oraclejdk8
> --
>
> Key: NUMBERS-103
> URL: https://issues.apache.org/jira/browse/NUMBERS-103
> Project: Commons Numbers
>  Issue Type: Bug
>Affects Versions: 1.0
>Reporter: Karl Heinz Marbaise
>Priority: Blocker
> Fix For: 1.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> *Problem*
> * The Travis build uses {{oraclejdk8}} which is not available on Travis.
> * Remove also {{sudo: false}} which is not supported anymore on Travis.
> *Goal*
> * Replace {{oraclejdk8}} with {{openjdk8}} which will succeed the build.
> * Remove {{sudo: false}}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (NUMBERS-92) Rename AbstractFormat to AbstractFractionFormat

2019-08-08 Thread Gilles (JIRA)


[ 
https://issues.apache.org/jira/browse/NUMBERS-92?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903169#comment-16903169
 ] 

Gilles commented on NUMBERS-92:
---

Seems obsoleted by NUMBERS-97.

> Rename AbstractFormat to AbstractFractionFormat
> ---
>
> Key: NUMBERS-92
> URL: https://issues.apache.org/jira/browse/NUMBERS-92
> Project: Commons Numbers
>  Issue Type: Improvement
>Reporter: Eric Barnhill
>Assignee: Eric Barnhill
>Priority: Minor
>
> FractionFormat and BigFractionFormat have an abstract class. It is named 
> AbstractFormat which is too general. Better to call it AbstractFractionFormat 
> I think.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (NUMBERS-97) Remove commons-numbers-fraction *Format classes

2019-08-08 Thread Gilles (JIRA)


[ 
https://issues.apache.org/jira/browse/NUMBERS-97?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903165#comment-16903165
 ] 

Gilles commented on NUMBERS-97:
---

bq. Remove standalone Format classes

This seems to have been done.  Can this issue be "resolved"?

> Remove commons-numbers-fraction *Format classes
> ---
>
> Key: NUMBERS-97
> URL: https://issues.apache.org/jira/browse/NUMBERS-97
> Project: Commons Numbers
>  Issue Type: Task
>Reporter: Eric Barnhill
>Assignee: Eric Barnhill
>Priority: Minor
>
> Remove standalone Format classes in commons-numbers-fraction . 
> Replace this functionality with VALJO-related parse() and toString() methods 
> within Fraction and BigFraction



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (NUMBERS-99) Fraction.add(int) and Fraction.subtract(int) ignore risk of integer overflow

2019-08-08 Thread Gilles (JIRA)


[ 
https://issues.apache.org/jira/browse/NUMBERS-99?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903161#comment-16903161
 ] 

Gilles commented on NUMBERS-99:
---

PR #34 was closed; will you provide another?

> Fraction.add(int) and Fraction.subtract(int) ignore risk of integer overflow
> 
>
> Key: NUMBERS-99
> URL: https://issues.apache.org/jira/browse/NUMBERS-99
> Project: Commons Numbers
>  Issue Type: Bug
>  Components: fraction
>Reporter: Heinrich Bohne
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The methods {{add(int)}} and {{subtract(int)}} in the class 
> {{org.apache.commons.numbers.fraction.Fraction}} do not take into account the 
> risk of an integer overflow. For example, (2​^31^ - 1)/2 + 1 = (2​^31^ + 
> 1)/2, so the numerator overflows an {{int}}, but when calculated with 
> {{Fraction.add(int)}}, the method still returns normally.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (COMPRESS-490) [lz4] Multiple unchecked exceptions when decompressing malformed input

2019-08-08 Thread Stefan Bodewig (JIRA)


[ 
https://issues.apache.org/jira/browse/COMPRESS-490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903140#comment-16903140
 ] 

Stefan Bodewig commented on COMPRESS-490:
-

Well, at least they have been vulnerable to negative sizes contained inside the 
frame format; that has been fixed as well.

> [lz4] Multiple unchecked exceptions when decompressing malformed input
> --
>
> Key: COMPRESS-490
> URL: https://issues.apache.org/jira/browse/COMPRESS-490
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Compressors
>Affects Versions: 1.18
> Environment: MacOS Mojave
> openjdk version "1.8.0_212"
> OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_212-b03)
> OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.212-b03, mixed mode)
>Reporter: Alex Rebert
>Priority: Minor
> Fix For: 1.19
>
> Attachments: ArithmeticException, ArrayIndexOutOfBoundsException1, 
> ArrayIndexOutOfBoundsException2
>
>
> Encountered multiple unchecked exceptions thrown from 
> {{FramedLZ4CompressorInputStream.read}} when parsing malformed files.
> {{ArrayIndexOutOfBoundsException}} and {{ArithmeticException}} are unchecked 
> exceptions that are not documented in this API; therefore, such exceptions 
> can cause stability issues in applications that are not expecting them. 
> Instead, an {{IOException}} should be thrown indicating that the input stream 
> contains malformed data.
> Stack traces for three distinct (but possibly related) sources of exceptions 
> follow:
> {noformat}
> java.lang.ArithmeticException: / by zero
> at 
> org.apache.commons.compress.compressors.lz77support.AbstractLZ77CompressorInputStream.tryToCopy(AbstractLZ77CompressorInputStream.java:314)
> at 
> org.apache.commons.compress.compressors.lz77support.AbstractLZ77CompressorInputStream.readBackReference(AbstractLZ77CompressorInputStream.java:291)
> at 
> org.apache.commons.compress.compressors.lz4.BlockLZ4CompressorInputStream.read(BlockLZ4CompressorInputStream.java:83)
> at 
> org.apache.commons.compress.compressors.lz4.FramedLZ4CompressorInputStream.readOnce(FramedLZ4CompressorInputStream.java:328)
> at 
> org.apache.commons.compress.compressors.lz4.FramedLZ4CompressorInputStream.read(FramedLZ4CompressorInputStream.java:145)
> at java.io.InputStream.read(InputStream.java:101)
> {noformat}
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException
> at java.lang.System.arraycopy(Native Method)
> at 
> org.apache.commons.compress.compressors.lz77support.AbstractLZ77CompressorInputStream.tryToCopy(AbstractLZ77CompressorInputStream.java:308)
> at 
> org.apache.commons.compress.compressors.lz77support.AbstractLZ77CompressorInputStream.readBackReference(AbstractLZ77CompressorInputStream.java:291)
> at 
> org.apache.commons.compress.compressors.lz4.BlockLZ4CompressorInputStream.read(BlockLZ4CompressorInputStream.java:83)
> at 
> org.apache.commons.compress.compressors.lz4.FramedLZ4CompressorInputStream.readOnce(FramedLZ4CompressorInputStream.java:328)
> at 
> org.apache.commons.compress.compressors.lz4.FramedLZ4CompressorInputStream.read(FramedLZ4CompressorInputStream.java:145)
> at java.io.InputStream.read(InputStream.java:101)
> {noformat}
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: -1
> at 
> org.apache.commons.compress.compressors.lz77support.AbstractLZ77CompressorInputStream.tryToCopy(AbstractLZ77CompressorInputStream.java:304)
> at 
> org.apache.commons.compress.compressors.lz77support.AbstractLZ77CompressorInputStream.readBackReference(AbstractLZ77CompressorInputStream.java:291)
> at 
> org.apache.commons.compress.compressors.lz4.BlockLZ4CompressorInputStream.read(BlockLZ4CompressorInputStream.java:83)
> at 
> org.apache.commons.compress.compressors.lz4.BlockLZ4CompressorInputStream.read(BlockLZ4CompressorInputStream.java:75)
> at 
> org.apache.commons.compress.compressors.lz4.FramedLZ4CompressorInputStream.readOnce(FramedLZ4CompressorInputStream.java:328)
> at 
> org.apache.commons.compress.compressors.lz4.FramedLZ4CompressorInputStream.read(FramedLZ4CompressorInputStream.java:145)
> at java.io.InputStream.read(InputStream.java:101)
> {noformat}
> The inputs were automatically generated by fuzzing, by repeatedly mutating 
> random bytes in a well-formed file.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Resolved] (COMPRESS-490) [lz4] Multiple unchecked exceptions when decompressing malformed input

2019-08-08 Thread Stefan Bodewig (JIRA)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Bodewig resolved COMPRESS-490.
-
Resolution: Fixed
  Assignee: (was: Stefan Bodewig)

Many thanks. I think I've fixed all cases where we trusted information read 
from the stream for LZ4 and Snappy in master now, at least for the non-framed 
formats.

Will have a second look at the framed versions on top of that.

> [lz4] Multiple unchecked exceptions when decompressing malformed input
> --
>
> Key: COMPRESS-490
> URL: https://issues.apache.org/jira/browse/COMPRESS-490
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Compressors
>Affects Versions: 1.18
> Environment: MacOS Mojave
> openjdk version "1.8.0_212"
> OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_212-b03)
> OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.212-b03, mixed mode)
>Reporter: Alex Rebert
>Priority: Minor
> Fix For: 1.19
>
> Attachments: ArithmeticException, ArrayIndexOutOfBoundsException1, 
> ArrayIndexOutOfBoundsException2
>
>
> Encountered multiple unchecked exceptions thrown from 
> {{FramedLZ4CompressorInputStream.read}} when parsing malformed files.
> {{ArrayIndexOutOfBoundsException}} and {{ArithmeticException}} are unchecked 
> exceptions that are not documented in this API; therefore, such exceptions 
> can cause stability issues in applications that are not expecting them. 
> Instead, an {{IOException}} should be thrown indicating that the input stream 
> contains malformed data.
> Stack traces for three distinct (but possibly related) sources of exceptions 
> follow:
> {noformat}
> java.lang.ArithmeticException: / by zero
> at 
> org.apache.commons.compress.compressors.lz77support.AbstractLZ77CompressorInputStream.tryToCopy(AbstractLZ77CompressorInputStream.java:314)
> at 
> org.apache.commons.compress.compressors.lz77support.AbstractLZ77CompressorInputStream.readBackReference(AbstractLZ77CompressorInputStream.java:291)
> at 
> org.apache.commons.compress.compressors.lz4.BlockLZ4CompressorInputStream.read(BlockLZ4CompressorInputStream.java:83)
> at 
> org.apache.commons.compress.compressors.lz4.FramedLZ4CompressorInputStream.readOnce(FramedLZ4CompressorInputStream.java:328)
> at 
> org.apache.commons.compress.compressors.lz4.FramedLZ4CompressorInputStream.read(FramedLZ4CompressorInputStream.java:145)
> at java.io.InputStream.read(InputStream.java:101)
> {noformat}
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException
> at java.lang.System.arraycopy(Native Method)
> at 
> org.apache.commons.compress.compressors.lz77support.AbstractLZ77CompressorInputStream.tryToCopy(AbstractLZ77CompressorInputStream.java:308)
> at 
> org.apache.commons.compress.compressors.lz77support.AbstractLZ77CompressorInputStream.readBackReference(AbstractLZ77CompressorInputStream.java:291)
> at 
> org.apache.commons.compress.compressors.lz4.BlockLZ4CompressorInputStream.read(BlockLZ4CompressorInputStream.java:83)
> at 
> org.apache.commons.compress.compressors.lz4.FramedLZ4CompressorInputStream.readOnce(FramedLZ4CompressorInputStream.java:328)
> at 
> org.apache.commons.compress.compressors.lz4.FramedLZ4CompressorInputStream.read(FramedLZ4CompressorInputStream.java:145)
> at java.io.InputStream.read(InputStream.java:101)
> {noformat}
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: -1
> at 
> org.apache.commons.compress.compressors.lz77support.AbstractLZ77CompressorInputStream.tryToCopy(AbstractLZ77CompressorInputStream.java:304)
> at 
> org.apache.commons.compress.compressors.lz77support.AbstractLZ77CompressorInputStream.readBackReference(AbstractLZ77CompressorInputStream.java:291)
> at 
> org.apache.commons.compress.compressors.lz4.BlockLZ4CompressorInputStream.read(BlockLZ4CompressorInputStream.java:83)
> at 
> org.apache.commons.compress.compressors.lz4.BlockLZ4CompressorInputStream.read(BlockLZ4CompressorInputStream.java:75)
> at 
> org.apache.commons.compress.compressors.lz4.FramedLZ4CompressorInputStream.readOnce(FramedLZ4CompressorInputStream.java:328)
> at 
> org.apache.commons.compress.compressors.lz4.FramedLZ4CompressorInputStream.read(FramedLZ4CompressorInputStream.java:145)
> at java.io.InputStream.read(InputStream.java:101)
> {noformat}
> The inputs were automatically generated by fuzzing, by repeatedly mutating 
> random bytes in a well-formed file.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (LOGGING-167) [Java-9][JPMS] automatic module cannot be used with jlink: commons.logging

2019-08-08 Thread Gary Gregory (JIRA)


[ 
https://issues.apache.org/jira/browse/LOGGING-167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903120#comment-16903120
 ] 

Gary Gregory commented on LOGGING-167:
--

PRs are welcome on GitHub :-)

> [Java-9][JPMS] automatic module cannot be used with jlink: commons.logging
> --
>
> Key: LOGGING-167
> URL: https://issues.apache.org/jira/browse/LOGGING-167
> Project: Commons Logging
>  Issue Type: Bug
>Affects Versions: 1.2
> Environment: MacOS
> Oracle Linux
>  
>Reporter: Denis Makogon
>Priority: Major
>
> I'm building a modular project, and one of my modules uses Apache HttpClient, 
> so my code transitively depends on commons-logging. When I build the 
> jlink-ed JRE, I see this:
> ```
> [ERROR] Error: automatic module cannot be used with jlink: commons.logging 
> from 
> file:home/root/.m2/repository/commons-logging/commons-logging/1.2/commons-logging-1.2.jar
> ```
> meaning that commons-logging and a few other modules (parts of the http 
> package) are not proper Java modules; jlink treats them as automatic modules 
> and refuses to include them in the image.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (LOGGING-167) [Java-9][JPMS] automatic module cannot be used with jlink: commons.logging

2019-08-08 Thread Denis Makogon (JIRA)
Denis Makogon created LOGGING-167:
-

 Summary: [Java-9][JPMS] automatic module cannot be used with 
jlink: commons.logging
 Key: LOGGING-167
 URL: https://issues.apache.org/jira/browse/LOGGING-167
 Project: Commons Logging
  Issue Type: Bug
Affects Versions: 1.2
 Environment: MacOS

Oracle Linux

 
Reporter: Denis Makogon


I'm building a modular project, and one of my modules uses Apache HttpClient, 
so my code transitively depends on commons-logging. When I build the jlink-ed 
JRE, I see this:
```
[ERROR] Error: automatic module cannot be used with jlink: commons.logging from 
file:home/root/.m2/repository/commons-logging/commons-logging/1.2/commons-logging-1.2.jar
```

meaning that commons-logging and a few other modules (parts of the http package) 
are not proper Java modules; jlink treats them as automatic modules and refuses 
to include them in the image.
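For context, a minimal sketch of the setup that triggers this; the module name {{com.example.app}} is illustrative, while {{commons.logging}} is the automatic module name from the error above:
{code:java}
// module-info.java (illustrative)
module com.example.app {
    // commons-logging 1.2 ships no module-info.class, so on the module path it
    // becomes the automatic module "commons.logging" - and jlink refuses to
    // link an image that contains an automatic module.
    requires commons.logging;
}
{code}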



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (NUMBERS-133) Speed up Primes.nextPrime(int)

2019-08-08 Thread Gilles (JIRA)


[ 
https://issues.apache.org/jira/browse/NUMBERS-133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903106#comment-16903106
 ] 

Gilles commented on NUMBERS-133:


Could you please replace the special characters (in the Javadoc) with MathJax? Thanks.

> Speed up Primes.nextPrime(int)
> --
>
> Key: NUMBERS-133
> URL: https://issues.apache.org/jira/browse/NUMBERS-133
> Project: Commons Numbers
>  Issue Type: Improvement
>  Components: primes
>Affects Versions: 1.0
>Reporter: Heinrich Bohne
>Priority: Minor
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The method {{Primes.nextPrime(int)}} can use the same algorithm to skip 
> multiples of certain primes as {{SmallPrimes.boundedTrialDivision(int, int, 
> List)}} uses, instead of hard-coding the alternating increment of 
> the trial candidate into a loop.
> Also, if the argument of the method is smaller than or equal to the 512th 
> prime number, the method can just infer the next higher prime number directly 
> from the array {{SmallPrimes.PRIMES}} without performing any calculations.
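For illustration, a minimal sketch of that direct-lookup idea; the {{smallPrimes}} array below is a stand-in for the internal {{SmallPrimes.PRIMES}} table (which holds the first 512 primes):
{code:java}
import java.util.Arrays;

public class NextPrimeLookup {
    // Stand-in for SmallPrimes.PRIMES; the real table is much longer.
    private static final int[] smallPrimes = {2, 3, 5, 7, 11, 13, 17, 19, 23, 29};

    /** Returns the smallest stored prime >= n, or -1 if n is beyond the table. */
    static int nextPrimeFromTable(int n) {
        int idx = Arrays.binarySearch(smallPrimes, n);
        if (idx >= 0) {
            return smallPrimes[idx]; // n itself is prime
        }
        int insertion = -idx - 1;    // index of the first prime greater than n
        return insertion < smallPrimes.length ? smallPrimes[insertion] : -1;
    }
}
{code}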



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (COMPRESS-490) [lz4] Multiple unchecked exceptions when decompressing malformed input

2019-08-08 Thread Alex Rebert (JIRA)


[ 
https://issues.apache.org/jira/browse/COMPRESS-490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903098#comment-16903098
 ] 

Alex Rebert commented on COMPRESS-490:
--

Yep! Feel free to use the inputs as you see fit. 

> [lz4] Multiple unchecked exceptions when decompressing malformed input
> --
>
> Key: COMPRESS-490
> URL: https://issues.apache.org/jira/browse/COMPRESS-490
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Compressors
>Affects Versions: 1.18
> Environment: MacOS Mojave
> openjdk version "1.8.0_212"
> OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_212-b03)
> OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.212-b03, mixed mode)
>Reporter: Alex Rebert
>Assignee: Stefan Bodewig
>Priority: Minor
> Fix For: 1.19
>
> Attachments: ArithmeticException, ArrayIndexOutOfBoundsException1, 
> ArrayIndexOutOfBoundsException2
>
>
> Encountered multiple unchecked exceptions thrown from 
> {{FramedLZ4CompressorInputStream.read}} when parsing malformed files.
> {{ArrayIndexOutOfBoundsException}} and {{ArithmeticException}} are unchecked 
> exceptions that are not documented in this API; therefore, such exceptions 
> can cause stability issues in applications that are not expecting them. 
> Instead, an {{IOException}} should be thrown indicating that the input stream 
> contains malformed data.
> Stack traces for three distinct (but possibly related) sources of exceptions 
> follow:
> {noformat}
> java.lang.ArithmeticException: / by zero
> at 
> org.apache.commons.compress.compressors.lz77support.AbstractLZ77CompressorInputStream.tryToCopy(AbstractLZ77CompressorInputStream.java:314)
> at 
> org.apache.commons.compress.compressors.lz77support.AbstractLZ77CompressorInputStream.readBackReference(AbstractLZ77CompressorInputStream.java:291)
> at 
> org.apache.commons.compress.compressors.lz4.BlockLZ4CompressorInputStream.read(BlockLZ4CompressorInputStream.java:83)
> at 
> org.apache.commons.compress.compressors.lz4.FramedLZ4CompressorInputStream.readOnce(FramedLZ4CompressorInputStream.java:328)
> at 
> org.apache.commons.compress.compressors.lz4.FramedLZ4CompressorInputStream.read(FramedLZ4CompressorInputStream.java:145)
> at java.io.InputStream.read(InputStream.java:101)
> {noformat}
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException
> at java.lang.System.arraycopy(Native Method)
> at 
> org.apache.commons.compress.compressors.lz77support.AbstractLZ77CompressorInputStream.tryToCopy(AbstractLZ77CompressorInputStream.java:308)
> at 
> org.apache.commons.compress.compressors.lz77support.AbstractLZ77CompressorInputStream.readBackReference(AbstractLZ77CompressorInputStream.java:291)
> at 
> org.apache.commons.compress.compressors.lz4.BlockLZ4CompressorInputStream.read(BlockLZ4CompressorInputStream.java:83)
> at 
> org.apache.commons.compress.compressors.lz4.FramedLZ4CompressorInputStream.readOnce(FramedLZ4CompressorInputStream.java:328)
> at 
> org.apache.commons.compress.compressors.lz4.FramedLZ4CompressorInputStream.read(FramedLZ4CompressorInputStream.java:145)
> at java.io.InputStream.read(InputStream.java:101)
> {noformat}
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: -1
> at 
> org.apache.commons.compress.compressors.lz77support.AbstractLZ77CompressorInputStream.tryToCopy(AbstractLZ77CompressorInputStream.java:304)
> at 
> org.apache.commons.compress.compressors.lz77support.AbstractLZ77CompressorInputStream.readBackReference(AbstractLZ77CompressorInputStream.java:291)
> at 
> org.apache.commons.compress.compressors.lz4.BlockLZ4CompressorInputStream.read(BlockLZ4CompressorInputStream.java:83)
> at 
> org.apache.commons.compress.compressors.lz4.BlockLZ4CompressorInputStream.read(BlockLZ4CompressorInputStream.java:75)
> at 
> org.apache.commons.compress.compressors.lz4.FramedLZ4CompressorInputStream.readOnce(FramedLZ4CompressorInputStream.java:328)
> at 
> org.apache.commons.compress.compressors.lz4.FramedLZ4CompressorInputStream.read(FramedLZ4CompressorInputStream.java:145)
> at java.io.InputStream.read(InputStream.java:101)
> {noformat}
> The inputs were automatically generated by fuzzing, by repeatedly mutating 
> random bytes in a well-formed file.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (COMPRESS-490) [lz4] Multiple unchecked exceptions when decompressing malformed input

2019-08-08 Thread Stefan Bodewig (JIRA)


[ 
https://issues.apache.org/jira/browse/COMPRESS-490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903094#comment-16903094
 ] 

Stefan Bodewig commented on COMPRESS-490:
-

Alex, can I include the archives in our code base as inputs for our unit 
tests? Or, in legalese: are you contributing the archives under term 5 of the 
Apache License, version 2.0 - [http://www.apache.org/licenses/LICENSE-2.0]?

> [lz4] Multiple unchecked exceptions when decompressing malformed input
> --
>
> Key: COMPRESS-490
> URL: https://issues.apache.org/jira/browse/COMPRESS-490
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Compressors
>Affects Versions: 1.18
> Environment: MacOS Mojave
> openjdk version "1.8.0_212"
> OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_212-b03)
> OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.212-b03, mixed mode)
>Reporter: Alex Rebert
>Assignee: Stefan Bodewig
>Priority: Minor
> Fix For: 1.19
>
> Attachments: ArithmeticException, ArrayIndexOutOfBoundsException1, 
> ArrayIndexOutOfBoundsException2
>
>
> Encountered multiple unchecked exceptions thrown from 
> {{FramedLZ4CompressorInputStream.read}} when parsing malformed files.
> {{ArrayIndexOutOfBoundsException}} and {{ArithmeticException}} are unchecked 
> exceptions that are not documented in this API; therefore, such exceptions 
> can cause stability issues in applications that are not expecting them. 
> Instead, an {{IOException}} should be thrown indicating that the input stream 
> contains malformed data.
> Stack traces for three distinct (but possibly related) sources of exceptions 
> follow:
> {noformat}
> java.lang.ArithmeticException: / by zero
> at 
> org.apache.commons.compress.compressors.lz77support.AbstractLZ77CompressorInputStream.tryToCopy(AbstractLZ77CompressorInputStream.java:314)
> at 
> org.apache.commons.compress.compressors.lz77support.AbstractLZ77CompressorInputStream.readBackReference(AbstractLZ77CompressorInputStream.java:291)
> at 
> org.apache.commons.compress.compressors.lz4.BlockLZ4CompressorInputStream.read(BlockLZ4CompressorInputStream.java:83)
> at 
> org.apache.commons.compress.compressors.lz4.FramedLZ4CompressorInputStream.readOnce(FramedLZ4CompressorInputStream.java:328)
> at 
> org.apache.commons.compress.compressors.lz4.FramedLZ4CompressorInputStream.read(FramedLZ4CompressorInputStream.java:145)
> at java.io.InputStream.read(InputStream.java:101)
> {noformat}
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException
> at java.lang.System.arraycopy(Native Method)
> at 
> org.apache.commons.compress.compressors.lz77support.AbstractLZ77CompressorInputStream.tryToCopy(AbstractLZ77CompressorInputStream.java:308)
> at 
> org.apache.commons.compress.compressors.lz77support.AbstractLZ77CompressorInputStream.readBackReference(AbstractLZ77CompressorInputStream.java:291)
> at 
> org.apache.commons.compress.compressors.lz4.BlockLZ4CompressorInputStream.read(BlockLZ4CompressorInputStream.java:83)
> at 
> org.apache.commons.compress.compressors.lz4.FramedLZ4CompressorInputStream.readOnce(FramedLZ4CompressorInputStream.java:328)
> at 
> org.apache.commons.compress.compressors.lz4.FramedLZ4CompressorInputStream.read(FramedLZ4CompressorInputStream.java:145)
> at java.io.InputStream.read(InputStream.java:101)
> {noformat}
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: -1
> at 
> org.apache.commons.compress.compressors.lz77support.AbstractLZ77CompressorInputStream.tryToCopy(AbstractLZ77CompressorInputStream.java:304)
> at 
> org.apache.commons.compress.compressors.lz77support.AbstractLZ77CompressorInputStream.readBackReference(AbstractLZ77CompressorInputStream.java:291)
> at 
> org.apache.commons.compress.compressors.lz4.BlockLZ4CompressorInputStream.read(BlockLZ4CompressorInputStream.java:83)
> at 
> org.apache.commons.compress.compressors.lz4.BlockLZ4CompressorInputStream.read(BlockLZ4CompressorInputStream.java:75)
> at 
> org.apache.commons.compress.compressors.lz4.FramedLZ4CompressorInputStream.readOnce(FramedLZ4CompressorInputStream.java:328)
> at 
> org.apache.commons.compress.compressors.lz4.FramedLZ4CompressorInputStream.read(FramedLZ4CompressorInputStream.java:145)
> at java.io.InputStream.read(InputStream.java:101)
> {noformat}
> The inputs were automatically generated by fuzzing, by repeatedly mutating 
> random bytes in a well-formed file.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (COMPRESS-490) [lz4] Multiple unchecked exceptions when decompressing malformed input

2019-08-08 Thread Stefan Bodewig (JIRA)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Bodewig updated COMPRESS-490:

Assignee: Stefan Bodewig

> [lz4] Multiple unchecked exceptions when decompressing malformed input
> --
>
> Key: COMPRESS-490
> URL: https://issues.apache.org/jira/browse/COMPRESS-490
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Compressors
>Affects Versions: 1.18
> Environment: MacOS Mojave
> openjdk version "1.8.0_212"
> OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_212-b03)
> OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.212-b03, mixed mode)
>Reporter: Alex Rebert
>Assignee: Stefan Bodewig
>Priority: Minor
> Fix For: 1.19
>
> Attachments: ArithmeticException, ArrayIndexOutOfBoundsException1, 
> ArrayIndexOutOfBoundsException2
>
>
> Encountered multiple unchecked exceptions thrown from 
> {{FramedLZ4CompressorInputStream.read}} when parsing malformed files.
> {{ArrayIndexOutOfBoundsException}} and {{ArithmeticException}} are unchecked 
> exceptions that are not documented in this API; therefore, such exceptions 
> can cause stability issues in applications that are not expecting them. 
> Instead, an {{IOException}} should be thrown indicating that the input stream 
> contains malformed data.
> Stack traces for three distinct (but possibly related) sources of exceptions 
> follow:
> {noformat}
> java.lang.ArithmeticException: / by zero
> at 
> org.apache.commons.compress.compressors.lz77support.AbstractLZ77CompressorInputStream.tryToCopy(AbstractLZ77CompressorInputStream.java:314)
> at 
> org.apache.commons.compress.compressors.lz77support.AbstractLZ77CompressorInputStream.readBackReference(AbstractLZ77CompressorInputStream.java:291)
> at 
> org.apache.commons.compress.compressors.lz4.BlockLZ4CompressorInputStream.read(BlockLZ4CompressorInputStream.java:83)
> at 
> org.apache.commons.compress.compressors.lz4.FramedLZ4CompressorInputStream.readOnce(FramedLZ4CompressorInputStream.java:328)
> at 
> org.apache.commons.compress.compressors.lz4.FramedLZ4CompressorInputStream.read(FramedLZ4CompressorInputStream.java:145)
> at java.io.InputStream.read(InputStream.java:101)
> {noformat}
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException
> at java.lang.System.arraycopy(Native Method)
> at 
> org.apache.commons.compress.compressors.lz77support.AbstractLZ77CompressorInputStream.tryToCopy(AbstractLZ77CompressorInputStream.java:308)
> at 
> org.apache.commons.compress.compressors.lz77support.AbstractLZ77CompressorInputStream.readBackReference(AbstractLZ77CompressorInputStream.java:291)
> at 
> org.apache.commons.compress.compressors.lz4.BlockLZ4CompressorInputStream.read(BlockLZ4CompressorInputStream.java:83)
> at 
> org.apache.commons.compress.compressors.lz4.FramedLZ4CompressorInputStream.readOnce(FramedLZ4CompressorInputStream.java:328)
> at 
> org.apache.commons.compress.compressors.lz4.FramedLZ4CompressorInputStream.read(FramedLZ4CompressorInputStream.java:145)
> at java.io.InputStream.read(InputStream.java:101)
> {noformat}
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: -1
> at 
> org.apache.commons.compress.compressors.lz77support.AbstractLZ77CompressorInputStream.tryToCopy(AbstractLZ77CompressorInputStream.java:304)
> at 
> org.apache.commons.compress.compressors.lz77support.AbstractLZ77CompressorInputStream.readBackReference(AbstractLZ77CompressorInputStream.java:291)
> at 
> org.apache.commons.compress.compressors.lz4.BlockLZ4CompressorInputStream.read(BlockLZ4CompressorInputStream.java:83)
> at 
> org.apache.commons.compress.compressors.lz4.BlockLZ4CompressorInputStream.read(BlockLZ4CompressorInputStream.java:75)
> at 
> org.apache.commons.compress.compressors.lz4.FramedLZ4CompressorInputStream.readOnce(FramedLZ4CompressorInputStream.java:328)
> at 
> org.apache.commons.compress.compressors.lz4.FramedLZ4CompressorInputStream.read(FramedLZ4CompressorInputStream.java:145)
> at java.io.InputStream.read(InputStream.java:101)
> {noformat}
> The inputs were automatically generated by fuzzing, by repeatedly mutating 
> random bytes in a well-formed file.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (COMPRESS-490) [lz4] Multiple unchecked exceptions when decompressing malformed input

2019-08-08 Thread Stefan Bodewig (JIRA)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Bodewig updated COMPRESS-490:

Fix Version/s: 1.19

> [lz4] Multiple unchecked exceptions when decompressing malformed input
> --
>
> Key: COMPRESS-490
> URL: https://issues.apache.org/jira/browse/COMPRESS-490
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Compressors
>Affects Versions: 1.18
> Environment: MacOS Mojave
> openjdk version "1.8.0_212"
> OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_212-b03)
> OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.212-b03, mixed mode)
>Reporter: Alex Rebert
>Priority: Minor
> Fix For: 1.19
>
> Attachments: ArithmeticException, ArrayIndexOutOfBoundsException1, 
> ArrayIndexOutOfBoundsException2
>
>
> Encountered multiple unchecked exceptions thrown from 
> {{FramedLZ4CompressorInputStream.read}} when parsing malformed files.
> {{ArrayIndexOutOfBoundsException}} and {{ArithmeticException}} are unchecked 
> exceptions that are not documented in this API; therefore, such exceptions 
> can cause stability issues in applications that are not expecting them. 
> Instead, an {{IOException}} should be thrown indicating that the input stream 
> contains malformed data.
> Stack traces for three distinct (but possibly related) sources of exceptions 
> follow:
> {noformat}
> java.lang.ArithmeticException: / by zero
> at 
> org.apache.commons.compress.compressors.lz77support.AbstractLZ77CompressorInputStream.tryToCopy(AbstractLZ77CompressorInputStream.java:314)
> at 
> org.apache.commons.compress.compressors.lz77support.AbstractLZ77CompressorInputStream.readBackReference(AbstractLZ77CompressorInputStream.java:291)
> at 
> org.apache.commons.compress.compressors.lz4.BlockLZ4CompressorInputStream.read(BlockLZ4CompressorInputStream.java:83)
> at 
> org.apache.commons.compress.compressors.lz4.FramedLZ4CompressorInputStream.readOnce(FramedLZ4CompressorInputStream.java:328)
> at 
> org.apache.commons.compress.compressors.lz4.FramedLZ4CompressorInputStream.read(FramedLZ4CompressorInputStream.java:145)
> at java.io.InputStream.read(InputStream.java:101)
> {noformat}
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException
> at java.lang.System.arraycopy(Native Method)
> at 
> org.apache.commons.compress.compressors.lz77support.AbstractLZ77CompressorInputStream.tryToCopy(AbstractLZ77CompressorInputStream.java:308)
> at 
> org.apache.commons.compress.compressors.lz77support.AbstractLZ77CompressorInputStream.readBackReference(AbstractLZ77CompressorInputStream.java:291)
> at 
> org.apache.commons.compress.compressors.lz4.BlockLZ4CompressorInputStream.read(BlockLZ4CompressorInputStream.java:83)
> at 
> org.apache.commons.compress.compressors.lz4.FramedLZ4CompressorInputStream.readOnce(FramedLZ4CompressorInputStream.java:328)
> at 
> org.apache.commons.compress.compressors.lz4.FramedLZ4CompressorInputStream.read(FramedLZ4CompressorInputStream.java:145)
> at java.io.InputStream.read(InputStream.java:101)
> {noformat}
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: -1
> at 
> org.apache.commons.compress.compressors.lz77support.AbstractLZ77CompressorInputStream.tryToCopy(AbstractLZ77CompressorInputStream.java:304)
> at 
> org.apache.commons.compress.compressors.lz77support.AbstractLZ77CompressorInputStream.readBackReference(AbstractLZ77CompressorInputStream.java:291)
> at 
> org.apache.commons.compress.compressors.lz4.BlockLZ4CompressorInputStream.read(BlockLZ4CompressorInputStream.java:83)
> at 
> org.apache.commons.compress.compressors.lz4.BlockLZ4CompressorInputStream.read(BlockLZ4CompressorInputStream.java:75)
> at 
> org.apache.commons.compress.compressors.lz4.FramedLZ4CompressorInputStream.readOnce(FramedLZ4CompressorInputStream.java:328)
> at 
> org.apache.commons.compress.compressors.lz4.FramedLZ4CompressorInputStream.read(FramedLZ4CompressorInputStream.java:145)
> at java.io.InputStream.read(InputStream.java:101)
> {noformat}
> The inputs were automatically generated by fuzzing, by repeatedly mutating 
> random bytes in a well-formed file.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (COMPRESS-488) ZipArchiveInputStream doesn't support .crx files (Chrome Extension)

2019-08-08 Thread Stefan Bodewig (JIRA)


[ 
https://issues.apache.org/jira/browse/COMPRESS-488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903043#comment-16903043
 ] 

Stefan Bodewig commented on COMPRESS-488:
-

I've added this as an extra shortcoming to the documentation with 
[https://github.com/apache/commons-compress/commit/b2026d350372d091366b475c2d41d4292103ada7]
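For anyone who needs a workaround today, a minimal sketch that skips the CRX header before handing the rest of the stream to {{ZipArchiveInputStream}}. It assumes the CRX2 layout ("Cr24" magic, then three little-endian 32-bit values for version, public-key length and signature length, followed by the key, the signature and a plain zip), which would account for the 566 extra bytes unzip reports below:
{code:java}
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.apache.commons.compress.archivers.zip.ZipArchiveEntry;
import org.apache.commons.compress.archivers.zip.ZipArchiveInputStream;
import org.apache.commons.compress.utils.IOUtils;

public class CrxList {
    public static void main(String[] args) throws IOException {
        try (InputStream in = Files.newInputStream(Paths.get("Lighthouse_v5.0.0.crx"));
             DataInputStream data = new DataInputStream(in)) {
            byte[] header = new byte[16];
            data.readFully(header);                 // "Cr24" + version + keyLen + sigLen
            ByteBuffer buf = ByteBuffer.wrap(header).order(ByteOrder.LITTLE_ENDIAN);
            buf.position(8);                        // skip magic and version
            int keyLen = buf.getInt();
            int sigLen = buf.getInt();
            IOUtils.skip(data, keyLen + sigLen);    // what remains is an ordinary zip
            try (ZipArchiveInputStream zip = new ZipArchiveInputStream(data)) {
                ZipArchiveEntry entry;
                while ((entry = zip.getNextZipEntry()) != null) {
                    System.out.println(entry.getName());
                }
            }
        }
    }
}
{code}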

 

> ZipArchiveInputStream doesn't support .crx files (Chrome Extension)
> ---
>
> Key: COMPRESS-488
> URL: https://issues.apache.org/jira/browse/COMPRESS-488
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Compressors
>Affects Versions: 1.18
> Environment: MacOS Mojave: 10.14.2 (18C54)
> java version "1.8.0_131"
> Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
> Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
>Reporter: Kyle Willett
>Priority: Major
> Attachments: Lighthouse_v5.0.0.crx
>
>
> Using this tool you can download Chrome extensions as .crx files from the 
> Chrome Web Store: 
> [https://www.maketecheasier.com/download-save-chrome-extension/]. Attempting 
> to extract these zip files using ZipArchiveInputStream, I receive the 
> following error: 
> {code:java}
> Unexpected record signature: 0X34327243{code}
> I attached an example .crx zip file.
> VirusTotal link to prove it's not malicious:
>  
> [https://www.virustotal.com/#/file/337a648e7cf30e6938a3cfeb6fed5f429b91ccd4559da4016eb72f0656bf5873/detection]
> On mac and linux I am able to unzip these files using the basic unzip 
> binaries, for example:
> {code:java}
> 15:53:37 $ unzip Lighthouse_v5.0.0.crx -d unzipped/
> Archive:  Lighthouse_v5.0.0.crx
> warning [Lighthouse_v5.0.0.crx]:  566 extra bytes at beginning or within 
> zipfile
>   (attempting to process anyway)
>    creating: unzipped/_locales/
>    creating: unzipped/images/
>   inflating: unzipped/popup.html     
>   inflating: unzipped/manifest.json 
>    creating: unzipped/scripts/
>    creating: unzipped/styles/
>    creating: unzipped/_locales/en/
>   inflating: unzipped/images/fail.svg 
>   inflating: unzipped/images/lh_logo_bg.png 
>   inflating: unzipped/images/lh_logo_bg_no-light.png 
>   inflating: unzipped/images/lh_logo_canary_bg.png 
>   inflating: unzipped/images/lh_logo_canary_icon.png 
>   inflating: unzipped/images/lh_logo_icon.png 
>   inflating: unzipped/images/lh_logo_icon_light.png 
>   inflating: unzipped/images/pass.svg 
>   inflating: unzipped/images/verified.svg 
>   inflating: unzipped/scripts/lighthouse-ext-bundle.js 
>   inflating: unzipped/scripts/popup.js 
>   inflating: unzipped/styles/lighthouse.css 
>   inflating: unzipped/styles/lighthouse-loading.css 
>   inflating: unzipped/_locales/en/messages.json 
>   inflating: unzipped/_locales/en/messages_canary.json 
>    creating: unzipped/_metadata/
>   inflating: unzipped/_metadata/verified_contents.json {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (COMPRESS-489) Reading Central File Header and Archive extra data record, with out Skipping

2019-08-08 Thread Stefan Bodewig (JIRA)


[ 
https://issues.apache.org/jira/browse/COMPRESS-489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903035#comment-16903035
 ] 

Stefan Bodewig commented on COMPRESS-489:
-

{{ZipFile}} does read the central directory and provides all extra data that is 
part of the central directory. What you are asking for is the "may return 
incomplete extra field data" part of 
[http://commons.apache.org/proper/commons-compress/zip.html#ZipArchiveInputStream_vs_ZipFile]
 .

{{ZipArchiveInputStream}} reads the archive from the start and hands out 
{{ZipArchiveEntry}} instances as soon as it has parsed the local file header. 
Technically it would be possible to add the extra fields of the central 
directory in retrospect once the stream has reached the central directory. Most 
of our consumers will have discarded the {{ZipArchiveEntry}} instances by then 
already, though. This is why we've decided not to even try that - keep in mind 
this is only one of several of {{ZipArchiveInputStream}}'s flaws - but rather 
tell people to use {{ZipFile}} whenever possible instead.

Even {{ZipFile}} does not try to parse the archive extra data, as its 
contents seem to only be related to the PKWARE archive encryption feature that 
we are not allowed to implement - see section "Incorporating PKWARE Proprietary 
Technology into Your Product" in the current APPNOTE.txt.
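To make the {{ZipFile}} route concrete, a minimal sketch that lists the extra fields {{ZipFile}} exposes after parsing the central directory; {{archive.zip}} is a placeholder name:
{code:java}
import java.io.File;
import java.io.IOException;
import java.util.Enumeration;

import org.apache.commons.compress.archivers.zip.ZipArchiveEntry;
import org.apache.commons.compress.archivers.zip.ZipExtraField;
import org.apache.commons.compress.archivers.zip.ZipFile;

public class CentralDirectoryExtras {
    public static void main(String[] args) throws IOException {
        // ZipFile parses the central directory, so the extra fields reported here
        // include the central-directory data that ZipArchiveInputStream may miss.
        try (ZipFile zip = new ZipFile(new File("archive.zip"))) {
            Enumeration<ZipArchiveEntry> entries = zip.getEntries();
            while (entries.hasMoreElements()) {
                ZipArchiveEntry entry = entries.nextElement();
                for (ZipExtraField extra : entry.getExtraFields()) {
                    System.out.println(entry.getName() + " -> " + extra.getHeaderId());
                }
            }
        }
    }
}
{code}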

 

 

> Reading Central File Header and Archive extra data record, with out Skipping
> 
>
> Key: COMPRESS-489
> URL: https://issues.apache.org/jira/browse/COMPRESS-489
> Project: Commons Compress
>  Issue Type: Improvement
>Reporter: Anvesh Mora
>Priority: Minor
> Attachments: image-2019-06-10-17-02-33-972.png
>
>
> - Some zip files store extra data in the Central File Header (CFH) and the 
> Archive Extra Data record (AED). Right now it seems that we skip this metadata 
> instead of making it available to consumers, who could then choose whether to 
> use it.
>  - It might sound like a small change, but right now this kind of flexibility 
> is not possible via inheritance, due to the many private and package-private 
> access specifiers (the class is NOT open for extension).
>  - If extension is not something we are aiming for, providing a method that 
> stores the CFH and AED data based on a flag should make it work.
>  
> I had a similar requirement in my current development and had to rewrite the 
> component, using the code snippet below in the 
> ZipArchiveInputStream#getNextZipEntry method:
> !image-2019-06-10-17-02-33-972.png!



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (COMPRESS-477) Support for splitted zip files

2019-08-08 Thread Stefan Bodewig (JIRA)


[ 
https://issues.apache.org/jira/browse/COMPRESS-477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16903023#comment-16903023
 ] 

Stefan Bodewig commented on COMPRESS-477:
-

As far as I know nobody is actively working on this feature.

But I'd at least like to explain the exception. Your archive contains a "Zip64 
end of central directory locator" (section 4.3.15 of the current appnote) which 
{{ZipFile}} uses to find the central directory. It does so by completely 
ignoring the "number of the disk" - as we only  support a single disk - and 
searching the current file for the "relative offset" - which is certainly wrong 
for your concatenated file.

You should be able to read your concatenated file using 
{{ZipArchiveInputStream}} just fine, though.
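For reference, a minimal sketch of that stream-based route; {{concatenated.zip}} stands for the concatenated file:
{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.apache.commons.compress.archivers.zip.ZipArchiveEntry;
import org.apache.commons.compress.archivers.zip.ZipArchiveInputStream;

public class ReadConcatenated {
    public static void main(String[] args) throws IOException {
        // ZipArchiveInputStream walks the local file headers from the start,
        // so it ignores the (now wrong) central-directory offsets.
        try (InputStream in = Files.newInputStream(Paths.get("concatenated.zip"));
             ZipArchiveInputStream zip = new ZipArchiveInputStream(in)) {
            ZipArchiveEntry entry;
            while ((entry = zip.getNextZipEntry()) != null) {
                System.out.println(entry.getName());
            }
        }
    }
}
{code}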

> Support for splitted zip files
> --
>
> Key: COMPRESS-477
> URL: https://issues.apache.org/jira/browse/COMPRESS-477
> Project: Commons Compress
>  Issue Type: New Feature
>  Components: Archivers
>Affects Versions: 1.18
>Reporter: Luis Filipe Nassif
>Priority: Major
>
> It would be very useful to support split zip files. I've read 
> [https://pkware.cachefly.net/webdocs/casestudies/APPNOTE.TXT] and understood 
> that simply concatenating the segments and removing the split signature 
> 0x08074b50 from the first segment would be sufficient, but it is not that 
> simple because Compress fails with the exception below:
> {code}
> Caused by: java.util.zip.ZipException: archive's ZIP64 end of central 
> directory locator is corrupt.
>  at 
> org.apache.commons.compress.archivers.zip.ZipFile.positionAtCentralDirectory64(ZipFile.java:924)
>  ~[commons-compress-1.18.jar:1.18]
>  at 
> org.apache.commons.compress.archivers.zip.ZipFile.positionAtCentralDirectory(ZipFile.java:901)
>  ~[commons-compress-1.18.jar:1.18]
>  at 
> org.apache.commons.compress.archivers.zip.ZipFile.populateFromCentralDirectory(ZipFile.java:621)
>  ~[commons-compress-1.18.jar:1.18]
>  at 
> org.apache.commons.compress.archivers.zip.ZipFile.(ZipFile.java:295) 
> ~[commons-compress-1.18.jar:1.18]
>  at 
> org.apache.commons.compress.archivers.zip.ZipFile.(ZipFile.java:280) 
> ~[commons-compress-1.18.jar:1.18]
>  at 
> org.apache.commons.compress.archivers.zip.ZipFile.(ZipFile.java:236) 
> ~[commons-compress-1.18.jar:1.18]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (NUMBERS-107) Cleanup build configuration - remove not used configurations

2019-08-08 Thread Gilles (JIRA)


[ 
https://issues.apache.org/jira/browse/NUMBERS-107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902995#comment-16902995
 ] 

Gilles commented on NUMBERS-107:


The PR seems to only remove the Clirr config, and not add anything to replace 
it.  If I'm correct, it cannot be merged as is...

> Cleanup build configuration - remove not used configurations
> 
>
> Key: NUMBERS-107
> URL: https://issues.apache.org/jira/browse/NUMBERS-107
> Project: Commons Numbers
>  Issue Type: Improvement
>Affects Versions: 1.0
>Reporter: Karl Heinz Marbaise
>Priority: Trivial
> Fix For: 1.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> *Problem*
> * There are entries in {{pom.xml}} and resources directories which are not 
> used.
> ** Clirr configuration {{src/main/resources/clirr}}
> ** References in {{doc/release/release.howto.txt}}
> ** {{true}}
> *Goal*
> * Remove the configuration parts which are not used at all



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (NUMBERS-123) "BigFraction(double)" is unnecessary

2019-08-08 Thread Gilles (JIRA)


[ 
https://issues.apache.org/jira/browse/NUMBERS-123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902990#comment-16902990
 ] 

Gilles commented on NUMBERS-123:


Can this issue be considered "resolved"?

> "BigFraction(double)" is unnecessary
> 
>
> Key: NUMBERS-123
> URL: https://issues.apache.org/jira/browse/NUMBERS-123
> Project: Commons Numbers
>  Issue Type: Improvement
>  Components: fraction
>Reporter: Gilles
>Assignee: Gilles
>Priority: Trivial
> Fix For: 1.0
>
> Attachments: NUMBERS-123__Javadoc.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Constructor {{BigFraction(double value)}} is only called from the 
> {{from(double value)}} method.
>  Actually, this constructor is misleading as it is indeed primarily a 
> conversion from which appropriate {{numerator}} and {{denominator}} fields 
> are computed; those could be set by
>  the "direct" constructor {{BigFraction(BigInteger num, BigInteger den)}}.
> Moreover, the private field {{ZERO}} goes through this conversion code 
> whereas it could be constructed "directly", e.g. using {{of(0)}}. Similarly for 
> field {{ONE}}.
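For illustration, a minimal sketch of the two construction paths, using the {{of(...)}} and {{from(double)}} factory names mentioned above (the exact signatures in the component may differ):
{code:java}
import org.apache.commons.numbers.fraction.BigFraction;

public class BigFractionPaths {
    public static void main(String[] args) {
        BigFraction direct = BigFraction.of(0);        // sets numerator/denominator directly
        BigFraction converted = BigFraction.from(0.0); // goes through the double-conversion code
        System.out.println(direct + " equals " + converted + ": " + direct.equals(converted));
    }
}
{code}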



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [commons-io] garydgregory commented on issue #88: Added TeeWriter.

2019-08-08 Thread GitBox
garydgregory commented on issue #88: Added TeeWriter.
URL: https://github.com/apache/commons-io/pull/88#issuecomment-519520538
 
 
   I will take a look over the weekend; I have an idea for a different 
implementation.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [commons-compress] Tibor17 commented on issue #79: Pull/78 COMPRESS-485 + Substituting 'synchronized' with faster and fully thread-safe collections 'ConcurrentLinkedDeque' and iterators.

2019-08-08 Thread GitBox
Tibor17 commented on issue #79: Pull/78  COMPRESS-485 + Substituting 
'synchronized' with faster and fully thread-safe collections 
'ConcurrentLinkedDeque' and iterators.
URL: https://github.com/apache/commons-compress/pull/79#issuecomment-519511995
 
 
   @bodewig 
   I am thankful to you.
   btw, great that you mentioned Kristian, he is my colleague in Apache. We 
have one common friend ;-)
   Cheers Tibor


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Work logged] (COMPRESS-485) Reproducible Builds: keep entries order when gathering ScatterZipOutputStream content in ParallelScatterZipCreator

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-485?focusedWorklogId=291250&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291250
 ]

ASF GitHub Bot logged work on COMPRESS-485:
---

Author: ASF GitHub Bot
Created on: 08/Aug/19 13:12
Start Date: 08/Aug/19 13:12
Worklog Time Spent: 10m 
  Work Description: Tibor17 commented on issue #79: Pull/78  COMPRESS-485 + 
Substituting 'synchronized' with faster and fully thread-safe collections 
'ConcurrentLinkedDeque' and iterators.
URL: https://github.com/apache/commons-compress/pull/79#issuecomment-519511995
 
 
   @bodewig 
   I am thankful to you.
   btw, great that you mentioned Kristian, he is my colleague in Apache. We 
have one common friend ;-)
   Cheers Tibor
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291250)
Time Spent: 4h 10m  (was: 4h)

> Reproducible Builds: keep entries order when gathering ScatterZipOutputStream 
> content in ParallelScatterZipCreator
> --
>
> Key: COMPRESS-485
> URL: https://issues.apache.org/jira/browse/COMPRESS-485
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Archivers
>Affects Versions: 1.18
>Reporter: Hervé Boutemy
>Priority: Major
> Fix For: 1.19
>
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> currently, zip files created using ParallelScatterZipCreator have random 
> order.
> This is causing issues when trying to do Reproducible Builds with Maven 
> MNG-6276
> Studying ParallelScatterZipCreator, entries are kept in order in memory in the 
> futures list: instead of writing each full scatter in sequence, iterating 
> over the futures should permit writing each zip entry in its original order, 
> without changing the API or the performance of the gathering process



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (MATH-1494) Find exponential curve fit to data using Jacquelin method

2019-08-08 Thread Gilles (JIRA)


[ 
https://issues.apache.org/jira/browse/MATH-1494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902865#comment-16902865
 ] 

Gilles commented on MATH-1494:
--

{quote}pointers
{quote}
Do you intend to contribute via [GitHub|https://github.com/apache/commons-math]?
 If not, [here is the repository at 
Apache|https://gitbox.apache.org/repos/asf?p=commons-math.git] from which you 
can provide patches (to be attached to this JIRA issue).

Associated functionality is in the ["o.a.c.m.fitting" 
package|https://gitbox.apache.org/repos/asf?p=commons-math.git;a=tree;f=src/main/java/org/apache/commons/math4/fitting]
 but IIUC, the implementation would not need the [boiler-plate code currently 
assumed by the 
design|https://gitbox.apache.org/repos/asf?p=commons-math.git;a=blob;f=src/main/java/org/apache/commons/math4/fitting/AbstractCurveFitter.java].
 That will probably require a discussion on the ["dev" 
ML|http://commons.apache.org/mail-lists.html].

Please note that "Commons Math" is being refactored into more manageable 
chunks: namely new modular components on which the next version will depend:
 * [Commons RNG|http://commons.apache.org/proper/commons-rng/]
 * [Commons Numbers|http://commons.apache.org/proper/commons-numbers/]
 * [Commons Geometry|http://commons.apache.org/proper/commons-geometry/]
 * [Commons Statistics|http://commons.apache.org/proper/commons-statistics/]

In particular, we need all the help we can get in order to provide official 
releases for the latter three.

"Commons Numbers" would welcome an additional module for function fitting. That 
would mean porting the code currently in the ["o.a.c.m.fitting" 
package|https://gitbox.apache.org/repos/asf?p=commons-math.git;a=tree;f=src/main/java/org/apache/commons/math4/fitting]
 package (albeit with a new design targeted at the [Java 8 function 
API|https://docs.oracle.com/javase/8/docs/api/java/util/function/DoubleUnaryOperator.html]);
 or at least, design the new module towards that goal.
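To make the "Java 8 function API" remark concrete, a purely illustrative sketch of an exponential model expressed as a {{DoubleUnaryOperator}}; the factory method and parameter values below are made up, not an existing API:
{code:java}
import java.util.function.DoubleUnaryOperator;

public class ExponentialModelSketch {
    /** Builds y = amplitude * exp(beta * x) + constant as a plain function. */
    static DoubleUnaryOperator exponential(double amplitude, double beta, double constant) {
        return x -> amplitude * Math.exp(beta * x) + constant;
    }

    public static void main(String[] args) {
        DoubleUnaryOperator model = exponential(2.5, -0.7, 1.0); // placeholder parameters
        System.out.println(model.applyAsDouble(3.0));            // evaluate the curve at x = 3
    }
}
{code}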

bq. How do I get this assigned to me?

No problem; consider it assigned to you. ;-)


> Find exponential curve fit to data using Jacquelin method
> -
>
> Key: MATH-1494
> URL: https://issues.apache.org/jira/browse/MATH-1494
> Project: Commons Math
>  Issue Type: New Feature
>Reporter: Tom Prodehl
>Priority: Major
>  Labels: curvefitter, exponential, fitting
> Fix For: 4.0
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Function to fit an exponential decay (negative beta) or growth (positive beta) 
> without an initial guess.
>  Based on [https://stackoverflow.com/a/39436209/545346]
>  Original source: Regressions et Equations integrales, Jean Jacquelin
>  [https://www.scribd.com/doc/14674814/Regressions-et-equations-integrales]
> The class will allow the usual variety of ways to provide the x and y inputs.
> Once computed, the instance can be queried for Amplitude, Beta, and Constant, 
> defining the curve y = Amplitude * exp(Beta * x) + Constant.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (COMPRESS-485) Reproducible Builds: keep entries order when gathering ScatterZipOutputStream content in ParallelScatterZipCreator

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-485?focusedWorklogId=291101&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291101
 ]

ASF GitHub Bot logged work on COMPRESS-485:
---

Author: ASF GitHub Bot
Created on: 08/Aug/19 09:20
Start Date: 08/Aug/19 09:20
Worklog Time Spent: 10m 
  Work Description: bodewig commented on pull request #79: Pull/78  
COMPRESS-485 + Substituting 'synchronized' with faster and fully thread-safe 
collections 'ConcurrentLinkedDeque' and iterators.
URL: https://github.com/apache/commons-compress/pull/79
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291101)
Time Spent: 4h  (was: 3h 50m)

> Reproducible Builds: keep entries order when gathering ScatterZipOutputStream 
> content in ParallelScatterZipCreator
> --
>
> Key: COMPRESS-485
> URL: https://issues.apache.org/jira/browse/COMPRESS-485
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Archivers
>Affects Versions: 1.18
>Reporter: Hervé Boutemy
>Priority: Major
> Fix For: 1.19
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> currently, zip files created using ParallelScatterZipCreator have random 
> order.
> This is causing issues when trying to do Reproducible Builds with Maven 
> MNG-6276
> Studying ParallelScatterZipCreator, entries are kept in order in memory in the 
> futures list: instead of writing each full scatter in sequence, iterating 
> over the futures should permit writing each zip entry in its original order, 
> without changing the API or the performance of the gathering process



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (COMPRESS-485) Reproducible Builds: keep entries order when gathering ScatterZipOutputStream content in ParallelScatterZipCreator

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-485?focusedWorklogId=291100&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291100
 ]

ASF GitHub Bot logged work on COMPRESS-485:
---

Author: ASF GitHub Bot
Created on: 08/Aug/19 09:20
Start Date: 08/Aug/19 09:20
Worklog Time Spent: 10m 
  Work Description: bodewig commented on issue #79: Pull/78  COMPRESS-485 + 
Substituting 'synchronized' with faster and fully thread-safe collections 
'ConcurrentLinkedDeque' and iterators.
URL: https://github.com/apache/commons-compress/pull/79#issuecomment-519440159
 
 
   merged, many thanks again.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291100)
Time Spent: 3h 50m  (was: 3h 40m)

> Reproducible Builds: keep entries order when gathering ScatterZipOutputStream 
> content in ParallelScatterZipCreator
> --
>
> Key: COMPRESS-485
> URL: https://issues.apache.org/jira/browse/COMPRESS-485
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Archivers
>Affects Versions: 1.18
>Reporter: Hervé Boutemy
>Priority: Major
> Fix For: 1.19
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> currently, zip files created using ParallelScatterZipCreator have random 
> order.
> This is causing issues when trying to do Reproducible Builds with Maven 
> MNG-6276
> Studying ParallelScatterZipCreator, entries are kept in order in memory in the 
> futures list: instead of writing each full scatter in sequence, iterating 
> over the futures should permit writing each zip entry in its original order, 
> without changing the API or the performance of the gathering process



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [commons-compress] bodewig closed pull request #79: Pull/78 COMPRESS-485 + Substituting 'synchronized' with faster and fully thread-safe collections 'ConcurrentLinkedDeque' and iterators.

2019-08-08 Thread GitBox
bodewig closed pull request #79: Pull/78  COMPRESS-485 + Substituting 
'synchronized' with faster and fully thread-safe collections 
'ConcurrentLinkedDeque' and iterators.
URL: https://github.com/apache/commons-compress/pull/79
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [commons-compress] bodewig commented on issue #79: Pull/78 COMPRESS-485 + Substituting 'synchronized' with faster and fully thread-safe collections 'ConcurrentLinkedDeque' and iterators.

2019-08-08 Thread GitBox
bodewig commented on issue #79: Pull/78  COMPRESS-485 + Substituting 
'synchronized' with faster and fully thread-safe collections 
'ConcurrentLinkedDeque' and iterators.
URL: https://github.com/apache/commons-compress/pull/79#issuecomment-519440159
 
 
   merged, many thanks again.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Work logged] (COMPRESS-485) Reproducible Builds: keep entries order when gathering ScatterZipOutputStream content in ParallelScatterZipCreator

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-485?focusedWorklogId=291099&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291099
 ]

ASF GitHub Bot logged work on COMPRESS-485:
---

Author: ASF GitHub Bot
Created on: 08/Aug/19 09:18
Start Date: 08/Aug/19 09:18
Worklog Time Spent: 10m 
  Work Description: bodewig commented on issue #79: Pull/78  COMPRESS-485 + 
Substituting 'synchronized' with faster and fully thread-safe collections 
'ConcurrentLinkedDeque' and iterators.
URL: https://github.com/apache/commons-compress/pull/79#issuecomment-519439520
 
 
   Oh, you don't need to rebase at all. I just realized all your changes were 
part of one distinct commit that I could pick out of the PR and apply 
separately. Sorry.
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291099)
Time Spent: 3h 40m  (was: 3.5h)

> Reproducible Builds: keep entries order when gathering ScatterZipOutputStream 
> content in ParallelScatterZipCreator
> --
>
> Key: COMPRESS-485
> URL: https://issues.apache.org/jira/browse/COMPRESS-485
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Archivers
>Affects Versions: 1.18
>Reporter: Hervé Boutemy
>Priority: Major
> Fix For: 1.19
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> currently, zip files created using ParallelScatterZipCreator have random 
> order.
> This is causing issues when trying to do Reproducible Builds with Maven 
> MNG-6276
> Studying ParallelScatterZipCreator, entries are kept in order in memory in the 
> futures list: instead of writing each full scatter in sequence, iterating 
> over the futures should permit writing each zip entry in its original order, 
> without changing the API or the performance of the gathering process



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [commons-compress] bodewig commented on issue #79: Pull/78 COMPRESS-485 + Substituting 'synchronized' with faster and fully thread-safe collections 'ConcurrentLinkedDeque' and iterators.

2019-08-08 Thread GitBox
bodewig commented on issue #79: Pull/78  COMPRESS-485 + Substituting 
'synchronized' with faster and fully thread-safe collections 
'ConcurrentLinkedDeque' and iterators.
URL: https://github.com/apache/commons-compress/pull/79#issuecomment-519439520
 
 
   Oh, you don't need to rebase at all. I just realized all your changes were 
part of one distinct commit that I could pick out of the PR and apply 
separately. Sorry.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Work logged] (COMPRESS-485) Reproducible Builds: keep entries order when gathering ScatterZipOutputStream content in ParallelScatterZipCreator

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-485?focusedWorklogId=291094&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291094
 ]

ASF GitHub Bot logged work on COMPRESS-485:
---

Author: ASF GitHub Bot
Created on: 08/Aug/19 09:15
Start Date: 08/Aug/19 09:15
Worklog Time Spent: 10m 
  Work Description: bodewig commented on issue #79: Pull/78  COMPRESS-485 + 
Substituting 'synchronized' with faster and fully thread-safe collections 
'ConcurrentLinkedDeque' and iterators.
URL: https://github.com/apache/commons-compress/pull/79#issuecomment-519438687
 
 
   Thanks @Tibor17 
   
   Actually I've more or less been thinking out loud and not raising issues.
   
   I was trying to figure out whether the `synchronized` usage was giving the 
API users - or our code - any extra guarantees that the non-blocking collection 
code didn't. Keep in mind that I'm not the author of the original code either. 
I am the one who added the `synchronized` around the iteration over `streams` - 
but completely overlooked the iteration over `futures` before that. Doesn't 
sound as if I was the most qualified person to comment ;-)
   
   What I meant with the first part was that if you added new threads once 
`writeTo` was underway, then whatever those new threads contributed would not be 
part of the result - whereas now the outcome is undefined. Looking at the current 
code in master, I see I've been wrong: `writeTo` does quite a few things before 
entering the synchronized block, so there is enough leeway anyway.
   
   No, I don't think we need to exclude the methods from each other. The class 
has a very clear usage pattern of two distinct phases:
   
   1. add all the things you want to add
   2. call `writeTo`
   
   and it should be clear that the result of calling `writeTo` before you are 
done with the first phase is a dubious idea. In particular as the javadocs of 
`writeTo` state it will shut down the executor.
   
   I am aware of the iteration guarantees of `ConcurrentLinkedDeque`. Back when 
Kristian added the parallel zip support, Commons Compress' baseline was still 
Java 5 (we bumped that to 6 with Compress 1.12 and to 7 with Compress 1.13, which 
is where we are today). If it had been Java 7 back then, I'm sure Kristian 
would have used non-blocking collections instead.
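   For reference, a minimal sketch of that two-phase usage; file and entry names are placeholders:
```java
import java.io.ByteArrayInputStream;
import java.io.File;
import java.nio.charset.StandardCharsets;

import org.apache.commons.compress.archivers.zip.ParallelScatterZipCreator;
import org.apache.commons.compress.archivers.zip.ZipArchiveEntry;
import org.apache.commons.compress.archivers.zip.ZipArchiveOutputStream;

public class TwoPhaseScatter {
    public static void main(String[] args) throws Exception {
        ParallelScatterZipCreator creator = new ParallelScatterZipCreator();

        // Phase 1: add everything that should end up in the archive.
        ZipArchiveEntry entry = new ZipArchiveEntry("hello.txt");
        entry.setMethod(ZipArchiveEntry.DEFLATED); // the method must be set for scatter entries
        creator.addArchiveEntry(entry,
            () -> new ByteArrayInputStream("hello".getBytes(StandardCharsets.UTF_8)));

        // Phase 2: call writeTo exactly once; it also shuts down the executor.
        try (ZipArchiveOutputStream out = new ZipArchiveOutputStream(new File("out.zip"))) {
            creator.writeTo(out);
        }
    }
}
```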
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291094)
Time Spent: 3.5h  (was: 3h 20m)

> Reproducible Builds: keep entries order when gathering ScatterZipOutputStream 
> content in ParallelScatterZipCreator
> --
>
> Key: COMPRESS-485
> URL: https://issues.apache.org/jira/browse/COMPRESS-485
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Archivers
>Affects Versions: 1.18
>Reporter: Hervé Boutemy
>Priority: Major
> Fix For: 1.19
>
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> currently, zip files created using ParallelScatterZipCreator have random 
> order.
> This is causing issues when trying to do Reproducible Builds with Maven 
> MNG-6276
> Studying ParallelScatterZipCreator, entries are kept in order in memory in the 
> futures list: instead of writing each full scatter in sequence, iterating 
> over the futures should permit writing each zip entry in its original order, 
> without changing the API or the performance of the gathering process



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [commons-compress] bodewig commented on issue #79: Pull/78 COMPRESS-485 + Substituting 'synchronized' with faster and fully thread-safe collections 'ConcurrentLinkedDeque' and iterators.

2019-08-08 Thread GitBox
bodewig commented on issue #79: Pull/78  COMPRESS-485 + Substituting 
'synchronized' with faster and fully thread-safe collections 
'ConcurrentLinkedDeque' and iterators.
URL: https://github.com/apache/commons-compress/pull/79#issuecomment-519438687
 
 
   Thanks @Tibor17 
   
   Actually I've more or less been thinking out loud and not raising issues.
   
   I was trying to figure out whether the `synchronized` usage was giving the 
API users - or our code - any extra guarantees that the non-blocking collection 
code didn't. Keep in mind that I'm not the author of the original code either. 
I am the one who added the `synchronized` around the iteration over `streams` - 
but completely overlooked the iteration over `futures` before that. Doesn't 
sound as if I was the most qualified person to comment ;-)
   
   What I meant with the first part was that if you added new threads once 
`writeTo` was underway, then whatever those new threads contributed would not be 
part of the result - whereas now the outcome is undefined. Looking at the current 
code in master, I see I've been wrong: `writeTo` does quite a few things before 
entering the synchronized block, so there is enough leeway anyway.
   
   No, I don't think we need to exclude the methods from each other. The class 
has a very clear usage pattern of two distinct phases:
   
   1. add all the things you want to add
   2. call `writeTo`
   
   and it should be clear that the result of calling `writeTo` before you are 
done with the first phase is a dubious idea. In particular as the javadocs of 
`writeTo` state it will shut down the executor.
   
   I am aware of the iteration guarantees of `ConcurrentLinkedDeque`. Back when 
Kristian added the parallel zip support, Commons Compress' baseline was still 
Java 5 (we bumped that to 6 with Compress 1.12 and to 7 with Compress 1.13, which 
is where we are today). If it had been Java 7 back then, I'm sure Kristian 
would have used non-blocking collections instead.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Work logged] (COMPRESS-485) Reproducible Builds: keep entries order when gathering ScatterZipOutputStream content in ParallelScatterZipCreator

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-485?focusedWorklogId=291070&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291070
 ]

ASF GitHub Bot logged work on COMPRESS-485:
---

Author: ASF GitHub Bot
Created on: 08/Aug/19 08:32
Start Date: 08/Aug/19 08:32
Worklog Time Spent: 10m 
  Work Description: Tibor17 commented on issue #79: Pull/78  COMPRESS-485 + 
Substituting 'synchronized' with faster and fully thread-safe collections 
'ConcurrentLinkedDeque' and iterators.
URL: https://github.com/apache/commons-compress/pull/79#issuecomment-519423686
 
 
   @bodewig 
   You have opened two things, but sorry, I have to say they are not related, 
and here is my explanation why:
   1. locking within private/public methods
   2. synchronized iterators
   
   From here on, I am referring to the code before #78.
   Regarding (1) you mentioned:
   
   > synchronized would ensure you couldn't spin off new threads and add 
entries to them while writing or closing
   
   The methods (`addArchiveEntry` and `submit`) and `writeTo` were not mutually 
exclusive. If we want them to be mutually exclusive, we should open another PR 
for that.
   
   If you want to prevent `writeTo` from being called twice or in parallel, we 
can do something like `if (es.isShutdown()) throw ...Exception`.
   Locking all 3 methods could be done with an `AtomicBoolean`, but that needs 
an example to discuss.
   
   Regarding (2):
   It is old style to wrap collections in `synchronizedList()` and to wrap 
iteration in `synchronized (streams) {}` blocks. Among the modern collections, 
`ConcurrentLinkedDeque` is beneficial because it does not require any external 
locking.
   There was also an issue because the variable `futures` was not a synchronized 
collection.
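   For what it's worth, a minimal sketch of that `AtomicBoolean` idea; the class and method bodies are illustrative only, not the actual ParallelScatterZipCreator code:
```java
import java.util.concurrent.atomic.AtomicBoolean;

public class WriteOnceGuard {
    private final AtomicBoolean writeToCalled = new AtomicBoolean(false);

    public void addEntry(String name) {
        if (writeToCalled.get()) {
            throw new IllegalStateException("Archive already written, cannot add " + name);
        }
        // ... submit the entry to the executor ...
    }

    public void writeTo() {
        if (!writeToCalled.compareAndSet(false, true)) {
            throw new IllegalStateException("writeTo may only be called once");
        }
        // ... gather the scatter streams and write the archive ...
    }
}
```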
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291070)
Time Spent: 3h 20m  (was: 3h 10m)

> Reproducible Builds: keep entries order when gathering ScatterZipOutputStream 
> content in ParallelScatterZipCreator
> --
>
> Key: COMPRESS-485
> URL: https://issues.apache.org/jira/browse/COMPRESS-485
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Archivers
>Affects Versions: 1.18
>Reporter: Hervé Boutemy
>Priority: Major
> Fix For: 1.19
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> currently, zip files created using ParallelScatterZipCreator have random 
> order.
> This is causing issues when trying to do Reproducible Builds with Maven 
> MNG-6276
> Studying ParallelScatterZipCreator, entries are kept in order in memory in the 
> futures list: instead of writing each full scatter in sequence, iterating 
> over the futures should permit writing each zip entry in its original order, 
> without changing the API or the performance of the gathering process



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [commons-compress] Tibor17 commented on issue #79: Pull/78 COMPRESS-485 + Substituting 'synchronized' with faster and fully thread-safe collections 'ConcurrentLinkedDeque' and iterators.

2019-08-08 Thread GitBox
Tibor17 commented on issue #79: Pull/78  COMPRESS-485 + Substituting 
'synchronized' with faster and fully thread-safe collections 
'ConcurrentLinkedDeque' and iterators.
URL: https://github.com/apache/commons-compress/pull/79#issuecomment-519423686
 
 
   @bodewig 
   You have raised two points, but I am sorry to say they are not related, 
and here is my explanation why:
   1. locking within private/public methods
   2. synchronized iterators
   
   From here on, I am referring to the state before #78.
   Regarding (1) you mentioned:
   
   > synchronized would ensure you couldn't spin off new threads and add 
entries to them while writing or closing
   
   The methods (`addArchiveEntry` and `submit`) and `writeTo` were not 
mutually exclusive. 
   If we want them to be mutually exclusive, we should open another PR for that.
   
   If you want to prevent `writeTo` from being called twice or in parallel, 
we can do this:
   `if (es.isShutdown()) throw ...Exception`.
   Locking all three methods can be done with an `AtomicBoolean`, but that 
needs an example to judge.
   
   Regarding (2):
   It is old style to use collections wrapped in `synchronizedList()` and to 
wrap iteration in `synchronized (streams) {}`. Among the modern collections, 
`ConcurrentLinkedDeque` is beneficial because it does not require any external 
locking.
   There was also one issue because the variable `futures` was not a 
synchronized collection.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Work logged] (COMPRESS-485) Reproducible Builds: keep entries order when gathering ScatterZipOutputStream content in ParallelScatterZipCreator

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-485?focusedWorklogId=291048&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-291048
 ]

ASF GitHub Bot logged work on COMPRESS-485:
---

Author: ASF GitHub Bot
Created on: 08/Aug/19 07:36
Start Date: 08/Aug/19 07:36
Worklog Time Spent: 10m 
  Work Description: Tibor17 commented on issue #79: Pull/78  COMPRESS-485 + 
Substituting 'synchronized' with faster and fully thread-safe collections 
'ConcurrentLinkedDeque' and iterators.
URL: https://github.com/apache/commons-compress/pull/79#issuecomment-519183637
 
 
   Hi @bodewig , I have to check the code myself again and rebase the branch 
since it is a three-month-old change.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 291048)
Time Spent: 3h 10m  (was: 3h)

> Reproducible Builds: keep entries order when gathering ScatterZipOutputStream 
> content in ParallelScatterZipCreator
> --
>
> Key: COMPRESS-485
> URL: https://issues.apache.org/jira/browse/COMPRESS-485
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Archivers
>Affects Versions: 1.18
>Reporter: Hervé Boutemy
>Priority: Major
> Fix For: 1.19
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Currently, zip files created using ParallelScatterZipCreator have random 
> entry order.
> This is causing issues when trying to do Reproducible Builds with Maven 
> (MNG-6276).
> Studying ParallelScatterZipCreator, the entries are kept in order in memory 
> in the futures list: instead of writing each full scatter stream in 
> sequence, iterating over the futures should make it possible to write each 
> zip entry in its original order, without changing the API or the performance 
> of the gathering process.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [commons-compress] Tibor17 edited a comment on issue #79: Pull/78 COMPRESS-485 + Substituting 'synchronized' with faster and fully thread-safe collections 'ConcurrentLinkedDeque' and iterator

2019-08-08 Thread GitBox
Tibor17 edited a comment on issue #79: Pull/78  COMPRESS-485 + Substituting 
'synchronized' with faster and fully thread-safe collections 
'ConcurrentLinkedDeque' and iterators.
URL: https://github.com/apache/commons-compress/pull/79#issuecomment-519183637
 
 
   Hi @bodewig , I have to check the code myself again and rebase the branch 
since it is a three-month-old change.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services