[jira] [Updated] (COMPRESS-552) OSGI check broken - try to load class BundleEvent always fails

2021-03-06 Thread Jira


 [ 
https://issues.apache.org/jira/browse/COMPRESS-552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Björn Michael updated COMPRESS-552:
---
Description: 
There is a check for running in OSGI env. in {{ZstdUtils}}, {{BrotliUtils}}, 
{{LZMAUtils}} and {{XZUtils}} like this:
{code:java}
static {
    cachedZstdAvailability = CachedAvailability.DONT_CACHE;
    try {
        Class.forName("org.osgi.framework.BundleEvent");
    } catch (final Exception ex) { // NOSONAR
        setCacheZstdAvailablity(true);
    }
}
{code}
Loading the class {{org.osgi.framework.BundleEvent}} always fails because the 
{{Import-Package}} directive for {{org.osgi.framework}} is missing from 
_MANIFEST.MF_. Otherwise it requires another, more sophisticated approach 
([https://stackoverflow.com/q/5879040/1061929]).

Tested with Eclipse 4.14 (org.eclipse.osgi_3.15.100.v20191114-1701.jar)

  was:
There is a check for running in OSGI env. in {{ZstdUtils}}, {{BrotliUtils}}, 
{{LZMAUtils}} and {{XZUtils}} like this:
{code:java}
static {
    cachedZstdAvailability = CachedAvailability.DONT_CACHE;
    try {
        Class.forName("org.osgi.framework.BundleEvent");
    } catch (final Exception ex) { // NOSONAR
        setCacheZstdAvailablity(true);
    }
}
{code}
Loading the class {{org.osgi.framework.BundleEvent}} always fails because the 
{{Import-Package}} directive for {{org.osgi.framework}} is missing from 
_MANIFEST.MF_. Otherwise it requires another, more sophisticated approach 
(https://stackoverflow.com/q/5879040/1061929).

Tested with Eclipse 4.14 (org.eclipse.osgi_3.15.100.v20191114-1701.jar)


> OSGI check broken - try to load class BundleEvent always fails
> --
>
> Key: COMPRESS-552
> URL: https://issues.apache.org/jira/browse/COMPRESS-552
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Compressors
>Affects Versions: 1.20
>Reporter: Björn Michael
>Priority: Major
>
> There is a check for running in OSGI env. in {{ZstdUtils}}, {{BrotliUtils}}, 
> {{LZMAUtils}} and {{XZUtils}} like this:
> {code:java}
> static {
>     cachedZstdAvailability = CachedAvailability.DONT_CACHE;
>     try {
>         Class.forName("org.osgi.framework.BundleEvent");
>     } catch (final Exception ex) { // NOSONAR
>         setCacheZstdAvailablity(true);
>     }
> }
> {code}
> Loading the class {{org.osgi.framework.BundleEvent}} always fails because the 
> {{Import-Package}} directive for {{org.osgi.framework}} is missing from 
> _MANIFEST.MF_. Otherwise it requires another, more sophisticated approach 
> ([https://stackoverflow.com/q/5879040/1061929]).
> Tested with Eclipse 4.14 (org.eclipse.osgi_3.15.100.v20191114-1701.jar)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (COMPRESS-552) OSGI check broken - try to load class BundleEvent always fails

2021-03-06 Thread Jira


[ 
https://issues.apache.org/jira/browse/COMPRESS-552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17296739#comment-17296739
 ] 

Björn Michael commented on COMPRESS-552:


{quote}Could you provide some test to reproduce this problem?
{quote}
I'm afraid I don't know how to write a small, narrowly scoped OSGi environment 
unit test.

 
{quote}So the only way out would be a Bundle-Activator?
{quote}
The easiest way would be to add
{code:java}
Import-Package: org.osgi.framework;resolution:=optional
{code}
to {{MANIFEST.MF}}.
 I also like the "classloader implements {{org.osgi.framework.BundleReference}}" 
check ([https://stackoverflow.com/a/5884211/1061929]).
 But instead of an {{instanceof}} check, which requires the class object of 
{{BundleReference}} at compile time and runtime, one could compare the 
classloader's interface (and superinterface) names to 
{{org.osgi.framework.BundleReference}} via {{java.lang.Class.getName()}}.
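A minimal sketch of that name-based check (an illustration only, not a patch; the method name and the walk over superclasses are assumptions):
{code:java}
private static boolean isOsgiEnvironment() {
    final ClassLoader loader = ZstdUtils.class.getClassLoader();
    if (loader == null) {
        return false; // bootstrap class loader, certainly no OSGi bundle
    }
    // Compare interface names instead of using instanceof, so that
    // org.osgi.framework.BundleReference is not needed at compile time.
    for (Class<?> c = loader.getClass(); c != null; c = c.getSuperclass()) {
        for (final Class<?> iface : c.getInterfaces()) {
            if ("org.osgi.framework.BundleReference".equals(iface.getName())) {
                return true;
            }
        }
    }
    return false;
}
{code}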

> OSGI check broken - try to load class BundleEvent always fails
> --
>
> Key: COMPRESS-552
> URL: https://issues.apache.org/jira/browse/COMPRESS-552
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Compressors
>Affects Versions: 1.20
>Reporter: Björn Michael
>Priority: Major
>
> There is a check for running in OSGI env. in {{ZstdUtils}}, {{BrotliUtils}}, 
> {{LZMAUtils}} and {{XZUtils}} like this:
> {code:java}
> static {
>     cachedZstdAvailability = CachedAvailability.DONT_CACHE;
>     try {
>         Class.forName("org.osgi.framework.BundleEvent");
>     } catch (final Exception ex) { // NOSONAR
>         setCacheZstdAvailablity(true);
>     }
> }
> {code}
> Loading the class {{org.osgi.framework.BundleEvent}} always fails because the 
> {{Import-Package}} directive for {{org.osgi.framework}} is missing from 
> _MANIFEST.MF_. Otherwise it requires another, more sophisticated approach 
> (https://stackoverflow.com/q/5879040/1061929).
> Tested with Eclipse 4.14 (org.eclipse.osgi_3.15.100.v20191114-1701.jar)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (CODEC-296) Add support for Blake3 family of hashes

2021-03-06 Thread Matt Sicker (Jira)


 [ 
https://issues.apache.org/jira/browse/CODEC-296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Sicker updated CODEC-296:
--
Fix Version/s: 1.16

> Add support for Blake3 family of hashes
> ---
>
> Key: CODEC-296
> URL: https://issues.apache.org/jira/browse/CODEC-296
> Project: Commons Codec
>  Issue Type: New Feature
>Reporter: Matt Sicker
>Assignee: Matt Sicker
>Priority: Major
> Fix For: 1.16
>
>
> Brief historical context: the original Blake hash algorithm was a finalist in 
> the SHA-3 competition. The second version, [Blake2|https://www.blake2.net/], 
> has several variants; the most popular seem to be Blake2b and Blake2s, though 
> there are a few others that are tuned and tweaked for different use cases. 
> This brings us to [Blake3|https://github.com/BLAKE3-team/BLAKE3], the third 
> version of the hash algorithm which unifies the variants into a single 
> interface along with further tuned security parameters for increased 
> performance with negligible impact on security.
> Blake3 is extremely versatile and offers interfaces to use it as a message 
> digest (hash) function, pseudorandom function (PRF), a message authentication 
> code (MAC), a key derivation function (KDF), and an extensible output 
> function (XOF). It is also faster than MD5 and SHA-1 (and SHA-2 and SHA-3) 
> while remaining secure which makes it handy for less security-intensive 
> contexts, too, like file hashing.
> While implementing a MessageDigestSpi is fairly straightforward, a MacSpi 
> needs to be loaded from a signed jar which would complicate things a bit 
> here. A generic commons API here might be easiest to deal with (previous idea 
> of SHO API is a higher level feature).
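Aside: a hypothetical sketch of what a small, generic commons-style Blake3 API could look like for the hashing and MAC-like use cases described above (the class and method names are assumptions, not a settled API):
{code:java}
import java.nio.charset.StandardCharsets;
import org.apache.commons.codec.digest.Blake3; // assumed package and class name

public class Blake3Sketch {
    public static void main(final String[] args) {
        final byte[] data = "hello".getBytes(StandardCharsets.UTF_8);

        // message-digest style use
        final Blake3 hasher = Blake3.initHash();
        hasher.update(data);
        final byte[] hash = new byte[32]; // 256-bit output; as an XOF, any length works
        hasher.doFinalize(hash);

        // keyed hashing (MAC-like use) with a 256-bit key
        final Blake3 mac = Blake3.initKeyedHash(new byte[32]);
        mac.update(data);
        final byte[] tag = new byte[32];
        mac.doFinalize(tag);
    }
}
{code}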



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [commons-codec] coveralls commented on pull request #79: CODEC-296: Add support for Blake3 family of hashes

2021-03-06 Thread GitBox


coveralls commented on pull request #79:
URL: https://github.com/apache/commons-codec/pull/79#issuecomment-792140078


   
   [![Coverage 
Status](https://coveralls.io/builds/37712148/badge)](https://coveralls.io/builds/37712148)
   
   Coverage increased (+0.2%) to 94.713% when pulling 
**b71233be7601e9710b6871e8743d34e86561b898 on jvz:blake3** into 
**482df6cabfb288acb6ab3e4a732fdb93aecfa7c2 on apache:master**.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [commons-codec] jvz opened a new pull request #79: CODEC-296: Add support for Blake3 family of hashes

2021-03-06 Thread GitBox


jvz opened a new pull request #79:
URL: https://github.com/apache/commons-codec/pull/79


   https://issues.apache.org/jira/browse/CODEC-296



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Work logged] (IMAGING-283) Add CIELAB and DIN99 conversion, reduce code duplication, and issues related to zero-division and precision

2021-03-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/IMAGING-283?focusedWorklogId=561888&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-561888
 ]

ASF GitHub Bot logged work on IMAGING-283:
--

Author: ASF GitHub Bot
Created on: 07/Mar/21 00:53
Start Date: 07/Mar/21 00:53
Worklog Time Spent: 10m 
  Work Description: kinow commented on pull request #114:
URL: https://github.com/apache/commons-imaging/pull/114#issuecomment-792135471


   I've also created the JIRA that will be included in our changelog, along 
with the credit to you @Brixomatic . Take a look at the issue & updated pull 
request title, and let me know if there's anything that's missing/incorrect and 
I'll fix it later.
   
   Cheers
   Bruno



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 561888)
Remaining Estimate: 0h
Time Spent: 10m

> Add CIELAB and DIN99 conversion, reduce code duplication, and issues related 
> to zero-division and precision
> ---
>
> Key: IMAGING-283
> URL: https://issues.apache.org/jira/browse/IMAGING-283
> Project: Commons Imaging
>  Issue Type: Improvement
>  Components: imaging.color.*
>Affects Versions: 1.0-alpha2
>Reporter: Bruno P. Kinoshita
>Assignee: Bruno P. Kinoshita
>Priority: Major
> Fix For: 1.0-alpha3
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Placeholder for https://github.com/apache/commons-imaging/pull/114



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [commons-imaging] kinow commented on pull request #114: [IMAGING-283] Add CIELAB and DIN99 conversion, reduce code duplication, and issues related to zero-division and precision

2021-03-06 Thread GitBox


kinow commented on pull request #114:
URL: https://github.com/apache/commons-imaging/pull/114#issuecomment-792135471


   I've also created the JIRA that will be included in our changelog, along 
with the credit to you @Brixomatic . Take a look at the issue & updated pull 
request title, and let me know if there's anything that's missing/incorrect and 
I'll fix it later.
   
   Cheers
   Bruno



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (IMAGING-283) Add CIELAB and DIN99 conversion, reduce code duplication, and issues related to zero-division and precision

2021-03-06 Thread Bruno P. Kinoshita (Jira)
Bruno P. Kinoshita created IMAGING-283:
--

 Summary: Add CIELAB and DIN99 conversion, reduce code duplication, 
and issues related to zero-division and precision
 Key: IMAGING-283
 URL: https://issues.apache.org/jira/browse/IMAGING-283
 Project: Commons Imaging
  Issue Type: Improvement
  Components: imaging.color.*
Affects Versions: 1.0-alpha2
Reporter: Bruno P. Kinoshita
Assignee: Bruno P. Kinoshita
 Fix For: 1.0-alpha3


Placeholder for https://github.com/apache/commons-imaging/pull/114



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [commons-imaging] kinow commented on a change in pull request #114: imaging.color extension/fix. More precision, DIN99b + DIN99o added to ColorConversion, division by zero error fixed.

2021-03-06 Thread GitBox


kinow commented on a change in pull request #114:
URL: https://github.com/apache/commons-imaging/pull/114#discussion_r588951722



##
File path: src/main/java/org/apache/commons/imaging/color/ColorConversions.java
##
@@ -747,4 +679,147 @@ public static ColorXyz convertCIELuvtoXYZ(final double L, final double u, final
 
         return new ColorXyz(X, Y, Z);
     }
+
+    public static ColorDIN99Lab convertCIELabToDIN99bLab(final ColorCieLab cie) {
+        return convertCIELabToDIN99bLab(cie.L, cie.a, cie.b);
+    }
+
+    public static ColorDIN99Lab convertCIELabToDIN99bLab(final double L, final double a, final double b) {
+        final double FAC_1 = 100.0 / Math.log(129.0 / 50.0); // = 105.51
+        final double kE = 1.0; // brightness factor, 1.0 for CIE reference conditions
+        final double kCH = 1.0; // chroma and hue factor, 1.0 for CIE reference conditions
+        final double ang = Math.toRadians(16.0);
+
+        final double L99 = kE * FAC_1 * Math.log(1. + 0.0158 * L);
+        double a99 = 0.0;
+        double b99 = 0.0;
+        if (a != 0.0 || b != 0.0) {
+            final double e = a * Math.cos(ang) + b * Math.sin(ang);
+            final double f = 0.7 * (b * Math.cos(ang) - a * Math.sin(ang));
+            final double G = Math.sqrt(e * e + f * f);
+            if (G != 0.) {
+                final double k = Math.log(1. + 0.045 * G) / (0.045 * kCH * kE * G);
+                a99 = k * e;
+                b99 = k * f;
+            }
+        }
+        return new ColorDIN99Lab(L99, a99, b99);
+    }
+
+    public static ColorCieLab convertDIN99bLabToCIELab(final ColorDIN99Lab dinb) {
+        return convertDIN99bLabToCIELab(dinb.L99, dinb.a99, dinb.b99);
+    }
+
+    public static ColorCieLab convertDIN99bLabToCIELab(final double L99b, final double a99b, final double b99b) {
+        final double kE = 1.0; // brightness factor, 1.0 for CIE reference conditions
+        final double kCH = 1.0; // chroma and hue factor, 1.0 for CIE reference conditions
+        final double FAC_1 = 100.0 / Math.log(129.0 / 50.0); // L99 scaling factor = 105.50867113783109
+        final double ang = Math.toRadians(16.0);
+
+        final double hef = Math.atan2(b99b, a99b);
+        final double C = Math.sqrt(a99b * a99b + b99b * b99b);
+        final double G = (Math.exp(0.045 * C * kCH * kE) - 1.0) / 0.045;
+        final double e = G * Math.cos(hef);
+        final double f = G * Math.sin(hef) / 0.7;
+
+        final double L = (Math.exp(L99b * kE / FAC_1) - 1.) / 0.0158;
+        final double a = e * Math.cos(ang) - f * Math.sin(ang);
+        final double b = e * Math.sin(ang) + f * Math.cos(ang);
+        return new ColorCieLab(L, a, b);
+    }
+
+    /** DIN99o, see: https://de.wikipedia.org/w/index.php?title=Diskussion:DIN99-Farbraum */
+    public static ColorDIN99Lab convertCIELabToDIN99oLab(final ColorCieLab cie) {
+        return convertCIELabToDIN99oLab(cie.L, cie.a, cie.b);
+    }
+
+    /** DIN99o, see: https://de.wikipedia.org/w/index.php?title=Diskussion:DIN99-Farbraum */
+    public static ColorDIN99Lab convertCIELabToDIN99oLab(final double L, final double a, final double b) {
+        final double kE = 1.0; // brightness factor, 1.0 for CIE reference conditions
+        final double kCH = 1.0; // chroma and hue factor, 1.0 for CIE reference conditions
+        final double FAC_1 = 100.0 / Math.log(139.0 / 100.0); // L99 scaling factor = 303.67100547050995
+        final double ang = Math.toRadians(26.0);
+
+        final double L99o = FAC_1 / kE * Math.log(1 + 0.0039 * L); // lightness correction kE
+        double a99o = 0.0;
+        double b99o = 0.0;
+        if (a != 0.0 || b != 0.0) {
+            final double eo = a * Math.cos(ang) + b * Math.sin(ang); // a stretching
+            final double fo = 0.83 * (b * Math.cos(ang) - a * Math.sin(ang)); // b rotation/stretching
+            final double Go = Math.sqrt(eo * eo + fo * fo); // chroma
+            final double C99o = Math.log(1.0 + 0.075 * Go) / (0.0435 * kCH * kE); // factor for chroma compression and viewing conditions
+            final double heofo = Math.atan2(fo, eo); // arctan in four quadrants
+            final double h99o = heofo + ang; // hue rotation
+            a99o = C99o * Math.cos(h99o);
+            b99o = C99o * Math.sin(h99o);
+        }
+        return new ColorDIN99Lab(L99o, a99o, b99o);
+    }
+
+    /** DIN99o, see: https://de.wikipedia.org/w/index.php?title=Diskussion:DIN99-Farbraum */
+    public static ColorCieLab convertDIN99oLabToCIELab(final ColorDIN99Lab dino) {
+        return convertDIN99oLabToCIELab(dino.L99, dino.a99, dino.b99);
+    }
+
+    /** DIN99o, see: https://de.wikipedia.org/w/index.php?title=Diskussion:DIN99-Farbraum */
+    public static ColorCieLab convertDIN99oLabToCIELab(final double L99o, final double a99o, final double b99o) {
+        final double kE = 1.0; // brightness factor, 1.0 for CIE reference conditions
+  

[GitHub] [commons-beanutils] dependabot[bot] closed pull request #64: Bump japicmp-maven-plugin from 0.15.1 to 0.15.2

2021-03-06 Thread GitBox


dependabot[bot] closed pull request #64:
URL: https://github.com/apache/commons-beanutils/pull/64


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [commons-beanutils] dependabot[bot] commented on pull request #64: Bump japicmp-maven-plugin from 0.15.1 to 0.15.2

2021-03-06 Thread GitBox


dependabot[bot] commented on pull request #64:
URL: https://github.com/apache/commons-beanutils/pull/64#issuecomment-792131274


   Looks like com.github.siom79.japicmp:japicmp-maven-plugin is up-to-date now, 
so this is no longer needed.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (CODEC-296) Add support for Blake3 family of hashes

2021-03-06 Thread Matt Sicker (Jira)


 [ 
https://issues.apache.org/jira/browse/CODEC-296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Sicker updated CODEC-296:
--
Description: 
Brief historical context: the original Blake hash algorithm was a finalist in 
the SHA-3 competition. The second version, [Blake2|https://www.blake2.net/], 
has several variants; the most popular seem to be Blake2b and Blake2s, though 
there are a few others that are tuned and tweaked for different use cases. This 
brings us to [Blake3|https://github.com/BLAKE3-team/BLAKE3], the third version 
of the hash algorithm which unifies the variants into a single interface along 
with further tuned security parameters for increased performance with 
negligible impact on security.

Blake3 is extremely versatile and offers interfaces to use it as a message 
digest (hash) function, pseudorandom function (PRF), a message authentication 
code (MAC), a key derivation function (KDF), and an extensible output function 
(XOF). It is also faster than MD5 and SHA-1 (and SHA-2 and SHA-3) while 
remaining secure which makes it handy for less security-intensive contexts, 
too, like file hashing.

While implementing a MessageDigestSpi is fairly straightforward, a MacSpi needs 
to be loaded from a signed jar which would complicate things a bit here. A 
generic commons API here might be easiest to deal with (previous idea of SHO 
API is a higher level feature).

  was:
Brief historical context: the original Blake hash algorithm was a finalist in 
the SHA-3 competition. The second version, [Blake2|https://www.blake2.net/], 
has several variants; the most popular seem to be Blake2b and Blake2s, though 
there are a few others that are tuned and tweaked for different use cases. This 
brings us to [Blake3|https://github.com/BLAKE3-team/BLAKE3], the third version 
of the hash algorithm which unifies the variants into a single interface along 
with further tuned security parameters for increased performance with 
negligible impact on security.

Blake3 is extremely versatile and offers interfaces to use it as a message 
digest (hash) function, pseudorandom function (PRF), a message authentication 
code (MAC), a key derivation function (KDF), and an extensible output function 
(XOF). It is also faster than MD5 and SHA-1 (and SHA-2 and SHA-3) while 
remaining secure which makes it handy for less security-intensive contexts, 
too, like file hashing.

While implementing a MessageDigestSpi is fairly straightforward, a MacSpi needs 
to be loaded from a signed jar which would complicate things a bit here. A more 
appropriate cryptographic API that encompasses all the above functionality 
would be an interface like the [stateful hash object 
(SHO)|https://github.com/noiseprotocol/sho_spec/blob/master/output/sho.pdf] 
from the noise protocol.


> Add support for Blake3 family of hashes
> ---
>
> Key: CODEC-296
> URL: https://issues.apache.org/jira/browse/CODEC-296
> Project: Commons Codec
>  Issue Type: New Feature
>Reporter: Matt Sicker
>Assignee: Matt Sicker
>Priority: Major
>
> Brief historical context: the original Blake hash algorithm was a finalist in 
> the SHA-3 competition. The second version, [Blake2|https://www.blake2.net/], 
> has several variants; the most popular seem to be Blake2b and Blake2s, though 
> there are a few others that are tuned and tweaked for different use cases. 
> This brings us to [Blake3|https://github.com/BLAKE3-team/BLAKE3], the third 
> version of the hash algorithm which unifies the variants into a single 
> interface along with further tuned security parameters for increased 
> performance with negligible impact on security.
> Blake3 is extremely versatile and offers interfaces to use it as a message 
> digest (hash) function, pseudorandom function (PRF), a message authentication 
> code (MAC), a key derivation function (KDF), and an extensible output 
> function (XOF). It is also faster than MD5 and SHA-1 (and SHA-2 and SHA-3) 
> while remaining secure which makes it handy for less security-intensive 
> contexts, too, like file hashing.
> While implementing a MessageDigestSpi is fairly straightforward, a MacSpi 
> needs to be loaded from a signed jar which would complicate things a bit 
> here. A generic commons API here might be easiest to deal with (previous idea 
> of SHO API is a higher level feature).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (COMPRESS-569) OutOfMemoryError on a crafted tar file

2021-03-06 Thread Fabian Meumertzheim (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17296664#comment-17296664
 ] 

Fabian Meumertzheim commented on COMPRESS-569:
--

[~bodewig] Yes, feel free to use the archive and reproducer under any license 
you want. I will let the fuzzer run on the fixed version 
(https://github.com/apache/commons-compress/commit/8543b030e93fa71b6093ac7d4cdb8c4e98bfd63d)
 and report back if I should get any more findings.

> OutOfMemoryError on a crafted tar file
> --
>
> Key: COMPRESS-569
> URL: https://issues.apache.org/jira/browse/COMPRESS-569
> Project: Commons Compress
>  Issue Type: Bug
>Affects Versions: 1.21
>Reporter: Fabian Meumertzheim
>Priority: Blocker
> Attachments: TarFileTimeout.java, timeout.tar
>
>
> Apache Commons Compress at commit
> https://github.com/apache/commons-compress/commit/1b7528fbd6295a3958daf1b1114621ee5e40e83c
>  throws an OutOfMemoryError after consuming ~5 minutes of CPU on my
> machine on a crafted tar archive that is less than a KiB in size:
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
> at java.base/sun.nio.cs.UTF_8.newDecoder(UTF_8.java:70)
> at 
> org.apache.commons.compress.archivers.zip.NioZipEncoding.newDecoder(NioZipEncoding.java:182)
> at 
> org.apache.commons.compress.archivers.zip.NioZipEncoding.decode(NioZipEncoding.java:135)
> at 
> org.apache.commons.compress.archivers.tar.TarUtils.parseName(TarUtils.java:311)
> at 
> org.apache.commons.compress.archivers.tar.TarUtils.parseName(TarUtils.java:275)
> at 
> org.apache.commons.compress.archivers.tar.TarArchiveEntry.parseTarHeader(TarArchiveEntry.java:1550)
> at 
> org.apache.commons.compress.archivers.tar.TarArchiveEntry.<init>(TarArchiveEntry.java:554)
> at 
> org.apache.commons.compress.archivers.tar.TarArchiveEntry.<init>(TarArchiveEntry.java:570)
> at 
> org.apache.commons.compress.archivers.tar.TarFile.getNextTarEntry(TarFile.java:250)
> at org.apache.commons.compress.archivers.tar.TarFile.<init>(TarFile.java:211)
> at org.apache.commons.compress.archivers.tar.TarFile.<init>(TarFile.java:94)
> at TarFileTimeout.main(TarFileTimeout.java:22)
> I attached both the tar file and a Java reproducer for this issue.
> Citing Stefan Bodewig's analysis of this issue:
> Your archive contains an entry with a claimed size of -512 bytes. When
> TarFile reads entries it tries to skip the content of the entry as it is
> only interested in meta data on a first scan and does so by positioning
> the input stream right after the data of the current entry. In this case
> it positions it 512 bytes backwards right at the start of the current
> entry's meta data again. This leads to an infinite loop that reads a new
> entry, stores the meta data, repositions the stream and starts over
> again. Over time the list of collected meta data eats up all available
> memory.
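A minimal sketch of the kind of guard that breaks this loop (field and method names are assumptions, not the actual fix): reject a negative claimed size before repositioning the channel.
{code:java}
final long size = entry.getSize();
if (size < 0) {
    // repositioning by a negative size would land back on the same header
    // and produce the infinite loop described above
    throw new IOException("broken archive, entry with negative size " + size);
}
{code}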



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (CODEC-296) Add support for Blake3 family of hashes

2021-03-06 Thread Matt Sicker (Jira)


 [ 
https://issues.apache.org/jira/browse/CODEC-296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Sicker updated CODEC-296:
--
Description: 
Brief historical context: the original Blake hash algorithm was a finalist in 
the SHA-3 competition. The second version, [Blake2|https://www.blake2.net/], 
has several variants; the most popular seem to be Blake2b and Blake2s, though 
there are a few others that are tuned and tweaked for different use cases. This 
brings us to [Blake3|https://github.com/BLAKE3-team/BLAKE3], the third version 
of the hash algorithm which unifies the variants into a single interface along 
with further tuned security parameters for increased performance with 
negligible impact on security.

Blake3 is extremely versatile and offers interfaces to use it as a message 
digest (hash) function, pseudorandom function (PRF), a message authentication 
code (MAC), a key derivation function (KDF), and an extensible output function 
(XOF). It is also faster than MD5 and SHA-1 (and SHA-2 and SHA-3) while 
remaining secure which makes it handy for less security-intensive contexts, 
too, like file hashing.

While implementing a MessageDigestSpi is fairly straightforward, a MacSpi needs 
to be loaded from a signed jar which would complicate things a bit here. A more 
appropriate cryptographic API that encompasses all the above functionality 
would be an interface like the [stateful hash object 
(SHO)|https://github.com/noiseprotocol/sho_spec/blob/master/output/sho.pdf] 
from the noise protocol.

  was:
Brief historical context: the original Blake hash algorithm was a finalist in 
the SHA-3 competition. The second version, [Blake2|https://www.blake2.net/], 
has several variants; the most popular seem to be Blake2b and Blake2s, though 
there are a few others that are tuned and tweaked for different use cases. This 
brings us to [Blake3|https://github.com/BLAKE3-team/BLAKE3], the third version 
of the hash algorithm which unifies the variants into a single interface along 
with further tuned security parameters for increased performance with 
negligible impact on security.

Blake3 is extremely versatile and offers interfaces to use it as a message 
digest (hash) function, pseudorandom function (PRF), a message authentication 
code (MAC), a key derivation function (KDF), and an extensible output function 
(XOF). It is also faster than MD5 and SHA-1 (and SHA-2 and SHA-3) while 
remaining secure which makes it handy for less security-intensive contexts, 
too, like file hashing.

I've previously ported the reference public domain Rust implementation to Java 
for another use case, so the algorithm itself is pretty simple to add here. 
This library doesn't define any interfaces for these concerns, though the Java 
crypto API does have some equivalents for most of those use cases, but they 
might be overkill for this.


> Add support for Blake3 family of hashes
> ---
>
> Key: CODEC-296
> URL: https://issues.apache.org/jira/browse/CODEC-296
> Project: Commons Codec
>  Issue Type: New Feature
>Reporter: Matt Sicker
>Assignee: Matt Sicker
>Priority: Major
>
> Brief historical context: the original Blake hash algorithm was a finalist in 
> the SHA-3 competition. The second version, [Blake2|https://www.blake2.net/], 
> has several variants; the most popular seem to be Blake2b and Blake2s, though 
> there are a few others that are tuned and tweaked for different use cases. 
> This brings us to [Blake3|https://github.com/BLAKE3-team/BLAKE3], the third 
> version of the hash algorithm which unifies the variants into a single 
> interface along with further tuned security parameters for increased 
> performance with negligible impact on security.
> Blake3 is extremely versatile and offers interfaces to use it as a message 
> digest (hash) function, pseudorandom function (PRF), a message authentication 
> code (MAC), a key derivation function (KDF), and an extensible output 
> function (XOF). It is also faster than MD5 and SHA-1 (and SHA-2 and SHA-3) 
> while remaining secure which makes it handy for less security-intensive 
> contexts, too, like file hashing.
> While implementing a MessageDigestSpi is fairly straightforward, a MacSpi 
> needs to be loaded from a signed jar which would complicate things a bit 
> here. A more appropriate cryptographic API that encompasses all the above 
> functionality would be an interface like the [stateful hash object 
> (SHO)|https://github.com/noiseprotocol/sho_spec/blob/master/output/sho.pdf] 
> from the noise protocol.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (COMPRESS-539) TarArchiveInputStream allocates a lot of memory when iterating through an archive

2021-03-06 Thread Stefan Bodewig (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17296643#comment-17296643
 ] 

Stefan Bodewig commented on COMPRESS-539:
-

[~peterlee] do you still intend to change anything or can we close this?

> TarArchiveInputStream allocates a lot of memory when iterating through an 
> archive
> -
>
> Key: COMPRESS-539
> URL: https://issues.apache.org/jira/browse/COMPRESS-539
> Project: Commons Compress
>  Issue Type: Bug
>Affects Versions: 1.20
>Reporter: Robin Schimpf
>Assignee: Peter Lee
>Priority: Major
> Attachments: Don't_call_InputStream#skip.patch, 
> Reuse_recordBuffer.patch, image-2020-06-21-10-58-07-917.png, 
> image-2020-06-21-10-58-43-255.png, image-2020-06-21-10-59-10-825.png, 
> image-2020-07-05-22-10-07-402.png, image-2020-07-05-22-11-25-526.png, 
> image-2020-07-05-22-32-15-131.png, image-2020-07-05-22-32-31-511.png
>
>
>  I iterated through the linux source tar and noticed some unneeded 
> allocations happen without extracting any data.
> Reproducing code
> {code:java}
> File tarFile = new File("linux-5.7.1.tar");
> try (TarArchiveInputStream in = new 
> TarArchiveInputStream(Files.newInputStream(tarFile.toPath()))) {
>     TarArchiveEntry entry;
>     while ((entry = in.getNextTarEntry()) != null) {
>     }
> }
> {code}
> The measurement was done on Java 11.0.7 with the Java Flight Recorder. 
> Options used: 
> -XX:StartFlightRecording=settings=profile,filename=allocations.jfr
> Baseline with the current master implementation:
>  Estimated TLAB allocation: 293MiB
> !image-2020-06-21-10-58-07-917.png!
> 1. IOUtils.skip -> input.skip(numToSkip)
>  This delegates in my test scenario to the InputStream.skip implementation 
> which allocates a new byte[] for every invocation. By simply commenting out 
> the while loop which calls the skip method the estimated TLAB allocation 
> drops to 164MiB (-129MiB).
>  !image-2020-06-21-10-58-43-255.png! 
>  Commenting out the skip call does not seem to be the best solution but it 
> was quick for me to see how much memory can be saved. Also no unit tests 
> were failing for me.
> 2. TarArchiveInputStream.readRecord
>  For every read of the record a new byte[] is created. Since the record size 
> does not change the byte[] can be reused and created when instantiating the 
> TarStream. This optimization is already present in the 
> TarArchiveOutputStream. Reusing the buffer reduces the estimated TLAB 
> allocations further to 128MiB (-36MiB).
>  !image-2020-06-21-10-59-10-825.png!
> I attached the patches I used so the results can be verified.
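For reference, a minimal sketch of the buffer-reuse idea from point 2 (field and method names assumed; this is not the attached patch):
{code:java}
private final byte[] recordBuffer = new byte[recordSize]; // allocated once

private byte[] readRecord() throws IOException {
    // refill the shared buffer instead of allocating a new byte[] per record
    final int readNow = IOUtils.readFully(inputStream, recordBuffer);
    count(readNow);
    if (readNow != recordSize) {
        return null; // EOF before a complete record
    }
    return recordBuffer;
}
{code}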



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [commons-imaging] kinow commented on a change in pull request #114: imaging.color extension/fix. More precision, DIN99b + DIN99o added to ColorConversion, division by zero error fixed.

2021-03-06 Thread GitBox


kinow commented on a change in pull request #114:
URL: https://github.com/apache/commons-imaging/pull/114#discussion_r588928755



##
File path: src/test/java/org/apache/commons/imaging/color/ColorConversionsTest.java
##
@@ -116,4 +117,36 @@ public void testXYZ() {
             Debug.debug("cieluv_xyz", cieluv_xyz);
         }
     }
+
+    @Test
+    public void testRGBtoDin99b() {
+        for (final int rgb : SAMPLE_RGBS) {
+
+            final ColorXyz xyz = ColorConversions.convertRGBtoXYZ(rgb);

Review comment:
   @Brixomatic it looks like there are tabs in this file. When you update 
this PR, could you check if there are any tabs and replace them with spaces, please?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [commons-imaging] kinow commented on a change in pull request #114: imaging.color extension/fix. More precision, DIN99b + DIN99o added to ColorConversion, division by zero error fixed.

2021-03-06 Thread GitBox


kinow commented on a change in pull request #114:
URL: https://github.com/apache/commons-imaging/pull/114#discussion_r588928623



##
File path: src/main/java/org/apache/commons/imaging/color/ColorCieLch.java
##
@@ -19,10 +19,14 @@
 /**
  * Represents a color in the CIELCH color space.
  *
- * Contains the constant values for black, white, red,
- * green, and blue.
+ * <p>
+ * Contains the constant values for black, white, red,
+ * green, and blue.
+ * </p>
+ * Changes: H renamed to h.

Review comment:
   Not a problem. But this needs to be removed from this file anyway in the 
meantime :+1: I will create the issue later once I think of a good 
title/description.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (COMPRESS-540) Random access on Tar archive

2021-03-06 Thread Stefan Bodewig (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17296642#comment-17296642
 ] 

Stefan Bodewig commented on COMPRESS-540:
-

I think this is complete by now, isn't it?

> Random access on Tar archive
> 
>
> Key: COMPRESS-540
> URL: https://issues.apache.org/jira/browse/COMPRESS-540
> Project: Commons Compress
>  Issue Type: Improvement
>Reporter: Robin Schimpf
>Priority: Major
> Fix For: 1.21
>
>  Time Spent: 10h 40m
>  Remaining Estimate: 0h
>
> The TarArchiveInputStream only provides sequential access. If only a small 
> number of files from the archive is needed, a large amount of data in the 
> input stream needs to be skipped.
> Therefore I was working on an implementation to provide random access to 
> tar files, comparable to the ZipFile API. The basic idea behind the 
> implementation is the following (a usage sketch follows the list):
>  * Random access is backed by a SeekableByteChannel
>  * Read all headers of the tar file and save the place to the data of every 
> header
>  * User can request an input stream for any entry in the archive multiple 
> times
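A short usage sketch of the described API (names as in the TarFile implementation; treat as illustrative):
{code:java}
try (TarFile tarFile = new TarFile(new File("archive.tar"))) {
    for (final TarArchiveEntry entry : tarFile.getEntries()) {
        // streams can be requested multiple times, in any order
        try (InputStream in = tarFile.getInputStream(entry)) {
            // read only the entries you need; nothing else is skipped over
        }
    }
}
{code}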



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (COMPRESS-540) Random access on Tar archive

2021-03-06 Thread Stefan Bodewig (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Bodewig updated COMPRESS-540:

Fix Version/s: 1.21

> Random access on Tar archive
> 
>
> Key: COMPRESS-540
> URL: https://issues.apache.org/jira/browse/COMPRESS-540
> Project: Commons Compress
>  Issue Type: Improvement
>Reporter: Robin Schimpf
>Priority: Major
> Fix For: 1.21
>
>  Time Spent: 10h 40m
>  Remaining Estimate: 0h
>
> The TarArchiveInputStream only provides sequential access. If only a small 
> number of files from the archive is needed, a large amount of data in the 
> input stream needs to be skipped.
> Therefore I was working on an implementation to provide random access to 
> tar files, comparable to the ZipFile API. The basic idea behind the 
> implementation is the following:
>  * Random access is backed by a SeekableByteChannel
>  * Read all headers of the tar file and save the place to the data of every 
> header
>  * User can request an input stream for any entry in the archive multiple 
> times



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (COMPRESS-545) Decompression fails with ArrayIndexOutOfBoundsException

2021-03-06 Thread Stefan Bodewig (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17296641#comment-17296641
 ] 

Stefan Bodewig commented on COMPRESS-545:
-

I just confirmed that both archives now lead to IOExceptions.

> Decompression fails with ArrayIndexOutOfBoundsException
> ---
>
> Key: COMPRESS-545
> URL: https://issues.apache.org/jira/browse/COMPRESS-545
> Project: Commons Compress
>  Issue Type: Bug
>Affects Versions: 1.20
>Reporter: Maksim Zuev
>Priority: Major
> Fix For: 1.21
>
> Attachments: ArrayIndexOutOfBoundsException1.zip, 
> ArrayIndexOutOfBoundsException2.zip
>
>
> This Kotlin code fails with an exception (ArrayIndexOutOfBoundsException1.zip 
> is in the attachments):
> Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: Index 30 out of bounds for length 29
>  at org.apache.commons.compress.compressors.deflate64.HuffmanDecoder$HuffmanCodes.decodeNext(HuffmanDecoder.java:321)
>  at org.apache.commons.compress.compressors.deflate64.HuffmanDecoder$HuffmanCodes.read(HuffmanDecoder.java:307)
>  at org.apache.commons.compress.compressors.deflate64.HuffmanDecoder.decode(HuffmanDecoder.java:152)
>  at org.apache.commons.compress.compressors.deflate64.Deflate64CompressorInputStream.read(Deflate64CompressorInputStream.java:84)
>  at org.apache.commons.compress.archivers.zip.ZipArchiveInputStream.read(ZipArchiveInputStream.java:493)
>  at java.base/java.io.InputStream.readNBytes(InputStream.java:396)
>  at java.base/java.io.InputStream.readAllBytes(InputStream.java:333)
>  at kotlinx.fuzzer.tests.apache.zip.ApacheZipTestKt.main(ApacheZipTest.kt:86)
>  at kotlinx.fuzzer.tests.apache.zip.ApacheZipTestKt.main(ApacheZipTest.kt)
> {code:java}
> import org.apache.commons.compress.archivers.ArchiveStreamFactory
> import java.io.ByteArrayInputStream
> import java.io.File
> 
> fun main() {
>     val bytes = File("ArrayIndexOutOfBoundsException1.zip").readBytes()
>     val input = ByteArrayInputStream(bytes)
>     ArchiveStreamFactory().createArchiveInputStream("zip", input).use { ais ->
>         ais.nextEntry
>         ais.readAllBytes()
>     }
> }
> {code}
> Expected some other exception, as IOException is the only one declared.
>  
> This Kotlin code fails with an exception (ArrayIndexOutOfBoundsException2.zip 
> is in the attachments):
> Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: Index -1 out of bounds for length 8192
>  at org.apache.commons.compress.compressors.lzw.LZWInputStream.expandCodeToOutputStack(LZWInputStream.java:232)
>  at org.apache.commons.compress.archivers.zip.UnshrinkingInputStream.decompressNextSymbol(UnshrinkingInputStream.java:124)
>  at org.apache.commons.compress.compressors.lzw.LZWInputStream.read(LZWInputStream.java:80)
>  at org.apache.commons.compress.archivers.zip.ZipArchiveInputStream.read(ZipArchiveInputStream.java:493)
>  at java.base/java.io.InputStream.readNBytes(InputStream.java:396)
>  at java.base/java.io.InputStream.readAllBytes(InputStream.java:333)
>  at kotlinx.fuzzer.tests.apache.zip.ApacheZipTestKt.main(ApacheZipTest.kt:86)
>  at kotlinx.fuzzer.tests.apache.zip.ApacheZipTestKt.main(ApacheZipTest.kt)
> {code:java}
> import org.apache.commons.compress.archivers.ArchiveStreamFactory
> import java.io.ByteArrayInputStream
> import java.io.File
> 
> fun main() {
>     val bytes = File("ArrayIndexOutOfBoundsException2.zip").readBytes()
>     val input = ByteArrayInputStream(bytes)
>     ArchiveStreamFactory().createArchiveInputStream("zip", input).use { ais ->
>         ais.nextEntry
>         ais.readAllBytes()
>     }
> }
> {code}
> Expected some other exception, as IOException is the only one declared.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (COMPRESS-545) Decompression fails with ArrayIndexOutOfBoundsException

2021-03-06 Thread Stefan Bodewig (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Bodewig updated COMPRESS-545:

Fix Version/s: 1.21

> Decompression fails with ArrayIndexOutOfBoundsException
> ---
>
> Key: COMPRESS-545
> URL: https://issues.apache.org/jira/browse/COMPRESS-545
> Project: Commons Compress
>  Issue Type: Bug
>Affects Versions: 1.20
>Reporter: Maksim Zuev
>Priority: Major
> Fix For: 1.21
>
> Attachments: ArrayIndexOutOfBoundsException1.zip, 
> ArrayIndexOutOfBoundsException2.zip
>
>
> This Kotlin code fails with an exception (ArrayIndexOutOfBoundsException1.zip 
> is in the attachments):
> Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: Index 30 out of bounds for length 29
>  at org.apache.commons.compress.compressors.deflate64.HuffmanDecoder$HuffmanCodes.decodeNext(HuffmanDecoder.java:321)
>  at org.apache.commons.compress.compressors.deflate64.HuffmanDecoder$HuffmanCodes.read(HuffmanDecoder.java:307)
>  at org.apache.commons.compress.compressors.deflate64.HuffmanDecoder.decode(HuffmanDecoder.java:152)
>  at org.apache.commons.compress.compressors.deflate64.Deflate64CompressorInputStream.read(Deflate64CompressorInputStream.java:84)
>  at org.apache.commons.compress.archivers.zip.ZipArchiveInputStream.read(ZipArchiveInputStream.java:493)
>  at java.base/java.io.InputStream.readNBytes(InputStream.java:396)
>  at java.base/java.io.InputStream.readAllBytes(InputStream.java:333)
>  at kotlinx.fuzzer.tests.apache.zip.ApacheZipTestKt.main(ApacheZipTest.kt:86)
>  at kotlinx.fuzzer.tests.apache.zip.ApacheZipTestKt.main(ApacheZipTest.kt)
> {code:java}
> import org.apache.commons.compress.archivers.ArchiveStreamFactory
> import java.io.ByteArrayInputStream
> import java.io.File
> 
> fun main() {
>     val bytes = File("ArrayIndexOutOfBoundsException1.zip").readBytes()
>     val input = ByteArrayInputStream(bytes)
>     ArchiveStreamFactory().createArchiveInputStream("zip", input).use { ais ->
>         ais.nextEntry
>         ais.readAllBytes()
>     }
> }
> {code}
> Expected some other exception, as IOException is the only one declared.
>  
> This Kotlin code fails with an exception (ArrayIndexOutOfBoundsException2.zip 
> is in the attachments):
> Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: Index -1 out of bounds for length 8192
>  at org.apache.commons.compress.compressors.lzw.LZWInputStream.expandCodeToOutputStack(LZWInputStream.java:232)
>  at org.apache.commons.compress.archivers.zip.UnshrinkingInputStream.decompressNextSymbol(UnshrinkingInputStream.java:124)
>  at org.apache.commons.compress.compressors.lzw.LZWInputStream.read(LZWInputStream.java:80)
>  at org.apache.commons.compress.archivers.zip.ZipArchiveInputStream.read(ZipArchiveInputStream.java:493)
>  at java.base/java.io.InputStream.readNBytes(InputStream.java:396)
>  at java.base/java.io.InputStream.readAllBytes(InputStream.java:333)
>  at kotlinx.fuzzer.tests.apache.zip.ApacheZipTestKt.main(ApacheZipTest.kt:86)
>  at kotlinx.fuzzer.tests.apache.zip.ApacheZipTestKt.main(ApacheZipTest.kt)
> {code:java}
> import org.apache.commons.compress.archivers.ArchiveStreamFactory
> import java.io.ByteArrayInputStream
> import java.io.File
> 
> fun main() {
>     val bytes = File("ArrayIndexOutOfBoundsException2.zip").readBytes()
>     val input = ByteArrayInputStream(bytes)
>     ArchiveStreamFactory().createArchiveInputStream("zip", input).use { ais ->
>         ais.nextEntry
>         ais.readAllBytes()
>     }
> }
> {code}
> Expected some other exception, as IOException is the only one declared.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (COMPRESS-545) Decompression fails with ArrayIndexOutOfBoundsException

2021-03-06 Thread Stefan Bodewig (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Bodewig resolved COMPRESS-545.
-
Resolution: Fixed

> Decompression fails with ArrayIndexOutOfBoundsException
> ---
>
> Key: COMPRESS-545
> URL: https://issues.apache.org/jira/browse/COMPRESS-545
> Project: Commons Compress
>  Issue Type: Bug
>Affects Versions: 1.20
>Reporter: Maksim Zuev
>Priority: Major
> Fix For: 1.21
>
> Attachments: ArrayIndexOutOfBoundsException1.zip, 
> ArrayIndexOutOfBoundsException2.zip
>
>
> This Kotlin code fails with an exception (ArrayIndexOutOfBoundsException1.zip 
> is in the attachments):
> Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: Index 30 out of bounds for length 29
>  at org.apache.commons.compress.compressors.deflate64.HuffmanDecoder$HuffmanCodes.decodeNext(HuffmanDecoder.java:321)
>  at org.apache.commons.compress.compressors.deflate64.HuffmanDecoder$HuffmanCodes.read(HuffmanDecoder.java:307)
>  at org.apache.commons.compress.compressors.deflate64.HuffmanDecoder.decode(HuffmanDecoder.java:152)
>  at org.apache.commons.compress.compressors.deflate64.Deflate64CompressorInputStream.read(Deflate64CompressorInputStream.java:84)
>  at org.apache.commons.compress.archivers.zip.ZipArchiveInputStream.read(ZipArchiveInputStream.java:493)
>  at java.base/java.io.InputStream.readNBytes(InputStream.java:396)
>  at java.base/java.io.InputStream.readAllBytes(InputStream.java:333)
>  at kotlinx.fuzzer.tests.apache.zip.ApacheZipTestKt.main(ApacheZipTest.kt:86)
>  at kotlinx.fuzzer.tests.apache.zip.ApacheZipTestKt.main(ApacheZipTest.kt)
> {code:java}
> import org.apache.commons.compress.archivers.ArchiveStreamFactory
> import java.io.ByteArrayInputStream
> import java.io.File
> 
> fun main() {
>     val bytes = File("ArrayIndexOutOfBoundsException1.zip").readBytes()
>     val input = ByteArrayInputStream(bytes)
>     ArchiveStreamFactory().createArchiveInputStream("zip", input).use { ais ->
>         ais.nextEntry
>         ais.readAllBytes()
>     }
> }
> {code}
> Expected some other exception, as IOException is the only one declared.
>  
> This Kotlin code fails with an exception (ArrayIndexOutOfBoundsException2.zip 
> is in the attachments):
> Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: Index -1 out of bounds for length 8192
>  at org.apache.commons.compress.compressors.lzw.LZWInputStream.expandCodeToOutputStack(LZWInputStream.java:232)
>  at org.apache.commons.compress.archivers.zip.UnshrinkingInputStream.decompressNextSymbol(UnshrinkingInputStream.java:124)
>  at org.apache.commons.compress.compressors.lzw.LZWInputStream.read(LZWInputStream.java:80)
>  at org.apache.commons.compress.archivers.zip.ZipArchiveInputStream.read(ZipArchiveInputStream.java:493)
>  at java.base/java.io.InputStream.readNBytes(InputStream.java:396)
>  at java.base/java.io.InputStream.readAllBytes(InputStream.java:333)
>  at kotlinx.fuzzer.tests.apache.zip.ApacheZipTestKt.main(ApacheZipTest.kt:86)
>  at kotlinx.fuzzer.tests.apache.zip.ApacheZipTestKt.main(ApacheZipTest.kt)
> {code:java}
> import org.apache.commons.compress.archivers.ArchiveStreamFactory
> import java.io.ByteArrayInputStream
> import java.io.File
> 
> fun main() {
>     val bytes = File("ArrayIndexOutOfBoundsException2.zip").readBytes()
>     val input = ByteArrayInputStream(bytes)
>     ArchiveStreamFactory().createArchiveInputStream("zip", input).use { ais ->
>         ais.nextEntry
>         ais.readAllBytes()
>     }
> }
> {code}
> Expected some other exception, as IOException is the only one declared.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (COMPRESS-549) Inconsistency with latest PKZip standard

2021-03-06 Thread Stefan Bodewig (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17296640#comment-17296640
 ] 

Stefan Bodewig commented on COMPRESS-549:
-

If you encounter any file that combines a data descriptor with the stored 
method, then your only safe way is to use {{ZipFile}} - either the java.util 
variety or ours. Anything using a non-seekable stream can run into situations 
where reading fails.
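A short sketch of that safe approach with the Commons Compress {{ZipFile}} (the file name is a placeholder):
{code:java}
try (ZipFile zipFile = new ZipFile(new File("archive.zip"))) {
    final Enumeration<ZipArchiveEntry> entries = zipFile.getEntries();
    while (entries.hasMoreElements()) {
        final ZipArchiveEntry entry = entries.nextElement();
        try (InputStream in = zipFile.getInputStream(entry)) {
            // seekable access reads sizes from the central directory, so a
            // data descriptor combined with the stored method is harmless
        }
    }
}
{code}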

> Inconsistency with latest PKZip standard
> 
>
> Key: COMPRESS-549
> URL: https://issues.apache.org/jira/browse/COMPRESS-549
> Project: Commons Compress
>  Issue Type: Bug
>Affects Versions: 1.20
>Reporter: Sebastian Kürten
>Priority: Major
>
> I came across some Zip archives that cannot be read using 
> ZipArchiveInputStream. Investigating the issue, I found that 
> java.util.zip.ZipInputStream from the JDK shows the same behavior while 
> java.util.zip.ZipFile seems to do fine. For java.util.zip.ZipInputStream, the 
> issue has been reported here: 
> [https://bugs.openjdk.java.net/browse/JDK-8143613] and an example file is 
> provided. I copied the test file into a repository. Here's the 
> example file: 
> [https://github.com/sebkur/test-zip-impls/blob/master/src/test/java/de/topobyte/zip/tests/jdk8143613/TestData.java]
>  and the test that fails reading that data using ZipArchiveInputStream: 
> [https://github.com/sebkur/test-zip-impls/blob/master/src/test/java/de/topobyte/zip/tests/jdk8143613/TestCommonsZipInputStream.java]
>  
>  
> If this file is indeed a ZIP archive consistent with the PKZip spec, I think 
> commons-compress does not work according to the spec. I would appreciate if 
> someone could look into this and verify if this is indeed a bug in 
> commons-compress. Thanks!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (COMPRESS-551) GeneralPurposeBit has no compression level information

2021-03-06 Thread Stefan Bodewig (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17296639#comment-17296639
 ] 

Stefan Bodewig commented on COMPRESS-551:
-

[~kaladhar] do you by chance know which level corresponds to which set of bits 
for deflate? Level 6 is normal, 9 probably is maximum compression, and 1 
probably is "super fast", but which one is fast?

> GeneralPurposeBit has no compression level information
> --
>
> Key: COMPRESS-551
> URL: https://issues.apache.org/jira/browse/COMPRESS-551
> Project: Commons Compress
>  Issue Type: Bug
>Reporter: Kaladhar Reddy Mummadi
>Priority: Major
>
> [https://github.com/apache/commons-compress/blob/rel/1.20/src/main/java/org/apache/commons/compress/archivers/zip/GeneralPurposeBit.java]
> When _putArchiveEntry_ is called and _localHeaders_ are generated, 
> *Compression level* information is always *0.* 
> According to section 4.4.4 in 
> [https://pkware.cachefly.net/webdocs/casestudies/APPNOTE.TXT], *Bit 1 and 2* 
> should consist of compression level info (see the sketch below). 
> But generated zips always have level `0`, even when I change the level using 
> the *setLevel* API on *ZipArchiveOutputStream*.
> Expected behavior is to have level information in the local headers of an entry.
>  
> Compressing of data works fine, just that the correct level is not baked into 
> the localHeader.
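For reference, a sketch of the APPNOTE 4.4.4 semantics for the deflate method; which {{setLevel}} values should map to which bits is exactly the open question in the comment above, so the level notes in the comments are assumptions:
{code:java}
int flag = 0;
if (maximum) {                   // "-ex", presumably level 9
    flag |= 1 << 1;              // bit 1 set
} else if (fast) {               // "-ef"
    flag |= 1 << 2;              // bit 2 set
} else if (superFast) {          // "-es", presumably level 1
    flag |= (1 << 1) | (1 << 2); // bits 1 and 2 set
}
// both bits clear: normal compression (the default, level 6)
{code}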



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (COMPRESS-552) OSGI check broken - try to load class BundleEvent always fails

2021-03-06 Thread Stefan Bodewig (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17296638#comment-17296638
 ] 

Stefan Bodewig commented on COMPRESS-552:
-

I'm afraid the accepted answer of the StackOverflow question would suffer from 
the same problem ({{FrameworkUtil}} probably isn't available either). So the 
only way out would be a Bundle-Activator?

> OSGI check broken - try to load class BundleEvent always fails
> --
>
> Key: COMPRESS-552
> URL: https://issues.apache.org/jira/browse/COMPRESS-552
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Compressors
>Affects Versions: 1.20
>Reporter: Björn Michael
>Priority: Major
>
> There is a check for running in OSGI env. in {{ZstdUtils}}, {{BrotliUtils}}, 
> {{LZMAUtils}} and {{XZUtils}} like this:
> {code:java}
> static {
>     cachedZstdAvailability = CachedAvailability.DONT_CACHE;
>     try {
>         Class.forName("org.osgi.framework.BundleEvent");
>     } catch (final Exception ex) { // NOSONAR
>         setCacheZstdAvailablity(true);
>     }
> }
> {code}
> Loading the class {{org.osgi.framework.BundleEvent}} always fails because the 
> {{Import-Package}} directive for {{org.osgi.framework}} is missing from 
> _MANIFEST.MF_. Otherwise it requires another, more sophisticated approach 
> (https://stackoverflow.com/q/5879040/1061929).
> Tested with Eclipse 4.14 (org.eclipse.osgi_3.15.100.v20191114-1701.jar)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (COMPRESS-557) Have some kind of reset() in LZ77Compressor

2021-03-06 Thread Stefan Bodewig (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17296634#comment-17296634
 ] 

Stefan Bodewig commented on COMPRESS-557:
-

what would be the desired behavior?

> Have some kind of reset() in LZ77Compressor
> ---
>
> Key: COMPRESS-557
> URL: https://issues.apache.org/jira/browse/COMPRESS-557
> Project: Commons Compress
>  Issue Type: New Feature
>Reporter: Peter Lee
>Priority: Minor
>
> It would be useful if we had a _reset()_ in LZ77Compressor.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (COMPRESS-567) IllegalArgumentException in ZipFile.positionAtCentralDirectory

2021-03-06 Thread Stefan Bodewig (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Bodewig updated COMPRESS-567:

Fix Version/s: 1.21

> IllegalArgumentException in ZipFile.positionAtCentralDirectory
> --
>
> Key: COMPRESS-567
> URL: https://issues.apache.org/jira/browse/COMPRESS-567
> Project: Commons Compress
>  Issue Type: Bug
>Affects Versions: 1.20
>Reporter: Fabian Meumertzheim
>Priority: Major
> Fix For: 1.21
>
> Attachments: crash.zip
>
>
> The following snippet of code throws an undeclared IllegalArgumentException:
> {code:java}
> byte[] bytes = Base64.getDecoder().decode("UEsFBgAAAQD//1AAJP9QAA==");
> SeekableInMemoryByteChannel input = new SeekableInMemoryByteChannel(bytes);
> try {
>     ZipFile file = new ZipFile(input);
> } catch (IOException ignored) {}
> {code}
> The stack trace is:
> {noformat}
> java.lang.IllegalArgumentException: Position has to be in range 0.. 2147483647
>   at 
> org.apache.commons.compress.utils.SeekableInMemoryByteChannel.position(SeekableInMemoryByteChannel.java:94)
>   at 
> org.apache.commons.compress.archivers.zip.ZipFile.positionAtCentralDirectory32(ZipFile.java:1128)
>   at 
> org.apache.commons.compress.archivers.zip.ZipFile.positionAtCentralDirectory(ZipFile.java:1037)
>   at 
> org.apache.commons.compress.archivers.zip.ZipFile.populateFromCentralDirectory(ZipFile.java:702)
>   at 
> org.apache.commons.compress.archivers.zip.ZipFile.<init>(ZipFile.java:371)
>   at 
> org.apache.commons.compress.archivers.zip.ZipFile.<init>(ZipFile.java:318)
>   at 
> org.apache.commons.compress.archivers.zip.ZipFile.<init>(ZipFile.java:274)
> {noformat}
> I also attached the input as a ZIP file.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (COMPRESS-565) Regression - Corrupted headers when using 64 bit ZipArchiveOutputStream

2021-03-06 Thread Stefan Bodewig (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Bodewig updated COMPRESS-565:

Fix Version/s: 1.21

> Regression - Corrupted headers when using 64 bit ZipArchiveOutputStream
> ---
>
> Key: COMPRESS-565
> URL: https://issues.apache.org/jira/browse/COMPRESS-565
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Archivers
>Affects Versions: 1.20
>Reporter: Evgenii Bovykin
>Assignee: Peter Lee
>Priority: Major
> Fix For: 1.21
>
> Attachments: commons-compress-1.21-SNAPSHOT.jar, 
> image-2021-02-20-15-51-21-747.png
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> We've recently updated the commons-compress library from version 1.9 to 1.20 and 
> are now experiencing a problem that didn't occur before.
>  
> When using ZipArchiveOutputStream to archive a 5 GB file and setting the 
> following fields
> {{output.setUseZip64(Zip64Mode.Always)}}
>  
> {{output.setCreateUnicodeExtraFields(ZipArchiveOutputStream.UnicodeExtraFieldPolicy.ALWAYS)}}
> the resulting archive contains corrupted headers.
> *The Expand-Archive PowerShell utility cannot extract the archive at all, 
> reporting a corrupted header. 7-Zip also complains about it, but can extract 
> the archive.*
>  
> The problem didn't appear with library version 1.9.
>  
> I've created a sample project that reproduces the error - 
> [https://github.com/missingdays/commons-compress-example]
> The issue doesn't reproduce if you do any of the following:
>  
>  # Downgrade library to version 1.9
>  # Remove 
> output.setCreateUnicodeExtraFields(ZipArchiveOutputStream.UnicodeExtraFieldPolicy.ALWAYS)
>  # Remove output.setUseZip64(Zip64Mode.Always) and zip smaller file (e.g. 1Gb)
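For reference, a condensed sketch of the reproducer described above (file 
names and sizes are illustrative; the linked sample project is authoritative):
{code:java}
try (ZipArchiveOutputStream output =
        new ZipArchiveOutputStream(new File("big.zip"))) {
    output.setUseZip64(Zip64Mode.Always);
    output.setCreateUnicodeExtraFields(
            ZipArchiveOutputStream.UnicodeExtraFieldPolicy.ALWAYS);
    output.putArchiveEntry(new ZipArchiveEntry("payload.bin"));
    // ~5 GB of input per the report; smaller files do not reproduce it
    Files.copy(Paths.get("payload.bin"), output);
    output.closeArchiveEntry();
}
{code}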



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (COMPRESS-569) OutOfMemoryError on a crafted tar file

2021-03-06 Thread Stefan Bodewig (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17296633#comment-17296633
 ] 

Stefan Bodewig commented on COMPRESS-569:
-

Your test now quickly throws an {{IOException}} as of commit 5c5f8a89.

Interestingly, for dump archives some of our valid test cases seem to contain 
negative sizes; I'll have to look into this more closely.

> OutOfMemoryError on a crafted tar file
> --
>
> Key: COMPRESS-569
> URL: https://issues.apache.org/jira/browse/COMPRESS-569
> Project: Commons Compress
>  Issue Type: Bug
>Affects Versions: 1.21
>Reporter: Fabian Meumertzheim
>Priority: Blocker
> Attachments: TarFileTimeout.java, timeout.tar
>
>
> Apache Commons Compress at commit
> https://github.com/apache/commons-compress/commit/1b7528fbd6295a3958daf1b1114621ee5e40e83c
>  throws an OutOfMemoryError after consuming ~5 minutes of CPU on my
> machine on a crafted tar archive that is less than a KiB in size:
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
> at java.base/sun.nio.cs.UTF_8.newDecoder(UTF_8.java:70)
> at 
> org.apache.commons.compress.archivers.zip.NioZipEncoding.newDecoder(NioZipEncoding.java:182)
> at 
> org.apache.commons.compress.archivers.zip.NioZipEncoding.decode(NioZipEncoding.java:135)
> at 
> org.apache.commons.compress.archivers.tar.TarUtils.parseName(TarUtils.java:311)
> at 
> org.apache.commons.compress.archivers.tar.TarUtils.parseName(TarUtils.java:275)
> at 
> org.apache.commons.compress.archivers.tar.TarArchiveEntry.parseTarHeader(TarArchiveEntry.java:1550)
> at 
> org.apache.commons.compress.archivers.tar.TarArchiveEntry.<init>(TarArchiveEntry.java:554)
> at 
> org.apache.commons.compress.archivers.tar.TarArchiveEntry.<init>(TarArchiveEntry.java:570)
> at 
> org.apache.commons.compress.archivers.tar.TarFile.getNextTarEntry(TarFile.java:250)
> at org.apache.commons.compress.archivers.tar.TarFile.<init>(TarFile.java:211)
> at org.apache.commons.compress.archivers.tar.TarFile.<init>(TarFile.java:94)
> at TarFileTimeout.main(TarFileTimeout.java:22)
> I attached both the tar file and a Java reproducer for this issue.
> Citing Stefan Bodewig's analysis of this issue:
> Your archive contains an entry with a claimed size of -512 bytes. When
> TarFile reads entries, it tries to skip the content of the entry, as it is
> only interested in meta data on a first scan and does so by positioning
> the input stream right after the data of the current entry. In this case
> it positions it 512 bytes backwards right at the start of the current
> entry's meta data again. This leads to an infinite loop that reads a new
> entry, stores the meta data, repositions the stream and starts over
> again. Over time the list of collected meta data eats up all available
> memory.
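With the fix in place the negative size is rejected while the header is 
parsed. A minimal sketch of that kind of validation (the method name below is 
illustrative, not necessarily what commit 5c5f8a89 actually adds):
{code:java}
// Reject negative entry sizes up front, so TarFile never repositions
// the stream backwards into the entry it has just read.
static long checkedEntrySize(final long parsedSize) throws IOException {
    if (parsedSize < 0) {
        throw new IOException("broken archive, entry with negative size: "
                + parsedSize);
    }
    return parsedSize;
}
{code}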



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [commons-imaging] kinow commented on pull request #114: imaging.color extension/fix. More precision, DIN99b + DIN99o added to ColorConversion, division by zero error fixed.

2021-03-06 Thread GitBox


kinow commented on pull request #114:
URL: https://github.com/apache/commons-imaging/pull/114#issuecomment-792027295


   Sorry for the delay @Brixomatic, I understand the frustration, but there 
are only volunteers reviewing the project, and life sometimes gets in the way 
:-) Any time an issue takes too long to be reviewed, merged, or to receive 
feedback, just send something like a "ping" message and a committer will 
normally check why it's taking so long.
   
   Thanks
   Bruno



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [commons-imaging] Brixomatic commented on pull request #114: imaging.color extension/fix. More precision, DIN99b + DIN99o added to ColorConversion, division by zero error fixed.

2021-03-06 Thread GitBox


Brixomatic commented on pull request #114:
URL: https://github.com/apache/commons-imaging/pull/114#issuecomment-792015865


   Last chance to ask for any change. 
   All I can do is offer my contribution, but I won't run after it anymore.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (CODEC-296) Add support for Blake3 family of hashes

2021-03-06 Thread Matt Sicker (Jira)
Matt Sicker created CODEC-296:
-

 Summary: Add support for Blake3 family of hashes
 Key: CODEC-296
 URL: https://issues.apache.org/jira/browse/CODEC-296
 Project: Commons Codec
  Issue Type: New Feature
Reporter: Matt Sicker
Assignee: Matt Sicker


Brief historical context: the original Blake hash algorithm was a finalist in 
the SHA-3 competition. The second version, [Blake2|https://www.blake2.net/], 
has several variants; the most popular seem to be Blake2b and Blake2s, though 
there are a few others tuned and tweaked for different use cases. This brings 
us to [Blake3|https://github.com/BLAKE3-team/BLAKE3], the third version of the 
hash algorithm, which unifies the variants into a single interface along with 
further tuned security parameters for increased performance with negligible 
impact on security.

Blake3 is extremely versatile and offers interfaces to use it as a message 
digest (hash) function, pseudorandom function (PRF), a message authentication 
code (MAC), a key derivation function (KDF), and an extensible output function 
(XOF). It is also faster than MD5 and SHA-1 (and SHA-2 and SHA-3) while 
remaining secure, which makes it handy for less security-intensive contexts, 
too, such as file hashing.

I've previously ported the reference public-domain Rust implementation to Java 
for another use case, so the algorithm itself is pretty simple to add here. 
This library doesn't define any interfaces for these concerns; the Java crypto 
API does have equivalents for most of those use cases, but they might be 
overkill for this.
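To make the proposal concrete, a sketch of how a plain-hash call might look; 
the {{Blake3}} class and its methods below are hypothetical placeholders for 
whatever API the port ends up exposing:
{code:java}
// Hypothetical fluent usage: hash a message and request a 32-byte
// digest; as an XOF, any output length could be asked for.
byte[] digest = Blake3.initHash()
        .update("hello, world".getBytes(StandardCharsets.UTF_8))
        .doFinalize(32);
{code}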



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (COMPRESS-566) make gzip deflate buffer size configurable

2021-03-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-566?focusedWorklogId=561827=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-561827
 ]

ASF GitHub Bot logged work on COMPRESS-566:
---

Author: ASF GitHub Bot
Created on: 06/Mar/21 18:28
Start Date: 06/Mar/21 18:28
Worklog Time Spent: 10m 
  Work Description: bokken commented on pull request #168:
URL: https://github.com/apache/commons-compress/pull/168#issuecomment-792008765


   Do I need to do anything else here @garydgregory ?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 561827)
Time Spent: 1h 20m  (was: 1h 10m)

> make gzip deflate buffer size configurable
> --
>
> Key: COMPRESS-566
> URL: https://issues.apache.org/jira/browse/COMPRESS-566
> Project: Commons Compress
>  Issue Type: Improvement
>  Components: Compressors
>Reporter: Brett Okken
>Priority: Minor
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> The deflateBuffer in GzipCompressorOutputStream is hardcoded to 512.
> It would be good if this could be configurable in GzipParameters.
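A short sketch of how the knob could be used once exposed ({{setBufferSize}} 
mirrors the intent of PR #168; treat the exact name as tentative until merged):
{code:java}
// Configure a larger deflate buffer instead of the hardcoded 512 bytes.
GzipParameters parameters = new GzipParameters();
parameters.setBufferSize(8192); // tentative setter from the PR
try (OutputStream fileOut = Files.newOutputStream(Paths.get("out.gz"));
     GzipCompressorOutputStream gzOut =
             new GzipCompressorOutputStream(fileOut, parameters)) {
    gzOut.write("payload".getBytes(StandardCharsets.UTF_8));
}
{code}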



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [commons-compress] bokken commented on pull request #168: COMPRESS-566 allow gzip buffer size to be configured

2021-03-06 Thread GitBox


bokken commented on pull request #168:
URL: https://github.com/apache/commons-compress/pull/168#issuecomment-792008765


   Do I need to do anything else here @garydgregory ?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (COMPRESS-569) OutOfMemoryError on a crafted tar file

2021-03-06 Thread Stefan Bodewig (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17296614#comment-17296614
 ] 

Stefan Bodewig commented on COMPRESS-569:
-

[~Meumertzheim], can I add your tar archive as a test case to our source tree?

> OutOfMemoryError on a crafted tar file
> --
>
> Key: COMPRESS-569
> URL: https://issues.apache.org/jira/browse/COMPRESS-569
> Project: Commons Compress
>  Issue Type: Bug
>Affects Versions: 1.21
>Reporter: Fabian Meumertzheim
>Priority: Blocker
> Attachments: TarFileTimeout.java, timeout.tar
>
>
> Apache Commons Compress at commit
> https://github.com/apache/commons-compress/commit/1b7528fbd6295a3958daf1b1114621ee5e40e83c
>  throws an OutOfMemoryError after consuming ~5 minutes of CPU on my
> machine on a crafted tar archive that is less than a KiB in size:
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
> at java.base/sun.nio.cs.UTF_8.newDecoder(UTF_8.java:70)
> at 
> org.apache.commons.compress.archivers.zip.NioZipEncoding.newDecoder(NioZipEncoding.java:182)
> at 
> org.apache.commons.compress.archivers.zip.NioZipEncoding.decode(NioZipEncoding.java:135)
> at 
> org.apache.commons.compress.archivers.tar.TarUtils.parseName(TarUtils.java:311)
> at 
> org.apache.commons.compress.archivers.tar.TarUtils.parseName(TarUtils.java:275)
> at 
> org.apache.commons.compress.archivers.tar.TarArchiveEntry.parseTarHeader(TarArchiveEntry.java:1550)
> at 
> org.apache.commons.compress.archivers.tar.TarArchiveEntry.<init>(TarArchiveEntry.java:554)
> at 
> org.apache.commons.compress.archivers.tar.TarArchiveEntry.<init>(TarArchiveEntry.java:570)
> at 
> org.apache.commons.compress.archivers.tar.TarFile.getNextTarEntry(TarFile.java:250)
> at org.apache.commons.compress.archivers.tar.TarFile.<init>(TarFile.java:211)
> at org.apache.commons.compress.archivers.tar.TarFile.<init>(TarFile.java:94)
> at TarFileTimeout.main(TarFileTimeout.java:22)
> I attached both the tar file and a Java reproducer for this issue.
> Citing Stefan Bodewig's analysis of this issue:
> Your archive contains an entry with a claimed size of -512 bytes. When
> TarFile reads entries, it tries to skip the content of the entry, as it is
> only interested in meta data on a first scan and does so by positioning
> the input stream right after the data of the current entry. In this case
> it positions it 512 bytes backwards right at the start of the current
> entry's meta data again. This leads to an infinite loop that reads a new
> entry, stores the meta data, repositions the stream and starts over
> again. Over time the list of collected meta data eats up all available
> memory.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (COMPRESS-569) OutOfMemoryError on a crafted tar file

2021-03-06 Thread Stefan Bodewig (Jira)


[ 
https://issues.apache.org/jira/browse/COMPRESS-569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17296604#comment-17296604
 ] 

Stefan Bodewig commented on COMPRESS-569:
-

I see two issues here:

* we should never accept a negative size for a {{TarArchiveEntry}} we read; 
instead we should throw an exception to signal a broken archive. I'll take the 
opportunity to verify that we deal with negative sizes (and potentially other 
numeric values that are supposed to fall into a specific range) for the other 
archive types as well.
* we should check that we are not moving backwards in {{TarFile}} - this one 
can be fixed quickly (a minimal sketch follows below).
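A minimal sketch of that second check, with illustrative names 
({{dataOffset}}, {{entry}} and {{archive}} stand in for the actual fields):
{code:java}
// Before skipping the current entry's data, refuse to reposition the
// channel behind its current position; a negative entry size would
// otherwise send us back into the header we just read.
final long nextEntryOffset = dataOffset + entry.getSize();
if (nextEntryOffset < archive.position()) {
    throw new IOException("broken archive, entry would move the stream backwards");
}
archive.position(nextEntryOffset);
{code}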

> OutOfMemoryError on a crafted tar file
> --
>
> Key: COMPRESS-569
> URL: https://issues.apache.org/jira/browse/COMPRESS-569
> Project: Commons Compress
>  Issue Type: Bug
>Affects Versions: 1.21
>Reporter: Fabian Meumertzheim
>Priority: Blocker
> Attachments: TarFileTimeout.java, timeout.tar
>
>
> Apache Commons Compress at commit
> https://github.com/apache/commons-compress/commit/1b7528fbd6295a3958daf1b1114621ee5e40e83c
>  throws an OutOfMemoryError after consuming ~5 minutes of CPU on my
> machine on a crafted tar archive that is less than a KiB in size:
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
> at java.base/sun.nio.cs.UTF_8.newDecoder(UTF_8.java:70)
> at 
> org.apache.commons.compress.archivers.zip.NioZipEncoding.newDecoder(NioZipEncoding.java:182)
> at 
> org.apache.commons.compress.archivers.zip.NioZipEncoding.decode(NioZipEncoding.java:135)
> at 
> org.apache.commons.compress.archivers.tar.TarUtils.parseName(TarUtils.java:311)
> at 
> org.apache.commons.compress.archivers.tar.TarUtils.parseName(TarUtils.java:275)
> at 
> org.apache.commons.compress.archivers.tar.TarArchiveEntry.parseTarHeader(TarArchiveEntry.java:1550)
> at 
> org.apache.commons.compress.archivers.tar.TarArchiveEntry.<init>(TarArchiveEntry.java:554)
> at 
> org.apache.commons.compress.archivers.tar.TarArchiveEntry.<init>(TarArchiveEntry.java:570)
> at 
> org.apache.commons.compress.archivers.tar.TarFile.getNextTarEntry(TarFile.java:250)
> at org.apache.commons.compress.archivers.tar.TarFile.<init>(TarFile.java:211)
> at org.apache.commons.compress.archivers.tar.TarFile.<init>(TarFile.java:94)
> at TarFileTimeout.main(TarFileTimeout.java:22)
> I attached both the tar file and a Java reproducer for this issue.
> Citing Stefan Bodewig's analysis of this issue:
> Your archive contains an entry with a claimed size of -512 bytes. When
> TarFile reads entries, it tries to skip the content of the entry, as it is
> only interested in meta data on a first scan and does so by positioning
> the input stream right after the data of the current entry. In this case
> it positions it 512 bytes backwards right at the start of the current
> entry's meta data again. This leads to an infinite loop that reads a new
> entry, stores the meta data, repositions the stream and starts over
> again. Over time the list of collected meta data eats up all available
> memory.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [commons-compress] coveralls commented on pull request #175: Add EOS constant. The end of the stream or the file

2021-03-06 Thread GitBox


coveralls commented on pull request #175:
URL: https://github.com/apache/commons-compress/pull/175#issuecomment-791961042


   
   [![Coverage 
Status](https://coveralls.io/builds/37707676/badge)](https://coveralls.io/builds/37707676)
   
   Coverage remained the same at 87.361% when pulling 
**e8e67c469976a46b44b5680b689f8861da96bb6d on 
arturobernalg:feature/end_of_stream** into 
**46eab6b1e90d9dd6c4f7898f41ff4a05ef68b0da on apache:master**.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [commons-compress] arturobernalg opened a new pull request #175: Add EOS constant. The end of the stream or the file

2021-03-06 Thread GitBox


arturobernalg opened a new pull request #175:
URL: https://github.com/apache/commons-compress/pull/175


   The index value returned when the end of the stream or the file has been reached: -1.
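   For context, the kind of constant the title describes (a sketch; the PR's 
actual name and placement may differ):
{code:java}
/** Returned by read() when the end of the stream or the file is reached. */
public static final int EOF = -1;
{code}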



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [commons-build-plugin] garydgregory merged pull request #26: Bump spotbugs from 4.2.1 to 4.2.2

2021-03-06 Thread GitBox


garydgregory merged pull request #26:
URL: https://github.com/apache/commons-build-plugin/pull/26


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (TEXT-195) TeamRockie is an online blog that revolves around tech, lifestyle, internet and everything related to it.

2021-03-06 Thread Bruno P. Kinoshita (Jira)


 [ 
https://issues.apache.org/jira/browse/TEXT-195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno P. Kinoshita closed TEXT-195.
---
Resolution: Invalid

>  TeamRockie is an online blog that revolves around tech, lifestyle, internet 
> and everything related to it.
> --
>
> Key: TEXT-195
> URL: https://issues.apache.org/jira/browse/TEXT-195
> Project: Commons Text
>  Issue Type: New Feature
>Reporter: Stella Willow
>Priority: Major
>
> I am Stella Willow from Canada. I am working in Team Rockie. It provides 
> people high quality content on different topics. For more information, visit 
> our website... [https://teamrockie.com/]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)