[jira] [Work logged] (COMPRESS-124) Unable to extract sparse entries from tar archives

2019-12-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-124?focusedWorklogId=355573&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-355573
 ]

ASF GitHub Bot logged work on COMPRESS-124:
---

Author: ASF GitHub Bot
Created on: 07/Dec/19 04:58
Start Date: 07/Dec/19 04:58
Worklog Time Spent: 10m 
  Work Description: PeterAlfreadLee commented on issue #87: COMPRESS-124 : 
Add support for extracting sparse entries from tar archives
URL: https://github.com/apache/commons-compress/pull/87#issuecomment-562814493
 
 
   > I am wondering if our test cases are enough of a safety net.
   
   I agree with this. I will try to add more test cases.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 355573)
Time Spent: 5h  (was: 4h 50m)

> Unable to extract sparse entries from tar archives
> 
>
> Key: COMPRESS-124
> URL: https://issues.apache.org/jira/browse/COMPRESS-124
> Project: Commons Compress
>  Issue Type: New Feature
>  Components: Archivers
>Affects Versions: 1.1, 1.2
> Environment: Platform independent. However, I'm currently using 
> Windows 7 Enterprise.
>Reporter: Patrick Dreyer
>Priority: Major
>  Labels: tar
> Fix For: 1.20
>
> Attachments: gnuSparseFile.patch
>
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> Good news first: I already have the patch ready for that.
> I got several TAR files which I could not extract with any of the existing 
> Java implementations, but I could extract all of those TAR files successfully 
> with GNU tar.
> It turned out that all the failing TAR files contained so-called sparse 
> files. Investigating the source code of all existing Java TAR implementations 
> showed me that none of them even recognizes the existence of GNU sparse 
> entries.
> Actually, I don't need to process any of the contained sparse files; I'm 
> happy if I'm at least able to correctly untar all the non-sparse files. 
> Thus, it would be sufficient to recognize sparse files without the need to 
> correctly un-sparse them while extracting. As long as all non-sparse files 
> get extracted correctly, I'm fine.
> The TAR files in question have all been VMware Diagnostic File bundles.
> See 
> http://kb.vmware.com/selfservice/microsites/search.do?language=en_US=displayKC=653
>  to learn how to get them.
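For readers unfamiliar with the format: a GNU sparse entry stores only the non-hole data, plus a map of (offset, numbytes) pairs saying where each data chunk belongs in the real file. A minimal, hypothetical sketch of "un-sparsing" such an entry follows; names like SparseChunk are illustrative only, and the actual patch and Commons Compress API differ.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.List;

public class SparseInflater {
    /** One (offset, numbytes) pair from the sparse map. */
    static final class SparseChunk {
        final long offset;
        final long numbytes;
        SparseChunk(long offset, long numbytes) {
            this.offset = offset;
            this.numbytes = numbytes;
        }
    }

    /**
     * Rebuilds the full file: holes stay as zero bytes, and each packed
     * chunk is copied to its recorded offset.
     */
    static byte[] inflate(InputStream packed, List<SparseChunk> map, long realSize)
            throws IOException {
        byte[] out = new byte[(int) realSize]; // zero-filled, so holes need no work
        for (SparseChunk c : map) {
            int read = 0;
            while (read < c.numbytes) {
                int n = packed.read(out, (int) (c.offset + read), (int) (c.numbytes - read));
                if (n < 0) {
                    throw new IOException("truncated sparse data");
                }
                read += n;
            }
        }
        return out;
    }

    public static void main(String[] args) throws IOException {
        // Packed data "ABCD": "AB" belongs at offset 2, "CD" at offset 6; real size 8.
        byte[] packed = "ABCD".getBytes();
        List<SparseChunk> map = List.of(new SparseChunk(2, 2), new SparseChunk(6, 2));
        byte[] full = inflate(new ByteArrayInputStream(packed), map, 8);
        System.out.println(new String(full).replace('\0', '.')); // prints "..AB..CD"
    }
}
```

This also shows why merely *recognizing* sparse entries is the easy half of the problem: skipping them only requires the map's total packed size, while re-inflating requires seeking writes.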



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (COMPRESS-124) Unable to extract sparse entries from tar archives

2019-12-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-124?focusedWorklogId=355572&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-355572
 ]

ASF GitHub Bot logged work on COMPRESS-124:
---

Author: ASF GitHub Bot
Created on: 07/Dec/19 04:57
Start Date: 07/Dec/19 04:57
Worklog Time Spent: 10m 
  Work Description: PeterAlfreadLee commented on pull request #87: 
COMPRESS-124 : Add support for extracting sparse entries from tar archives
URL: https://github.com/apache/commons-compress/pull/87#discussion_r355100439
 
 

 ##
 File path: 
src/main/java/org/apache/commons/compress/archivers/tar/TarArchiveInputStream.java
 ##
 @@ -26,14 +26,14 @@
 import java.io.ByteArrayOutputStream;
 import java.io.IOException;
 import java.io.InputStream;
-import java.util.HashMap;
-import java.util.Map;
+import java.util.*;
 
 Review comment:
  Agree. Sorry for my carelessness.
 



Issue Time Tracking
---

Worklog Id: (was: 355572)
Time Spent: 4h 50m  (was: 4h 40m)








[jira] [Work logged] (COMPRESS-124) Unable to extract sparse entries from tar archives

2019-12-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-124?focusedWorklogId=355571&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-355571
 ]

ASF GitHub Bot logged work on COMPRESS-124:
---

Author: ASF GitHub Bot
Created on: 07/Dec/19 04:56
Start Date: 07/Dec/19 04:56
Worklog Time Spent: 10m 
  Work Description: PeterAlfreadLee commented on pull request #87: 
COMPRESS-124 : Add support for extracting sparse entries from tar archives
URL: https://github.com/apache/commons-compress/pull/87#discussion_r355100382
 
 

 ##
 File path: 
src/main/java/org/apache/commons/compress/archivers/tar/TarArchiveInputStream.java
 ##
 @@ -516,6 +713,22 @@ private void paxHeaders() throws IOException{
     final String value = new String(rest, 0, restLen - 1, CharsetNames.UTF_8);
     headers.put(keyword, value);
+
+    // for 0.0 PAX headers
+    if (keyword.equals("GNU.sparse.offset")) {
+        sparseHeader = new TarArchiveStructSparse(Long.parseLong(value), 0);
 
 Review comment:
  Agree. There are some other keys handled in `TarArchiveEntry`, for example:
   
   
[TarArchiveEntry](https://github.com/apache/commons-compress/blob/master/src/main/java/org/apache/commons/compress/archivers/tar/TarArchiveEntry.java#L1126-L1129)
   
  I'm wondering whether the existing code needs a refactor.
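For context, the 0.0 PAX sparse format repeats a GNU.sparse.offset record followed by a GNU.sparse.numbytes record once per data chunk, so the pairs have to be collected while the header records stream past (a plain Map would collapse the repeated keys). Below is a hedged sketch under those assumptions; SparseChunk is an illustrative stand-in, not the real TarArchiveStructSparse API.

```java
import java.util.ArrayList;
import java.util.List;

public class PaxSparseParser {
    /** Illustrative stand-in for a (offset, numbytes) sparse struct. */
    static final class SparseChunk {
        final long offset;
        final long numbytes;
        SparseChunk(long offset, long numbytes) {
            this.offset = offset;
            this.numbytes = numbytes;
        }
    }

    /**
     * Pairs repeated GNU.sparse.offset / GNU.sparse.numbytes records in
     * encounter order. Records are (key, value) pairs as read from the
     * PAX extended header, before any key deduplication.
     */
    static List<SparseChunk> parse(List<String[]> records) {
        List<SparseChunk> chunks = new ArrayList<>();
        long pendingOffset = -1;
        for (String[] kv : records) {
            if ("GNU.sparse.offset".equals(kv[0])) {
                pendingOffset = Long.parseLong(kv[1]);
            } else if ("GNU.sparse.numbytes".equals(kv[0])) {
                if (pendingOffset < 0) {
                    throw new IllegalArgumentException("numbytes without preceding offset");
                }
                chunks.add(new SparseChunk(pendingOffset, Long.parseLong(kv[1])));
                pendingOffset = -1;
            }
        }
        return chunks;
    }

    public static void main(String[] args) {
        List<String[]> records = List.of(
                new String[] {"GNU.sparse.offset", "0"},
                new String[] {"GNU.sparse.numbytes", "512"},
                new String[] {"GNU.sparse.offset", "4096"},
                new String[] {"GNU.sparse.numbytes", "1024"});
        for (SparseChunk c : parse(records)) {
            System.out.println(c.offset + "+" + c.numbytes);
        }
    }
}
```

The pairing-state check is the part a Map-based approach cannot express, which is one argument for handling these keys inline as the review comment above discusses.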
 



Issue Time Tracking
---

Worklog Id: (was: 355571)
Time Spent: 4h 40m  (was: 4.5h)






















[jira] [Commented] (GEOMETRY-69) BSPTreeVisitor stop visit

2019-12-06 Thread Matt Juntunen (Jira)


[ 
https://issues.apache.org/jira/browse/GEOMETRY-69?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16990302#comment-16990302
 ] 

Matt Juntunen commented on GEOMETRY-69:
---

Sounds good. The PR is updated.

> BSPTreeVisitor stop visit
> -
>
> Key: GEOMETRY-69
> URL: https://issues.apache.org/jira/browse/GEOMETRY-69
> Project: Apache Commons Geometry
>  Issue Type: Improvement
>Reporter: Matt Juntunen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Update the BSPTreeVisitor interface to allow implementations to stop the node 
> visit process when desired. The current implementation always visits all 
> nodes in the tree. The JDK {{FileVisitor}} interface might be a good 
> reference.
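The FileVisitor-style design mentioned in the description can be sketched as a visit callback that returns a result the traversal checks after each node. This is illustrative only, not the actual commons-geometry API; the names Result, Visitor, and visitAll are assumptions.

```java
import java.util.List;

public class StoppableTraversal {
    /** Mirrors the spirit of the JDK's FileVisitResult: keep going or stop. */
    enum Result { CONTINUE, TERMINATE }

    interface Visitor<T> {
        Result visit(T node);
    }

    /**
     * Visits items in order and stops as soon as a visitor asks to.
     * Returns how many nodes were actually visited.
     */
    static <T> int visitAll(Iterable<T> nodes, Visitor<T> visitor) {
        int visited = 0;
        for (T node : nodes) {
            visited++;
            if (visitor.visit(node) == Result.TERMINATE) {
                break;
            }
        }
        return visited;
    }

    public static void main(String[] args) {
        // Stop at the first value greater than 2: visits 1, 2, 3 and then stops.
        int visited = visitAll(List.of(1, 2, 3, 4, 5),
                n -> n > 2 ? Result.TERMINATE : Result.CONTINUE);
        System.out.println(visited); // prints 3
    }
}
```

Returning an enum rather than a boolean leaves room for richer results later (for example, skipping a subtree), which is exactly the flexibility FileVisitResult provides.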





[jira] [Commented] (CODEC-265) java.lang.NegativeArraySizeException

2019-12-06 Thread Alex Herbert (Jira)


[ 
https://issues.apache.org/jira/browse/CODEC-265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16990287#comment-16990287
 ] 

Alex Herbert commented on CODEC-265:


{quote}Would that make it possible to base64 files up to some % of the RAM 
available ?
{quote}
When using the static function {{byte[] b = Base64.encode(byte[])}} you can 
encode anything that will fit in the limit of a byte[].

This is approximately 2^31 bytes. Given that you need 4 output characters per 3 
input bytes, this means you can encode a single array of up to 3/4 of 2 GB 
(roughly 1.5 GiB of input).

If you want to encode more than that, the result will not fit in a single array 
in memory, and there is no support for splitting arrays into chunks. You can 
instead use the streaming API to write the encoded bytes to a stream, backed by 
whatever stream implementation you want to accept them. This could even be 
unbounded if bytes are fed to the streaming API indefinitely.
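The streaming idea can be sketched with the JDK's java.util.Base64 encoder, used here purely for illustration (commons-codec provides its own stream classes for the same purpose): memory use stays bounded by the chunk buffer rather than by the input size.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.Base64;

public class StreamingBase64 {
    /** Encodes input in fixed-size chunks; only one chunk is in flight at a time. */
    static void encodeTo(byte[] input, OutputStream sink) throws IOException {
        try (OutputStream b64 = Base64.getEncoder().wrap(sink)) {
            int chunk = 8192;
            for (int pos = 0; pos < input.length; pos += chunk) {
                b64.write(input, pos, Math.min(chunk, input.length - pos));
            }
        } // closing the wrapper flushes the final padding (and closes the sink)
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        encodeTo("any carnal pleasure".getBytes(), out);
        System.out.println(out); // prints "YW55IGNhcm5hbCBwbGVhc3VyZQ=="
    }
}
```

In practice the sink would be a FileOutputStream or an HTTP request body, so a multi-gigabyte input never has to exist as one encoded byte[].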

 

>   java.lang.NegativeArraySizeException
> --
>
> Key: CODEC-265
> URL: https://issues.apache.org/jira/browse/CODEC-265
> Project: Commons Codec
>  Issue Type: Bug
>Affects Versions: 1.13
> Environment: Linux = Ubuntu 18.04.3 LTS
> JDK = 1.8
>  
>Reporter: Ingimar
>Assignee: Alex Herbert
>Priority: Critical
> Fix For: 1.14
>
> Attachments: NewClientEncodePost.java, Util.java, pom.xml
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Hi,
> trying to encode a file that is 1 GB in size.
> ( linux :
> {code:java}
> fallocate -l 1GB 1gb.zip{code}
> )
> I want to post that file to a RESTful-service, package in JSON.
> *here is the code* 
>  
>  
> {code:java}
> String filePath = "/tmp/1gb.zip";
> System.out.println("\t Post to  : ".concat(URL));
>  System.out.println("\t file : ".concat(filePath));
> Path path = Paths.get(filePath);
>  byte[] bArray = Files.readAllBytes(path);
> // testing commons codec 1.16 (2019-11-05)
>  byte[] encodeBase64 = Base64.encodeBase64(bArray);
> final String contentToBeSaved = new String(encodeBase64);
> HttpClient client = HttpClientBuilder.create().build();
>  HttpResponse response = null;
> JSONObject metadata = new JSONObject();
>  metadata.put("owner", "Ingo");
>  metadata.put("access", "public");
>  metadata.put("licenseType", "CC BY");
>  metadata.put("fileName", "fileName");
>  metadata.put("fileDataBase64", contentToBeSaved);
> String metadataFormatted = 
> StringEscapeUtils.unescapeJavaScript(metadata.toString());
> StringEntity entity = new StringEntity(metadataFormatted, 
> ContentType.APPLICATION_JSON);
> HttpPost post = new HttpPost(URL);
>  post.setEntity(entity);
>  response = client.execute(post);
>  HttpEntity responseEntity = response.getEntity();
> String responseFromMediaserver = EntityUtils.toString(responseEntity, 
> "UTF-8");
>  System.out.println("\n");
>  System.out.println("Response is : " + responseFromMediaserver);
> JSONObject json = new JSONObject(responseFromMediaserver);
>  String uuid = json.getString("uuid");
>  System.out.println("UUID is " + uuid);
> {code}
>  
>  
>  # mvn clean package
>  #   java -Xms512m -Xmx20480m -jar target/mediaClient.jar 
> The crash is in
>  
> {code:java}
> byte[] encodeBase64 = Base64.encodeBase64(bArray);{code}
>  
> the stacktrace is :
> {code:java}
>  
> Starting NewClientEncodePost
>  Post to : http://127.0.0.1:8080/MediaServerResteasy/media
>  file : /tmp/1gb.zip
> Exception in thread "main" java.lang.NegativeArraySizeException
>  at 
> org.apache.commons.codec.binary.BaseNCodec.resizeBuffer(BaseNCodec.java:253)
>  at 
> org.apache.commons.codec.binary.BaseNCodec.ensureBufferSize(BaseNCodec.java:269)
>  at org.apache.commons.codec.binary.Base64.encode(Base64.java:380)
>  at org.apache.commons.codec.binary.BaseNCodec.encode(BaseNCodec.java:451)
>  at org.apache.commons.codec.binary.BaseNCodec.encode(BaseNCodec.java:430)
>  at org.apache.commons.codec.binary.Base64.encodeBase64(Base64.java:679)
>  at org.apache.commons.codec.binary.Base64.encodeBase64(Base64.java:642)
>  at org.apache.commons.codec.binary.Base64.encodeBase64(Base64.java:623)
>  at org.apache.commons.codec.binary.Base64.encodeBase64(Base64.java:556)
>  at 
> se.nrm.bio.mediaserver.testing.base64.NewClientEncodePost.posting(NewClientEncodePost.java:55)
>  at 
> se.nrm.bio.mediaserver.testing.base64.NewClientEncodePost.main(NewClientEncodePost.java:38)
>  
> {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [commons-bcel] mernst commented on issue #37: Improve documentation of Pass3bVerifier

2019-12-06 Thread GitBox
mernst commented on issue #37: Improve documentation of Pass3bVerifier
URL: https://github.com/apache/commons-bcel/pull/37#issuecomment-562795956
 
 
   @garydgregory I made the changes you requested, both where you pointed them 
out and elsewhere.
   This is ready for you to take another look.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [commons-imaging] coveralls edited a comment on issue #62: Add disposal method to GIF metadata

2019-12-06 Thread GitBox
coveralls edited a comment on issue #62: Add disposal method to GIF metadata
URL: https://github.com/apache/commons-imaging/pull/62#issuecomment-560113555
 
 
   
   [![Coverage 
Status](https://coveralls.io/builds/27470712/badge)](https://coveralls.io/builds/27470712)
   
   Coverage decreased (-0.01%) to 74.888% when pulling 
**5ba12a3f34e7cd675b871422f825e40c461daac9 on christoffer-rydberg:master** into 
**0a765990af2b1dbcad8c5dd438153be568a190cd on apache:master**.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (LANG-1502) I'd like a StringUtils.startsWithAnyIgnoreCase

2019-12-06 Thread david cogen (Jira)
david cogen created LANG-1502:
-

 Summary: I'd like a StringUtils.startsWithAnyIgnoreCase
 Key: LANG-1502
 URL: https://issues.apache.org/jira/browse/LANG-1502
 Project: Commons Lang
  Issue Type: New Feature
  Components: lang.*
Reporter: david cogen


There is a startsWithAny() and a startsWithIgnoreCase() but no 
startsWithAnyIgnoreCase(). This would be useful - in fact I implemented a 
quick-and-dirty one myself because I needed it.
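
A minimal sketch of what such a method could look like, modeled on the null-safe conventions of the existing startsWithAny() and startsWithIgnoreCase() (the class name is a placeholder; this is not the commons-lang implementation):

```java
public class StartsWithAnyIgnoreCaseSketch {

    /**
     * Returns true if the string starts with any of the given prefixes,
     * ignoring case. Null-safe: a null string or null/empty prefix array
     * yields false, mirroring StringUtils.startsWithAny.
     */
    public static boolean startsWithAnyIgnoreCase(String str, String... prefixes) {
        if (str == null || prefixes == null) {
            return false;
        }
        for (String prefix : prefixes) {
            // regionMatches with ignoreCase=true avoids allocating
            // lower-cased copies of either string.
            if (prefix != null && prefix.length() <= str.length()
                    && str.regionMatches(true, 0, prefix, 0, prefix.length())) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(startsWithAnyIgnoreCase("FooBar", "baz", "FOO")); // true
        System.out.println(startsWithAnyIgnoreCase("FooBar", "bar"));        // false
    }
}
```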



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (CODEC-265) java.lang.NegativeArraySizeException

2019-12-06 Thread Gary D. Gregory (Jira)


[ 
https://issues.apache.org/jira/browse/CODEC-265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16989822#comment-16989822
 ] 

Gary D. Gregory commented on CODEC-265:
---

You can pick up SNAPSHOT builds here: 
[https://repository.apache.org/content/repositories/snapshots/commons-codec/commons-codec/1.14-SNAPSHOT/]

 

>   java.lang.NegativeArraySizeException
> --
>
> Key: CODEC-265
> URL: https://issues.apache.org/jira/browse/CODEC-265
> Project: Commons Codec
>  Issue Type: Bug
>Affects Versions: 1.13
> Environment: Linux = Ubuntu 18.04.3 LTS
> JDK = 1.8
>  
>Reporter: Ingimar
>Assignee: Alex Herbert
>Priority: Critical
> Fix For: 1.14
>
> Attachments: NewClientEncodePost.java, Util.java, pom.xml
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Hi,
> trying to encode a file that is 1 GB in size.
> ( linux :
> {code:java}
> fallocate -l 1GB 1gb.zip{code}
> )
> I want to post that file to a RESTful service, packaged as JSON.
> *here is the code* 
>  
>  
> {code:java}
> String filePath = "/tmp/1gb.zip";
> System.out.println("\t Post to  : ".concat(URL));
>  System.out.println("\t file : ".concat(filePath));
> Path path = Paths.get(filePath);
>  byte[] bArray = Files.readAllBytes(path);
> // testing commons codec 1.16 (2019-11-05)
>  byte[] encodeBase64 = Base64.encodeBase64(bArray);
> final String contentToBeSaved = new String(encodeBase64);
> HttpClient client = HttpClientBuilder.create().build();
>  HttpResponse response = null;
> JSONObject metadata = new JSONObject();
>  metadata.put("owner", "Ingo");
>  metadata.put("access", "public");
>  metadata.put("licenseType", "CC BY");
>  metadata.put("fileName", "fileName");
>  metadata.put("fileDataBase64", contentToBeSaved);
> String metadataFormatted = 
> StringEscapeUtils.unescapeJavaScript(metadata.toString());
> StringEntity entity = new StringEntity(metadataFormatted, 
> ContentType.APPLICATION_JSON);
> HttpPost post = new HttpPost(URL);
>  post.setEntity(entity);
>  response = client.execute(post);
>  HttpEntity responseEntity = response.getEntity();
> String responseFromMediaserver = EntityUtils.toString(responseEntity, 
> "UTF-8");
>  System.out.println("\n");
>  System.out.println("Response is : " + responseFromMediaserver);
> JSONObject json = new JSONObject(responseFromMediaserver);
>  String uuid = json.getString("uuid");
>  System.out.println("UUID is " + uuid);
> {code}
>  
>  
>  # mvn clean package
>  #   java -Xms512m -Xmx20480m -jar target/mediaClient.jar 
> The crash is in
>  
> {code:java}
> byte[] encodeBase64 = Base64.encodeBase64(bArray);{code}
>  
> the stacktrace is :
> {code:java}
>  
> Starting NewClientEncodePost
>  Post to : http://127.0.0.1:8080/MediaServerResteasy/media
>  file : /tmp/1gb.zip
> Exception in thread "main" java.lang.NegativeArraySizeException
>  at 
> org.apache.commons.codec.binary.BaseNCodec.resizeBuffer(BaseNCodec.java:253)
>  at 
> org.apache.commons.codec.binary.BaseNCodec.ensureBufferSize(BaseNCodec.java:269)
>  at org.apache.commons.codec.binary.Base64.encode(Base64.java:380)
>  at org.apache.commons.codec.binary.BaseNCodec.encode(BaseNCodec.java:451)
>  at org.apache.commons.codec.binary.BaseNCodec.encode(BaseNCodec.java:430)
>  at org.apache.commons.codec.binary.Base64.encodeBase64(Base64.java:679)
>  at org.apache.commons.codec.binary.Base64.encodeBase64(Base64.java:642)
>  at org.apache.commons.codec.binary.Base64.encodeBase64(Base64.java:623)
>  at org.apache.commons.codec.binary.Base64.encodeBase64(Base64.java:556)
>  at 
> se.nrm.bio.mediaserver.testing.base64.NewClientEncodePost.posting(NewClientEncodePost.java:55)
>  at 
> se.nrm.bio.mediaserver.testing.base64.NewClientEncodePost.main(NewClientEncodePost.java:38)
>  
> {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (VFS-627) SFTP randomly hangs when copying a file on remote server

2019-12-06 Thread James Lentini (Jira)


[ 
https://issues.apache.org/jira/browse/VFS-627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16989784#comment-16989784
 ] 

James Lentini commented on VFS-627:
---

Hi [~b.eckenfels], leaving this open makes sense to me. I'll post here when I 
hear back from the JSch project.

> SFTP randomly hangs when copying a file on remote server
> 
>
> Key: VFS-627
> URL: https://issues.apache.org/jira/browse/VFS-627
> Project: Commons VFS
>  Issue Type: Bug
>Affects Versions: 2.1
> Environment: Java 1.8.0_92
> VFS 2.1
> JSch 0.1.53
>Reporter: Henri Hagberg
>Priority: Major
>
> I have a process where a file is first copied over SFTP to local server and 
> then on the remote server the file is copied to another location on that 
> server for archiving. Both are done using {{FileObject#copyFrom}}. Now I've 
> encountered twice the situation where during archiving (on remote server) the 
> copy action hangs indefinitely (the process was left running for over 24 
> hours). In both cases the issue happened when around 2000 files had been 
> transferred (typical amount is under 100).
> The problem is difficult to reproduce since it doesn't always happen even 
> with large number of files. Based on the stacktrace and random occurrences it 
> might be a concurrency issue. The code using VFS however is single threaded.
> {code}
> Attaching to process ID 128021, please wait...
> Debugger attached successfully.
> Server compiler detected.
> JVM version is 25.92-b14
> Deadlock Detection:
> No deadlocks found.
> Thread 19073: (state = BLOCKED)
> Thread 128165: (state = BLOCKED)
>  - java.lang.Object.wait(long) @bci=0 (Compiled frame; information may be 
> imprecise)
>  - java.lang.ref.ReferenceQueue.remove(long) @bci=59, line=143 (Compiled 
> frame)
>  - org.apache.commons.vfs2.cache.SoftRefFilesCache$SoftRefReleaseThread.run() 
> @bci=26, line=84 (Compiled frame)
> Thread 128164: (state = BLOCKED)
>  - java.lang.Object.wait(long) @bci=0 (Compiled frame; information may be 
> imprecise)
>  - java.io.PipedInputStream.awaitSpace() @bci=23, line=273 (Compiled frame)
>  - java.io.PipedInputStream.receive(byte[], int, int) @bci=31, line=231 
> (Compiled frame)
>  - java.io.PipedOutputStream.write(byte[], int, int) @bci=77, line=149 
> (Compiled frame)
>  - com.jcraft.jsch.IO.put(byte[], int, int) @bci=7, line=64 (Compiled frame)
>  - com.jcraft.jsch.Channel.write(byte[], int, int) @bci=7, line=438 (Compiled 
> frame)
>  - com.jcraft.jsch.Session.run() @bci=1260, line=1624 (Compiled frame)
>  - java.lang.Thread.run() @bci=11, line=745 (Interpreted frame)
> Thread 128139: (state = BLOCKED)
> Thread 128138: (state = BLOCKED)
>  - java.lang.Object.wait(long) @bci=0 (Compiled frame; information may be 
> imprecise)
>  - java.lang.ref.ReferenceQueue.remove(long) @bci=59, line=143 (Compiled 
> frame)
>  - java.lang.ref.ReferenceQueue.remove() @bci=2, line=164 (Compiled frame)
>  - java.lang.ref.Finalizer$FinalizerThread.run() @bci=36, line=209 
> (Interpreted frame)
> Thread 128137: (state = BLOCKED)
>  - java.lang.Object.wait(long) @bci=0 (Compiled frame; information may be 
> imprecise)
>  - java.lang.Object.wait() @bci=2, line=502 (Compiled frame)
>  - java.lang.ref.Reference.tryHandlePending(boolean) @bci=54, line=191 
> (Compiled frame)
>  - java.lang.ref.Reference$ReferenceHandler.run() @bci=1, line=153 
> (Interpreted frame)
> Thread 128022: (state = BLOCKED)
>  - java.lang.Object.wait(long) @bci=0 (Compiled frame; information may be 
> imprecise)
>  - com.jcraft.jsch.Session.write(com.jcraft.jsch.Packet, 
> com.jcraft.jsch.Channel, int) @bci=89, line=1261 (Compiled frame)
>  - com.jcraft.jsch.ChannelSftp.sendWRITE(byte[], long, byte[], int, int) 
> @bci=191, line=2619 (Compiled frame)
>  - com.jcraft.jsch.ChannelSftp.access$100(com.jcraft.jsch.ChannelSftp, 
> byte[], long, byte[], int, int) @bci=9, line=36 (Compiled frame)
>  - com.jcraft.jsch.ChannelSftp$1.write(byte[], int, int) @bci=77, line=791 
> (Compiled frame)
>  - java.io.BufferedOutputStream.write(byte[], int, int) @bci=20, line=122 
> (Compiled frame)
>  - org.apache.commons.vfs2.util.MonitorOutputStream.write(byte[], int, int) 
> @bci=8, line=123 (Compiled frame)
>  - java.io.BufferedOutputStream.flushBuffer() @bci=20, line=82 (Compiled 
> frame)
>  - java.io.BufferedOutputStream.write(byte[], int, int) @bci=39, line=126 
> (Compiled frame)
>  - org.apache.commons.vfs2.util.MonitorOutputStream.write(byte[], int, int) 
> @bci=8, line=123 (Compiled frame)
>  - 
> org.apache.commons.vfs2.provider.DefaultFileContent.write(java.io.OutputStream,
>  int) @bci=35, line=892 (Compiled frame)
>  - 
> org.apache.commons.vfs2.provider.DefaultFileContent.write(java.io.OutputStream)
>  @bci=5, line=865 (Compiled frame)
>  - 
> 

[jira] [Commented] (GEOMETRY-69) BSPTreeVisitor stop visit

2019-12-06 Thread Gilles Sadowski (Jira)


[ 
https://issues.apache.org/jira/browse/GEOMETRY-69?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16989736#comment-16989736
 ] 

Gilles Sadowski commented on GEOMETRY-69:
-

Given that they are nested, could we abbreviate the names of the "enum" classes:
* VisitOrder -> Order
* VisitResult -> Result

?

I find it clearer not to write
{code}
import 
org.apache.commons.geometry.core.partitioning.bsp.BSPTreeVisitor.VisitOrder;
{code}
but rather
{code}
import org.apache.commons.geometry.core.partitioning.bsp.BSPTreeVisitor;
{code}
(and use the "qualified" name in code).

Could we abbreviate the method names, e.g.
* acceptVisitor -> accept
* ...

?
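
The import style being argued for can be illustrated with a hypothetical stand-in (the nested-enum names below follow the proposed abbreviations, not the actual commons-geometry API):

```java
public class QualifiedNameDemo {

    // Hypothetical stand-in for BSPTreeVisitor with the abbreviated
    // nested enums proposed above; not the real commons-geometry types.
    interface TreeVisitor {
        enum Order { NODE_MINUS_PLUS, PLUS_MINUS_NODE }
        enum Result { CONTINUE, TERMINATE }
    }

    // With only the outer type imported, the short enum names stay
    // readable because the enclosing type qualifies them at the use site.
    public static String describe(TreeVisitor.Order order, TreeVisitor.Result result) {
        return order + " -> " + result;
    }

    public static void main(String[] args) {
        System.out.println(describe(TreeVisitor.Order.NODE_MINUS_PLUS,
                                    TreeVisitor.Result.CONTINUE));
    }
}
```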


> BSPTreeVisitor stop visit
> -
>
> Key: GEOMETRY-69
> URL: https://issues.apache.org/jira/browse/GEOMETRY-69
> Project: Apache Commons Geometry
>  Issue Type: Improvement
>Reporter: Matt Juntunen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Update the BSPTreeVisitor interface to allow implementations to stop the node 
> visit process when desired. The current implementation always visits all 
> nodes in the tree. The JDK {{FileVisitor}} interface might be a good 
> reference.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (VFS-627) SFTP randomly hangs when copying a file on remote server

2019-12-06 Thread Bernd Eckenfels (Jira)


[ 
https://issues.apache.org/jira/browse/VFS-627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16989694#comment-16989694
 ] 

Bernd Eckenfels commented on VFS-627:
-

Thanks, James, for debugging this. I guess we should keep this bug open until 
JSch has fixed it and we have upgraded to that version.

> SFTP randomly hangs when copying a file on remote server
> 
>
> Key: VFS-627
> URL: https://issues.apache.org/jira/browse/VFS-627
> Project: Commons VFS
>  Issue Type: Bug
>Affects Versions: 2.1
> Environment: Java 1.8.0_92
> VFS 2.1
> JSch 0.1.53
>Reporter: Henri Hagberg
>Priority: Major
>
> I have a process where a file is first copied over SFTP to local server and 
> then on the remote server the file is copied to another location on that 
> server for archiving. Both are done using {{FileObject#copyFrom}}. Now I've 
> encountered twice the situation where during archiving (on remote server) the 
> copy action hangs indefinitely (the process was left running for over 24 
> hours). In both cases the issue happened when around 2000 files had been 
> transferred (typical amount is under 100).
> The problem is difficult to reproduce since it doesn't always happen even 
> with large number of files. Based on the stacktrace and random occurrences it 
> might be a concurrency issue. The code using VFS however is single threaded.
> {code}
> Attaching to process ID 128021, please wait...
> Debugger attached successfully.
> Server compiler detected.
> JVM version is 25.92-b14
> Deadlock Detection:
> No deadlocks found.
> Thread 19073: (state = BLOCKED)
> Thread 128165: (state = BLOCKED)
>  - java.lang.Object.wait(long) @bci=0 (Compiled frame; information may be 
> imprecise)
>  - java.lang.ref.ReferenceQueue.remove(long) @bci=59, line=143 (Compiled 
> frame)
>  - org.apache.commons.vfs2.cache.SoftRefFilesCache$SoftRefReleaseThread.run() 
> @bci=26, line=84 (Compiled frame)
> Thread 128164: (state = BLOCKED)
>  - java.lang.Object.wait(long) @bci=0 (Compiled frame; information may be 
> imprecise)
>  - java.io.PipedInputStream.awaitSpace() @bci=23, line=273 (Compiled frame)
>  - java.io.PipedInputStream.receive(byte[], int, int) @bci=31, line=231 
> (Compiled frame)
>  - java.io.PipedOutputStream.write(byte[], int, int) @bci=77, line=149 
> (Compiled frame)
>  - com.jcraft.jsch.IO.put(byte[], int, int) @bci=7, line=64 (Compiled frame)
>  - com.jcraft.jsch.Channel.write(byte[], int, int) @bci=7, line=438 (Compiled 
> frame)
>  - com.jcraft.jsch.Session.run() @bci=1260, line=1624 (Compiled frame)
>  - java.lang.Thread.run() @bci=11, line=745 (Interpreted frame)
> Thread 128139: (state = BLOCKED)
> Thread 128138: (state = BLOCKED)
>  - java.lang.Object.wait(long) @bci=0 (Compiled frame; information may be 
> imprecise)
>  - java.lang.ref.ReferenceQueue.remove(long) @bci=59, line=143 (Compiled 
> frame)
>  - java.lang.ref.ReferenceQueue.remove() @bci=2, line=164 (Compiled frame)
>  - java.lang.ref.Finalizer$FinalizerThread.run() @bci=36, line=209 
> (Interpreted frame)
> Thread 128137: (state = BLOCKED)
>  - java.lang.Object.wait(long) @bci=0 (Compiled frame; information may be 
> imprecise)
>  - java.lang.Object.wait() @bci=2, line=502 (Compiled frame)
>  - java.lang.ref.Reference.tryHandlePending(boolean) @bci=54, line=191 
> (Compiled frame)
>  - java.lang.ref.Reference$ReferenceHandler.run() @bci=1, line=153 
> (Interpreted frame)
> Thread 128022: (state = BLOCKED)
>  - java.lang.Object.wait(long) @bci=0 (Compiled frame; information may be 
> imprecise)
>  - com.jcraft.jsch.Session.write(com.jcraft.jsch.Packet, 
> com.jcraft.jsch.Channel, int) @bci=89, line=1261 (Compiled frame)
>  - com.jcraft.jsch.ChannelSftp.sendWRITE(byte[], long, byte[], int, int) 
> @bci=191, line=2619 (Compiled frame)
>  - com.jcraft.jsch.ChannelSftp.access$100(com.jcraft.jsch.ChannelSftp, 
> byte[], long, byte[], int, int) @bci=9, line=36 (Compiled frame)
>  - com.jcraft.jsch.ChannelSftp$1.write(byte[], int, int) @bci=77, line=791 
> (Compiled frame)
>  - java.io.BufferedOutputStream.write(byte[], int, int) @bci=20, line=122 
> (Compiled frame)
>  - org.apache.commons.vfs2.util.MonitorOutputStream.write(byte[], int, int) 
> @bci=8, line=123 (Compiled frame)
>  - java.io.BufferedOutputStream.flushBuffer() @bci=20, line=82 (Compiled 
> frame)
>  - java.io.BufferedOutputStream.write(byte[], int, int) @bci=39, line=126 
> (Compiled frame)
>  - org.apache.commons.vfs2.util.MonitorOutputStream.write(byte[], int, int) 
> @bci=8, line=123 (Compiled frame)
>  - 
> org.apache.commons.vfs2.provider.DefaultFileContent.write(java.io.OutputStream,
>  int) @bci=35, line=892 (Compiled frame)
>  - 
> org.apache.commons.vfs2.provider.DefaultFileContent.write(java.io.OutputStream)
>  @bci=5, line=865 (Compiled frame)
>  - 
> 

[jira] [Commented] (CODEC-265) java.lang.NegativeArraySizeException

2019-12-06 Thread Ingimar (Jira)


[ 
https://issues.apache.org/jira/browse/CODEC-265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16989691#comment-16989691
 ] 

Ingimar commented on CODEC-265:
---

Hello,

 

Am I able to test that fix? Has there been a new release, and if so, where can 
I find it?

regarding -> 'Memory allocation has been added to allow encoding up to the 
array allocation limit.'

Would that make it possible to Base64-encode files up to some percentage of the available RAM?
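
As an aside, holding both the 1 GB input and the roughly 1.37 GB encoded output in single byte arrays is what pushes the allocation toward the int limit in the first place (the codec's buffer doubling then likely requests a negative size, per the stack trace). One way to sidestep that entirely, whatever the commons-codec fix allows, is to stream the encoding. A sketch using the JDK's java.util.Base64 (commons-codec's Base64OutputStream offers the same shape); the class and method names here are illustrative:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class StreamingBase64Sketch {

    /** Base64-encodes from in to out in 8 KiB chunks, so memory use
     *  stays constant regardless of file size. */
    public static void encode(InputStream in, OutputStream out) throws IOException {
        // wrap() returns an encoding stream; closing it writes the
        // final padding and closes the underlying stream.
        try (OutputStream b64 = Base64.getEncoder().wrap(out)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                b64.write(buf, 0, n);
            }
        }
    }

    /** Convenience wrapper for small inputs, used here for testing. */
    public static String encodeToString(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        encode(new ByteArrayInputStream(data), bos);
        return bos.toString(StandardCharsets.US_ASCII.name());
    }

    public static void main(String[] args) throws IOException {
        System.out.println(encodeToString("hello".getBytes(StandardCharsets.US_ASCII)));
    }
}
```

Streaming also pairs naturally with an HTTP client that accepts an InputStream entity, so the request body never has to exist as a single in-memory String either.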

 

best, i

 

>   java.lang.NegativeArraySizeException
> --
>
> Key: CODEC-265
> URL: https://issues.apache.org/jira/browse/CODEC-265
> Project: Commons Codec
>  Issue Type: Bug
>Affects Versions: 1.13
> Environment: Linux = Ubuntu 18.04.3 LTS
> JDK = 1.8
>  
>Reporter: Ingimar
>Assignee: Alex Herbert
>Priority: Critical
> Fix For: 1.14
>
> Attachments: NewClientEncodePost.java, Util.java, pom.xml
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Hi,
> trying to encode a file that is 1 GB in size.
> ( linux :
> {code:java}
> fallocate -l 1GB 1gb.zip{code}
> )
> I want to post that file to a RESTful service, packaged as JSON.
> *here is the code* 
>  
>  
> {code:java}
> String filePath = "/tmp/1gb.zip";
> System.out.println("\t Post to  : ".concat(URL));
>  System.out.println("\t file : ".concat(filePath));
> Path path = Paths.get(filePath);
>  byte[] bArray = Files.readAllBytes(path);
> // testing commons codec 1.16 (2019-11-05)
>  byte[] encodeBase64 = Base64.encodeBase64(bArray);
> final String contentToBeSaved = new String(encodeBase64);
> HttpClient client = HttpClientBuilder.create().build();
>  HttpResponse response = null;
> JSONObject metadata = new JSONObject();
>  metadata.put("owner", "Ingo");
>  metadata.put("access", "public");
>  metadata.put("licenseType", "CC BY");
>  metadata.put("fileName", "fileName");
>  metadata.put("fileDataBase64", contentToBeSaved);
> String metadataFormatted = 
> StringEscapeUtils.unescapeJavaScript(metadata.toString());
> StringEntity entity = new StringEntity(metadataFormatted, 
> ContentType.APPLICATION_JSON);
> HttpPost post = new HttpPost(URL);
>  post.setEntity(entity);
>  response = client.execute(post);
>  HttpEntity responseEntity = response.getEntity();
> String responseFromMediaserver = EntityUtils.toString(responseEntity, 
> "UTF-8");
>  System.out.println("\n");
>  System.out.println("Response is : " + responseFromMediaserver);
> JSONObject json = new JSONObject(responseFromMediaserver);
>  String uuid = json.getString("uuid");
>  System.out.println("UUID is " + uuid);
> {code}
>  
>  
>  # mvn clean package
>  #   java -Xms512m -Xmx20480m -jar target/mediaClient.jar 
> The crash is in
>  
> {code:java}
> byte[] encodeBase64 = Base64.encodeBase64(bArray);{code}
>  
> the stacktrace is :
> {code:java}
>  
> Starting NewClientEncodePost
>  Post to : http://127.0.0.1:8080/MediaServerResteasy/media
>  file : /tmp/1gb.zip
> Exception in thread "main" java.lang.NegativeArraySizeException
>  at 
> org.apache.commons.codec.binary.BaseNCodec.resizeBuffer(BaseNCodec.java:253)
>  at 
> org.apache.commons.codec.binary.BaseNCodec.ensureBufferSize(BaseNCodec.java:269)
>  at org.apache.commons.codec.binary.Base64.encode(Base64.java:380)
>  at org.apache.commons.codec.binary.BaseNCodec.encode(BaseNCodec.java:451)
>  at org.apache.commons.codec.binary.BaseNCodec.encode(BaseNCodec.java:430)
>  at org.apache.commons.codec.binary.Base64.encodeBase64(Base64.java:679)
>  at org.apache.commons.codec.binary.Base64.encodeBase64(Base64.java:642)
>  at org.apache.commons.codec.binary.Base64.encodeBase64(Base64.java:623)
>  at org.apache.commons.codec.binary.Base64.encodeBase64(Base64.java:556)
>  at 
> se.nrm.bio.mediaserver.testing.base64.NewClientEncodePost.posting(NewClientEncodePost.java:55)
>  at 
> se.nrm.bio.mediaserver.testing.base64.NewClientEncodePost.main(NewClientEncodePost.java:38)
>  
> {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (VFS-627) SFTP randomly hangs when copying a file on remote server

2019-12-06 Thread James Lentini (Jira)


[ 
https://issues.apache.org/jira/browse/VFS-627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16989683#comment-16989683
 ] 

James Lentini commented on VFS-627:
---

I root-caused the issue and found a deadlock in JSch. My analysis of the 
problem and a simple test program to reproduce the issue are at: 
[https://github.com/jlentini/SftpCopyDeadlock]
 
I also posted a patch to fix the issue to the jsch-users mailing list in this 
[message|https://sourceforge.net/p/jsch/mailman/message/36872566/].

Note that there is a workaround: use two 
{{org.apache.commons.vfs2.FileSystemManager}} instances, one for the source 
file and one for the destination file.

> SFTP randomly hangs when copying a file on remote server
> 
>
> Key: VFS-627
> URL: https://issues.apache.org/jira/browse/VFS-627
> Project: Commons VFS
>  Issue Type: Bug
>Affects Versions: 2.1
> Environment: Java 1.8.0_92
> VFS 2.1
> JSch 0.1.53
>Reporter: Henri Hagberg
>Priority: Major
>
> I have a process where a file is first copied over SFTP to local server and 
> then on the remote server the file is copied to another location on that 
> server for archiving. Both are done using {{FileObject#copyFrom}}. Now I've 
> encountered twice the situation where during archiving (on remote server) the 
> copy action hangs indefinitely (the process was left running for over 24 
> hours). In both cases the issue happened when around 2000 files had been 
> transferred (typical amount is under 100).
> The problem is difficult to reproduce since it doesn't always happen even 
> with large number of files. Based on the stacktrace and random occurrences it 
> might be a concurrency issue. The code using VFS however is single threaded.
> {code}
> Attaching to process ID 128021, please wait...
> Debugger attached successfully.
> Server compiler detected.
> JVM version is 25.92-b14
> Deadlock Detection:
> No deadlocks found.
> Thread 19073: (state = BLOCKED)
> Thread 128165: (state = BLOCKED)
>  - java.lang.Object.wait(long) @bci=0 (Compiled frame; information may be 
> imprecise)
>  - java.lang.ref.ReferenceQueue.remove(long) @bci=59, line=143 (Compiled 
> frame)
>  - org.apache.commons.vfs2.cache.SoftRefFilesCache$SoftRefReleaseThread.run() 
> @bci=26, line=84 (Compiled frame)
> Thread 128164: (state = BLOCKED)
>  - java.lang.Object.wait(long) @bci=0 (Compiled frame; information may be 
> imprecise)
>  - java.io.PipedInputStream.awaitSpace() @bci=23, line=273 (Compiled frame)
>  - java.io.PipedInputStream.receive(byte[], int, int) @bci=31, line=231 
> (Compiled frame)
>  - java.io.PipedOutputStream.write(byte[], int, int) @bci=77, line=149 
> (Compiled frame)
>  - com.jcraft.jsch.IO.put(byte[], int, int) @bci=7, line=64 (Compiled frame)
>  - com.jcraft.jsch.Channel.write(byte[], int, int) @bci=7, line=438 (Compiled 
> frame)
>  - com.jcraft.jsch.Session.run() @bci=1260, line=1624 (Compiled frame)
>  - java.lang.Thread.run() @bci=11, line=745 (Interpreted frame)
> Thread 128139: (state = BLOCKED)
> Thread 128138: (state = BLOCKED)
>  - java.lang.Object.wait(long) @bci=0 (Compiled frame; information may be 
> imprecise)
>  - java.lang.ref.ReferenceQueue.remove(long) @bci=59, line=143 (Compiled 
> frame)
>  - java.lang.ref.ReferenceQueue.remove() @bci=2, line=164 (Compiled frame)
>  - java.lang.ref.Finalizer$FinalizerThread.run() @bci=36, line=209 
> (Interpreted frame)
> Thread 128137: (state = BLOCKED)
>  - java.lang.Object.wait(long) @bci=0 (Compiled frame; information may be 
> imprecise)
>  - java.lang.Object.wait() @bci=2, line=502 (Compiled frame)
>  - java.lang.ref.Reference.tryHandlePending(boolean) @bci=54, line=191 
> (Compiled frame)
>  - java.lang.ref.Reference$ReferenceHandler.run() @bci=1, line=153 
> (Interpreted frame)
> Thread 128022: (state = BLOCKED)
>  - java.lang.Object.wait(long) @bci=0 (Compiled frame; information may be 
> imprecise)
>  - com.jcraft.jsch.Session.write(com.jcraft.jsch.Packet, 
> com.jcraft.jsch.Channel, int) @bci=89, line=1261 (Compiled frame)
>  - com.jcraft.jsch.ChannelSftp.sendWRITE(byte[], long, byte[], int, int) 
> @bci=191, line=2619 (Compiled frame)
>  - com.jcraft.jsch.ChannelSftp.access$100(com.jcraft.jsch.ChannelSftp, 
> byte[], long, byte[], int, int) @bci=9, line=36 (Compiled frame)
>  - com.jcraft.jsch.ChannelSftp$1.write(byte[], int, int) @bci=77, line=791 
> (Compiled frame)
>  - java.io.BufferedOutputStream.write(byte[], int, int) @bci=20, line=122 
> (Compiled frame)
>  - org.apache.commons.vfs2.util.MonitorOutputStream.write(byte[], int, int) 
> @bci=8, line=123 (Compiled frame)
>  - java.io.BufferedOutputStream.flushBuffer() @bci=20, line=82 (Compiled 
> frame)
>  - java.io.BufferedOutputStream.write(byte[], int, int) @bci=39, line=126 
> (Compiled frame)
>  - 

[jira] [Work logged] (COMPRESS-124) Unable to extract a sparse entries from tar archives

2019-12-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-124?focusedWorklogId=355131=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-355131
 ]

ASF GitHub Bot logged work on COMPRESS-124:
---

Author: ASF GitHub Bot
Created on: 06/Dec/19 11:42
Start Date: 06/Dec/19 11:42
Worklog Time Spent: 10m 
  Work Description: tcurdt commented on issue #87: COMPRESS-124 : Add 
support for extracting sparse entries from tar archives
URL: https://github.com/apache/commons-compress/pull/87#issuecomment-562540937
 
 
   I didn't do a thorough code review but from the diffs it looks OKish. Still 
not sure about some of the new classes. Either way - thanks for all the work!
   
   It would probably be good to check this out and look at the final code as 
this is a big change.
   
   I am wondering if our test cases are enough of a safety net.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 355131)
Time Spent: 4h  (was: 3h 50m)

> Unable to extract a sparse entries from tar archives
> 
>
> Key: COMPRESS-124
> URL: https://issues.apache.org/jira/browse/COMPRESS-124
> Project: Commons Compress
>  Issue Type: New Feature
>  Components: Archivers
>Affects Versions: 1.1, 1.2
> Environment: Platform independent. However, I'm currently using 
> Windows 7 Enterprise.
>Reporter: Patrick Dreyer
>Priority: Major
>  Labels: tar
> Fix For: 1.20
>
> Attachments: gnuSparseFile.patch
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Good news first: I already have the patch ready for that.
> I got several TAR files which I could not extract with any of the existing 
> Java implementations, but I could extract all those TAR files successfully 
> with GNU tar.
> It turned out that all the failing TAR files contained so-called sparse 
> files. Investigating the source code of all existing Java TAR implementations 
> showed me that none of them even recognizes the existence of GNU sparse 
> entries.
> Actually, I don't need to process any of the contained sparse files, and I'm 
> happy if I'm at least able to correctly untar all the non-sparse files. 
> Thus, it would be sufficient to recognize sparse files without the need to 
> correctly un-sparse them while extracting. As long as all non-sparse files 
> get extracted correctly, I'm fine.
> The TAR files in question have all been VMware Diagnostic File bundles.
> See 
> http://kb.vmware.com/selfservice/microsites/search.do?language=en_US=displayKC=653
>  to know how to get them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [commons-compress] tcurdt commented on issue #87: COMPRESS-124 : Add support for extracting sparse entries from tar archives

2019-12-06 Thread GitBox
tcurdt commented on issue #87: COMPRESS-124 : Add support for extracting sparse 
entries from tar archives
URL: https://github.com/apache/commons-compress/pull/87#issuecomment-562540937
 
 
   I didn't do a thorough code review but from the diffs it looks OKish. Still 
not sure about some of the new classes. Either way - thanks for all the work!
   
   It would probably be good to check this out and look at the final code as 
this is a big change.
   
   I am wondering if our test cases are enough of a safety net.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Work logged] (COMPRESS-124) Unable to extract a sparse entries from tar archives

2019-12-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-124?focusedWorklogId=355121=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-355121
 ]

ASF GitHub Bot logged work on COMPRESS-124:
---

Author: ASF GitHub Bot
Created on: 06/Dec/19 11:31
Start Date: 06/Dec/19 11:31
Worklog Time Spent: 10m 
  Work Description: tcurdt commented on pull request #87: COMPRESS-124 : 
Add support for extracting sparse entries from tar archives
URL: https://github.com/apache/commons-compress/pull/87#discussion_r354789566
 
 

 ##
 File path: 
src/main/java/org/apache/commons/compress/archivers/tar/TarArchiveInputStream.java
 ##
 @@ -26,14 +26,14 @@
 import java.io.ByteArrayOutputStream;
 import java.io.IOException;
 import java.io.InputStream;
-import java.util.HashMap;
-import java.util.Map;
+import java.util.*;
 
 Review comment:
   unless rules have changed - please avoid * imports
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 355121)
Time Spent: 3h 50m  (was: 3h 40m)

> Unable to extract a sparse entries from tar archives
> 
>
> Key: COMPRESS-124
> URL: https://issues.apache.org/jira/browse/COMPRESS-124
> Project: Commons Compress
>  Issue Type: New Feature
>  Components: Archivers
>Affects Versions: 1.1, 1.2
> Environment: Platform independent. However, I'm currently using 
> Windows 7 Enterprise.
>Reporter: Patrick Dreyer
>Priority: Major
>  Labels: tar
> Fix For: 1.20
>
> Attachments: gnuSparseFile.patch
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Good news first: I already have the patch ready for that.
> I got several TAR files which I could not extract with any of the existing 
> Java implementations, but I could extract all those TAR files successfully 
> with GNU tar.
> It turned out that all the failing TAR files contained so-called sparse 
> files. Investigating the source code of all existing Java TAR implementations 
> showed me that none of them even recognizes the existence of GNU sparse 
> entries.
> Actually, I don't need to process any of the contained sparse files, and I'm 
> happy if I'm at least able to correctly untar all the non-sparse files. 
> Thus, it would be sufficient to recognize sparse files without the need to 
> correctly un-sparse them while extracting. As long as all non-sparse files 
> get extracted correctly, I'm fine.
> The TAR files in question have all been VMware Diagnostic File bundles.
> See 
> http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=653
>  to learn how to get them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (COMPRESS-124) Unable to extract a sparse entries from tar archives

2019-12-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/COMPRESS-124?focusedWorklogId=355120&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-355120
 ]

ASF GitHub Bot logged work on COMPRESS-124:
---

Author: ASF GitHub Bot
Created on: 06/Dec/19 11:31
Start Date: 06/Dec/19 11:31
Worklog Time Spent: 10m 
  Work Description: tcurdt commented on pull request #87: COMPRESS-124 : 
Add support for extracting sparse entries from tar archives
URL: https://github.com/apache/commons-compress/pull/87#discussion_r354789475
 
 

 ##
 File path: 
src/main/java/org/apache/commons/compress/archivers/tar/TarArchiveInputStream.java
 ##
 @@ -516,6 +713,22 @@ private void paxHeaders() throws IOException{
 final String value = new String(rest, 0,
   restLen - 1, 
CharsetNames.UTF_8);
 headers.put(keyword, value);
+
+// for 0.0 PAX Headers
+if(keyword.equals("GNU.sparse.offset")) {
+sparseHeader = new 
TarArchiveStructSparse(Long.parseLong(value), 0);
 
 Review comment:
   better extract the keys into a static String
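
   A minimal sketch of the reviewer's suggestion, using hypothetical constant and class names (the actual identifiers in the pull request may differ). Note that real 0.0-format PAX headers repeat the offset/numbytes keys once per sparse block, so this sketch only shows the constant extraction, not full parsing:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SparseKeywords {
    // Extracting the magic strings into named constants, as suggested
    // in the review, avoids typos and gives one place to change them.
    private static final String GNU_SPARSE_OFFSET = "GNU.sparse.offset";
    private static final String GNU_SPARSE_NUMBYTES = "GNU.sparse.numbytes";

    /** Collects offset -> numbytes pairs from already-parsed PAX headers. */
    static Map<Long, Long> sparseMap(Map<String, String> headers) {
        Map<Long, Long> result = new LinkedHashMap<>();
        Long pendingOffset = null;
        for (Map.Entry<String, String> e : headers.entrySet()) {
            if (GNU_SPARSE_OFFSET.equals(e.getKey())) {
                pendingOffset = Long.parseLong(e.getValue());
            } else if (GNU_SPARSE_NUMBYTES.equals(e.getKey()) && pendingOffset != null) {
                result.put(pendingOffset, Long.parseLong(e.getValue()));
                pendingOffset = null;
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> headers = new LinkedHashMap<>();
        headers.put("GNU.sparse.offset", "512");
        headers.put("GNU.sparse.numbytes", "1024");
        System.out.println(sparseMap(headers)); // {512=1024}
    }
}
```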
 



Issue Time Tracking
---

Worklog Id: (was: 355120)
Time Spent: 3h 40m  (was: 3.5h)

(v8.3.4#803005)






[jira] [Commented] (JEXL-307) Variable redeclaration option

2019-12-06 Thread Dmitri Blinov (Jira)


[ 
https://issues.apache.org/jira/browse/JEXL-307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16989652#comment-16989652
 ] 

Dmitri Blinov commented on JEXL-307:


Now that everything works as expected, I wonder whether it is possible to 
decouple the undeclared-variable check in lexical mode from lexical mode itself. 
Can we make the undeclared-variable check a separate JEXL feature? The problem is 
that now, when the lexical feature is enabled, once a variable named 
'foo' has been declared, there is no way to access a context variable with the 
same name, which requires completely rewriting and checking the whole script - this 
adds additional migration pain.

> Variable redeclaration option
> -
>
> Key: JEXL-307
> URL: https://issues.apache.org/jira/browse/JEXL-307
> Project: Commons JEXL
>  Issue Type: New Feature
>Affects Versions: 3.1
>Reporter: Dmitri Blinov
>Assignee: Henri Biestro
>Priority: Minor
> Fix For: 3.2
>
>
> As of now, JEXL allows a script writer to redeclare a local variable during 
> script evaluation.
> {code:java}
> var a = 1; var a = 2;{code}
> This may lead to potential errors with misspelled names and clashing 
> variables. Checking for an already-defined variable is a common feature of many 
> languages. This feature can be implemented in JEXL as an additional option of 
> the JexlFeatures class, enabled by default, thus preserving compatibility with 
> existing code.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)