This is an automated email from the ASF dual-hosted git repository.

bodewig pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/commons-compress.git


The following commit(s) were added to refs/heads/master by this push:
     new 93a35f0  some additional documentation updates
93a35f0 is described below

commit 93a35f0d49e25f4a841c2f5d8d75c4cbcb2552ea
Author: Stefan Bodewig <bode...@apache.org>
AuthorDate: Sun Jul 4 12:36:51 2021 +0200

    some additional documentation updates
---
 src/site/xdoc/examples.xml    | 32 ++++++++++++++++++++++++++------
 src/site/xdoc/index.xml       | 10 ++++++----
 src/site/xdoc/limitations.xml |  6 +++---
 3 files changed, 35 insertions(+), 13 deletions(-)

diff --git a/src/site/xdoc/examples.xml b/src/site/xdoc/examples.xml
index ce7e813..2ceac92 100644
--- a/src/site/xdoc/examples.xml
+++ b/src/site/xdoc/examples.xml
@@ -98,8 +98,9 @@ CompressorInputStream input = new CompressorStreamFactory()
         stream. As of 1.14 this setting only affects decompressing Z,
         XZ and LZMA compressed streams.</p>
         <p>Since Compress 1.19 <code>SevenZFile</code> also has an
-        optional constructor to pass an upper memory limit. Supported
-        are LZMA compressed streams.</p>
+        optional constructor to pass an upper memory limit, which is
+        supported for LZMA compressed streams. Since Compress 1.21 this
+        setting is also taken into account when reading the metadata of an archive.</p>
         <p>For the Snappy and LZ4 formats the amount of memory used
         during compression is directly proportional to the window
         size.</p>
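[editor's note: the memory-limit constructor described above could be used roughly as follows. A minimal sketch, assuming Compress 1.19+ on the classpath; the archive name and the 64 MiB limit are placeholder example values.]

```java
import java.io.File;
import org.apache.commons.compress.archivers.sevenz.SevenZArchiveEntry;
import org.apache.commons.compress.archivers.sevenz.SevenZFile;
import org.apache.commons.compress.archivers.sevenz.SevenZFileOptions;

public class MemoryLimitExample {
    public static void main(String[] args) throws Exception {
        // Cap the memory the LZMA decoder (and, since 1.21, metadata
        // parsing) may allocate at 64 MiB.
        SevenZFileOptions options = SevenZFileOptions.builder()
            .withMaxMemoryLimitInKb(64 * 1024)
            .build();
        try (SevenZFile archive = new SevenZFile(new File("archive.7z"), options)) {
            SevenZArchiveEntry entry;
            while ((entry = archive.getNextEntry()) != null) {
                System.out.println(entry.getName());
            }
        }
    }
}
```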
@@ -239,7 +240,7 @@ try (InputStream fi = Files.newInputStream(Paths.get("my.tar.gz"));
         archive, you can first use <code>createArchiveEntry</code> for
         each file. In general this will set a few flags (usually the
         last modified time, the size and the information whether this
-        is a file or directory) based on the <code>File</code>
+        is a file or directory) based on the <code>File</code> or <code>Path</code>
         instance. Alternatively you can create the
         <code>ArchiveEntry</code> subclass corresponding to your
         format directly. Often you may want to set additional flags
@@ -430,6 +431,23 @@ LOOP UNTIL entry.getSize() HAS BEEN READ {
         will likely be significantly slower than sequential
         access.</p>
 
+        <h4><a name="Recovering from Certain Broken 7z Archives"></a>Recovering from Certain Broken 7z Archives</h4>
+
+        <p>Starting with Compress 1.19 <code>SevenZFile</code> tries
+        to recover archives that look as if they were part of a
+        multi-volume archive where the first volume has been removed
+        too early.</p>
+
+        <p>Starting with Compress 1.21 this option has to be enabled
+        explicitly in <code>SevenZFileOptions</code>. Recovery
+        works by Compress scanning an archive from the end for
+        something that might look like valid 7z metadata and using it
+        if the block of data can be parsed successfully. While doing
+        so, Compress may encounter blocks of metadata that look like
+        the metadata of very large archives, which in turn may make
+        Compress allocate a lot of memory. Therefore we strongly
+        recommend you also set a memory limit inside
+        <code>SevenZFileOptions</code> if you enable recovery.</p>
       </subsection>
 
       <subsection name="ar">
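[editor's note: enabling the recovery option together with the recommended memory limit might look like this. A sketch, assuming Compress 1.21+; the file name and the 16 MiB limit are placeholder example values.]

```java
import java.io.File;
import org.apache.commons.compress.archivers.sevenz.SevenZArchiveEntry;
import org.apache.commons.compress.archivers.sevenz.SevenZFile;
import org.apache.commons.compress.archivers.sevenz.SevenZFileOptions;

public class RecoverBroken7z {
    public static void main(String[] args) throws Exception {
        SevenZFileOptions options = SevenZFileOptions.builder()
            // opt in to scanning from the end for parseable metadata
            .withTryToRecoverBrokenArchives(true)
            // strongly recommended alongside recovery, see the text above
            .withMaxMemoryLimitInKb(16 * 1024)
            .build();
        try (SevenZFile archive = new SevenZFile(new File("broken.7z"), options)) {
            SevenZArchiveEntry entry;
            while ((entry = archive.getNextEntry()) != null) {
                System.out.println(entry.getName());
            }
        }
    }
}
```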
@@ -931,7 +949,7 @@ in.close();
         Compress offers two different stream classes for reading or
         writing either format.</p>
 
-        <p>Uncompressing a given frame LZ4 file (you would
+        <p>Uncompressing a given framed LZ4 file (you would
           certainly add exception handling and make sure all streams
           get closed properly):</p>
 <source><![CDATA[
@@ -1014,9 +1032,11 @@ in.close();
         <p>The Pack200 package has a <a href="pack200.html">dedicated
           documentation page</a>.</p>
 
-        <p>The implementation of this package is provided by
+        <p>The implementation of this package used to be provided by
          the <code>java.util.jar</code> package of the Java
-          library.</p>
+          library. Starting with Compress 1.21 the implementation uses
+          a copy of the pack200 code of the now retired Apache
+          Harmony&#x2122; project that ships with Compress itself.</p>
 
         <p>Uncompressing a given pack200 compressed file (you would
           certainly add exception handling and make sure all streams
diff --git a/src/site/xdoc/index.xml b/src/site/xdoc/index.xml
index ec333a2..0beaa0d 100644
--- a/src/site/xdoc/index.xml
+++ b/src/site/xdoc/index.xml
@@ -47,6 +47,9 @@
                 and
                the <a href="http://jrpm.sourceforge.net/">jRPM</a>
                 project.</li>
+              <li>The pack200 code was originally part of the now
+                retired <a href="https://harmony.apache.org/">Apache
+                Harmony&#x2122;</a> project.</li>
             </ul>
 
         </section>
@@ -246,22 +249,21 @@
           <p>Currently the bzip2, Pack200, XZ, gzip, lzma, brotli,
             Zstandard and Z formats are
             supported as compressors where gzip support is mostly provided by
-            the <code>java.util.zip</code> package and Pack200 support
-            by the <code>java.util.jar</code> package of the Java
+            the <code>java.util.zip</code> package of the Java
             class library.  XZ and lzma support is provided by the public
             domain <a href="https://tukaani.org/xz/java.html">XZ for
             Java</a> library.  Brotli support is provided by the MIT
             licensed <a href="https://github.com/google/brotli">Google
             Brotli decoder</a>. Zstandard support is provided by the BSD
             licensed <a href="https://github.com/luben/zstd-jni">Zstd-jni</a>.
-            As of Commons Compress 1.20 support for the DEFLATE64, Z and Brotli
+            As of Commons Compress 1.21 support for the DEFLATE64, Z and Brotli
             formats is read-only.</p>
 
           <p>The ar, arj, cpio, dump, tar, 7z and zip formats are supported as
             archivers where the <a href="zip.html">zip</a>
             implementation provides capabilities that go beyond the
             features found in java.util.zip.  As of Commons Compress
-            1.20 support for the dump and arj formats is
+            1.21 support for the dump and arj formats is
             read-only - 7z can read most compressed and encrypted
             archives but only write unencrypted ones.  LZMA(2) support
             in 7z requires <a href="https://tukaani.org/xz/java.html">XZ for
diff --git a/src/site/xdoc/limitations.xml b/src/site/xdoc/limitations.xml
index e1c62d1..c92a195 100644
--- a/src/site/xdoc/limitations.xml
+++ b/src/site/xdoc/limitations.xml
@@ -37,7 +37,7 @@
          exception with a message like "Illegal seek" we recommend you
          wrap your stream in a <code>SkipShieldingInputStream</code>
          from our utils package before passing it to Compress.</li>
-         <li>Commons Compress cannot be built on JDK 14 or newer.</li>
+         <li>Commons Compress prior to 1.21 cannot be built on JDK 14 or newer.</li>
        </ul>
      </section>
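[editor's note: the <code>SkipShieldingInputStream</code> workaround from the first bullet could be sketched as follows. An illustrative sketch only; reading a tar archive from standard input is a hypothetical scenario chosen to show the wrapping.]

```java
import java.io.InputStream;
import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveInputStream;
import org.apache.commons.compress.utils.SkipShieldingInputStream;

public class SkipShieldExample {
    public static void main(String[] args) throws Exception {
        // Wrap a stream whose skip() may fail with "Illegal seek"
        // (e.g. System.in on some platforms) so Compress reads and
        // discards bytes instead of seeking.
        InputStream shielded = new SkipShieldingInputStream(System.in);
        try (TarArchiveInputStream tar = new TarArchiveInputStream(shielded)) {
            TarArchiveEntry entry;
            while ((entry = tar.getNextTarEntry()) != null) {
                System.out.println(entry.getName());
            }
        }
    }
}
```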
 
@@ -168,7 +168,7 @@
          <p>Starting with Commons Compress 1.21 the classlib
          implementation is no longer used at all, instead Commons
          Compress contains the pack200 code of the retired Apache
-         Harmony project.</p></li>
+         Harmony&#x2122; project.</p></li>
        </ul>
      </section>
      <section name="SNAPPY">
@@ -217,7 +217,7 @@
          including the most common STORED and DEFLATEd.  IMPLODE,
          SHRINK, DEFLATE64 and BZIP2 support is read-only.</li>
          <li>no support for encryption</li>
-         <li> or multi-volume archives prior to Compress 1.20</li>
+         <li>no support for multi-volume archives prior to Compress 1.20</li>
          <li>It is currently not possible to write split archives with
          more than 64k segments. When creating split archives with more
          than 100 segments you will need to adjust the file names as
