Re: RFR: 8253952: Refine ZipOutputStream.putNextEntry() to recalculate ZipEntry's compressed size [v6]

2020-10-13 Thread Alan Bateman
On Tue, 13 Oct 2020 19:07:33 GMT, Volker Simonis  wrote:

>> ### Summary
>> 
>> Work around wrong usage of `ZipOutputStream.putNextEntry()` in user code 
>> which can lead to the `ZipException "invalid
>> entry compressed size"`.
>> ### Motivation
>> 
>> In general it is not safe to directly write a ZipEntry obtained from 
>> `ZipInputStream.getNextEntry()`,
>> `ZipFile.entries()`, `ZipFile.getEntry()` or `ZipFile.stream()` with 
>> `ZipOutputStream.putNextEntry()` to a
>> `ZipOutputStream` and then read the entry's data from the `ZipInputStream` 
>> and write it to the `ZipOutputStream` as
>> follows:
>>  ZipEntry entry;
>>  ZipInputStream zis = new ZipInputStream(...);
>>  ZipOutputStream zos = new ZipOutputStream(...);
>>  while((entry = zis.getNextEntry()) != null) {
>>  zos.putNextEntry(entry);
>>  zis.transferTo(zos);
>>  }
>> The problem with this code is that the zip file format does not record the 
>> compression level used for deflation in its
>> entries. In general, it doesn't even mandate a predefined compression ratio 
>> per compression level. Therefore the
>> compressed size recorded in a `ZipEntry` read from a zip file might differ 
>> from the new compressed size produced by the
>> receiving `ZipOutputStream`. Such a difference will result in a 
>> `ZipException` with the following message:
>>  java.util.zip.ZipException: invalid entry compressed size (expected 12 but 
>> got 7 bytes)
>>  
>> The correct way of copying all entries from one zip file into another 
>> requires the creation of a new `ZipEntry` or at
>> least resetting the compressed size field. E.g.:
>>  while((entry = zis.getNextEntry()) != null) {
>>  ZipEntry newEntry = new ZipEntry(entry.getName());
>>  zos.putNextEntry(newEntry);
>>  zis.transferTo(zos);
>>  }
>> or:
>>  while((entry = zis.getNextEntry()) != null) {
>>  entry.setCompressedSize(-1);
>>  zos.putNextEntry(entry);
>>  zis.transferTo(zos);
>>  }
>> Unfortunately, there's a lot of user code out there which gets this wrong 
>> and uses the bad coding pattern described
>> before. Searching for `"java.util.zip.ZipException: invalid entry compressed 
>> size (expected 12 but got 7 bytes)"` gives
>> ~2500 hits (~100 on StackOverflow). It's also not hard to find plenty of 
>> instances of this anti-pattern on GitHub when
>> doing a code search for `ZipEntry` and `putNextEntry()`. E.g. [Gradle 4.x 
>> wrapper task][1] is affected as well as the
>> latest version of the [mockableAndroidJar task][2]. I've recently fixed two 
>> occurrences of this pattern in OpenJDK (see
>> [JDK-8240333][3] and [JDK-8240235][4]) but more of them still exist (e.g.
>> [`test/jdk/java/util/zip/ZipFile/CopyJar.java`][5], which has been there
>> since 1999 :).
>> 
>> ### Description
>> 
>> So while this has
>> clearly been a problem before, it apparently wasn't painful enough to 
>> trigger any action from the side of the JDK.
>> However, quite a few zlib forks with [superior deflate/inflate
>> performance have evolved][6] recently. Using them with
>> OpenJDK is quite straightforward: one just has to configure the alternative 
>> implementations by setting
>> `LD_LIBRARY_PATH` or `LD_PRELOAD` accordingly. We've seen big savings by 
>> using these new zlib implementations for
>> selected services in production and the only reason why we haven't enabled 
>> them by default until now is the problem
>> I've just described. The reason why these new libraries uncover the 
>> described anti-pattern much more often is that
>> their compression ratio is slightly different from that of the default zlib 
>> library. This can easily trigger a
>> `ZipException` even if an application is not using different compression
>> levels but just a zip file created with
>> another zlib version.  I'd therefore like to propose the following 
>> workaround for the wrong
>> `ZipOutputStream.putNextEntry()` usage in user code:
>> - Ignore the compressed size if it was implicitly determined from the zip
>>   file and not explicitly set by calling `ZipEntry.setCompressedSize()`.
>> 
>> - Change the API-documentation of `ZipOutputStream.putNextEntry()` and 
>> `JarOutputStream.putNextEntry()` to explain the
>>   problem and why `putNextEntry()` will ignore the compressed size of a 
>> `ZipEntry` if that was set implicitly when
>>   reading that entry from a `ZipFile` or `ZipInputStream`.
>> 
>> 
>> ### Technical Details
>> 
>> A zip file consists of a stream of File Entries followed by a Central 
>> Directory (see [here for a more detailed
>> specification][7]). Each File Entry is composed of a Local File Header (LFH) 
>> followed by the compressed Data and an
>> optional Data Descriptor. The LFH contains the File Name and among other 
>> attributes the Compressed and Uncompressed
>> size and CRC of the Data. In the case where the latter three attributes are 
>> not available at the time when the LFH is
>> created, this fact will be recorded in a flag of the LFH and will trigger 

Re: RFR: 8242068: Signed JAR support for RSASSA-PSS and EdDSA [v7]

2020-10-13 Thread Valerie Peng
On Wed, 14 Oct 2020 03:51:23 GMT, Weijun Wang  wrote:

>> Major points in CSR at https://bugs.openjdk.java.net/browse/JDK-8245274:
>> 
>> - new sigalg "RSASSA-PSS", "EdDSA", "Ed25519" and "Ed448" can be used in 
>> jarsigner
>> 
>> - The ".RSA" and ".EC" block extension types (PKCS #7 SignedData inside a 
>> signed JAR) are reused for new signature
>>   algorithms
>> 
>> - A new JarSigner property "directsign"
>> 
>> - Updating the jarsigner tool doc
>> 
>> Major code changes:
>> 
>> - Always use the signature algorithm directly as 
>> SignerInfo::signatureAlgorithm. We used to use the encryption algorithm
>>   there like RSA, DSA, and EC. Now it's always SHA1withRSA or RSASSA-PSS.
>> 
>> - Move signature-related utility methods from AlgorithmId.java to 
>> SignatureUtil.java
>> 
>> - Add new SignatureUtil methods fromKey() and fromSignature() to simplify 
>> creating Signature and getting its AlgorithmId
>> 
>> - Use the new methods in PKCS10, X509CertImpl, and X509CRLImpl signing
>> 
>> - Add a new (and intuitive, IMHO) PKCS7::generateNewSignedData capable of 
>> all old and new signature algorithms
>> 
>> - Mark all -altsign related code as deprecated; it can be removed once 
>> ContentSigner is removed
>
> Weijun Wang has updated the pull request incrementally with one additional 
> commit since the last revision:
> 
>   signing time, jarsigner -directsign, and digest algorithm check

src/java.base/share/classes/sun/security/util/KnownOIDs.java line 147:

> 145: SHAKE128("2.16.840.1.101.3.4.2.11"),
> 146: SHAKE256("2.16.840.1.101.3.4.2.12"),
> 147: SHAKE256_LEN("2.16.840.1.101.3.4.2.18", "SHAKE256-LEN"),

Can we move this down a little? The ordering within the section is based on the
OID value; it's easier to check for unlisted/unsupported OID values this way.
Why not also add SHAKE128_LEN?
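
For illustration, keeping the section ordered by OID value while adding the
SHAKE128 variant might look roughly like this (sketch only; the
2.16.840.1.101.3.4.2.17 value for SHAKE128-LEN is taken from the NIST CSOR
registry, not from the patch):

    SHAKE128("2.16.840.1.101.3.4.2.11"),
    SHAKE256("2.16.840.1.101.3.4.2.12"),
    // ... entries for OIDs between .12 and .17 go here, in OID order ...
    SHAKE128_LEN("2.16.840.1.101.3.4.2.17", "SHAKE128-LEN"), // assumed OID (NIST CSOR)
    SHAKE256_LEN("2.16.840.1.101.3.4.2.18", "SHAKE256-LEN"),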

-

PR: https://git.openjdk.java.net/jdk/pull/322


Re: RFR: 8242068: Signed JAR support for RSASSA-PSS and EdDSA [v6]

2020-10-13 Thread Valerie Peng
On Tue, 13 Oct 2020 13:34:27 GMT, Weijun Wang  wrote:

>> [...]
>
> Weijun Wang has refreshed the contents of this pull request, and previous 
> commits have been removed. The incremental
> views will show differences compared to the previous content of the PR.

src/java.base/share/classes/sun/security/x509/AlgorithmId.java line 113:

> 111: }
> 112:
> 113: public AlgorithmId(ObjectIdentifier oid, DerValue params)

Now that this is public, can we add a javadoc spec for it?

src/java.base/share/classes/sun/security/util/SignatureUtil.java line 50:

> 48:
> 49: /**
> 50:  * Convent OID.1.2.3.4 or 1.2.3.4 to the matched stdName.

Convent => Convert? matched => matching?
Or maybe just "Match OID. to the stdName"

src/java.base/share/classes/sun/security/util/SignatureUtil.java line 53:

> 51:  *
> 52:  * @param algName input, could be in any form
> 53:  * @return the original name is it does not look like an OID,

is => if?

src/java.base/share/classes/sun/security/util/SignatureUtil.java line 54:

> 52:  * @param algName input, could be in any form
> 53:  * @return the original name is it does not look like an OID,
> 54:  * the matched name for an OID, or the OID itself

Maybe use "the OID value"? The term "itself" reminds me of the specified 
argument somehow. Just nit.

src/java.base/share/classes/sun/security/util/SignatureUtil.java line 94:

> 92:  * @return an AlgorithmParameterSpec object
> 93:  * @throws ProviderException
> 94:  */

Well, I am a bit unsure about your changes to this method. With your change, it 
returns default parameter spec (instead
of null) when the specified AlgorithmParameters object is null. This may not be 
desirable for all cases? Existing
callers would have to check for (params != null) before calling this method. 
The javadoc description also seems a bit
strange with the to-be-converted AlgorithmParameters object being optional. 
Maybe add a separate method like
`getParamSpecWithDefault` on top of this method or add a separate boolean 
argument `useDefault`?

src/java.base/share/classes/sun/security/util/SignatureUtil.java line 164:

> 162: }
> 163: } else {
> 164: paramSpec = getDefaultAlgorithmParameterSpec(sigName, null);

Same comment here as for the getParamSpec(String, AlgorithmParameters) above. 
Given that there are two methods named
getParamSpec(...), maybe add a boolean argument to indicate whether to return 
the default value if the to-be-converted
object is null?
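
A rough sketch of the shape such a variant could take (hypothetical code, only
to make the suggestion concrete; it reuses the existing getParamSpec and
getDefaultAlgorithmParameterSpec methods mentioned above):

    // Hypothetical overload: return a default spec only when asked to.
    public static AlgorithmParameterSpec getParamSpec(String sigName,
            AlgorithmParameters params, boolean useDefault)
            throws ProviderException {
        if (params == null) {
            return useDefault
                    ? getDefaultAlgorithmParameterSpec(sigName, null)
                    : null;
        }
        return getParamSpec(sigName, params); // existing conversion path
    }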

src/java.base/share/classes/sun/security/util/SignatureUtil.java line 290:

> 288:  * that must be initialized with a AlgorithmParameterSpec now.
> 289:  */
> 290: public static AlgorithmParameterSpec 
> getDefaultAlgorithmParameterSpec(

Maybe we can just use "getDefaultParamSpec" to match other methods' name in 
this class. Just a suggestion.

src/java.base/share/classes/sun/security/util/SignatureUtil.java line 511:

> 509: }
> 510:
> 511: // Same values for RSA and DSA

Update the comment with "Return the default message digest algorithm with the 
same security strength as the specified
IFC/FFC key size"?

src/java.base/share/classes/sun/security/util/SignatureUtil.java line 499:

> 497: }
> 498:
> 499: // Values from SP800-57 part 1 rev 4 tables 2 and 3

Update the comment with "Return the default message digest algorithm with the 
same security strength as the specified
EC key size"?

src/java.base/share/classes/sun/security/util/SignatureUtil.java line 367:

> 365:  * @param sigEngine the signature object
> 366:  * @param key the private key
> 367:  * @return the AlgorithmIA, not null

IA => Id?

src/java.base/share/classes/sun/security/util/SignatureUtil.java line 268:

> 266:
> 267: /**
> 268:  * Extracts the 

Re: RFR: 8242068: Signed JAR support for RSASSA-PSS and EdDSA [v7]

2020-10-13 Thread Weijun Wang
> [...]

Weijun Wang has updated the pull request incrementally with one additional 
commit since the last revision:

  signing time, jarsigner -directsign, and digest algorithm check

-

Changes:
  - all: https://git.openjdk.java.net/jdk/pull/322/files
  - new: https://git.openjdk.java.net/jdk/pull/322/files/ffaae532..734fd034

Webrevs:
 - full: https://webrevs.openjdk.java.net/?repo=jdk&pr=322&range=06
 - incr: https://webrevs.openjdk.java.net/?repo=jdk&pr=322&range=05-06

  Stats: 53 lines in 5 files changed: 42 ins; 0 del; 11 mod
  Patch: https://git.openjdk.java.net/jdk/pull/322.diff
  Fetch: git fetch https://git.openjdk.java.net/jdk pull/322/head:pull/322

PR: https://git.openjdk.java.net/jdk/pull/322


Re: RFR: 8253952: Refine ZipOutputStream.putNextEntry() to recalculate ZipEntry's compressed size [v6]

2020-10-13 Thread Lance Andersen
On Tue, 13 Oct 2020 19:07:33 GMT, Volker Simonis  wrote:

>> [...]


Re: RFR: 8253952: Refine ZipOutputStream.putNextEntry() to recalculate ZipEntry's compressed size [v4]

2020-10-13 Thread Volker Simonis
On Tue, 13 Oct 2020 18:50:00 GMT, Lance Andersen  wrote:

>> src/java.base/share/classes/java/util/jar/JarOutputStream.java line 87:
>> 
>>> 85:  * 
>>> 86:  * The current time will be used if the entry has no set 
>>> modification
>>> 87:  * time.
>> 
>> I'm happy with the wording. What do you think about putting a p tag before "The 
>> default compression method ..." so that it
>> goes into its own paragraph? I'm asking because it's inconsistent to have 
>> the first paragraph include the details on
>> the compression method and the details on the current time pushed into a 
>> second paragraph - if you generate the javadoc
>> then you'll see what I mean.
>
> I think it is a good idea to add the paragraph tag as you suggest

Done

-

PR: https://git.openjdk.java.net/jdk/pull/520


Re: RFR: 8253952: Refine ZipOutputStream.putNextEntry() to recalculate ZipEntry's compressed size [v4]

2020-10-13 Thread Volker Simonis
On Tue, 13 Oct 2020 17:16:21 GMT, Lance Andersen  wrote:

>> Volker Simonis has updated the pull request with a new target base due to a 
>> merge or a rebase. The incremental webrev
>> excludes the unrelated changes brought in by the merge/rebase.
>
> test/jdk/java/util/zip/CopyZipFile.java line 149:
> 
>> 147: os = new ByteArrayOutputStream();
>> 148: zos = new ZipOutputStream(os);
>> 149: ZipFile zf = new ZipFile(ZIP_FILE);
> 
> Any reason to not use try with resources here as well (and below)

Done

-

PR: https://git.openjdk.java.net/jdk/pull/520
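
For reference, the try-with-resources form suggested above looks like this
(sketch only; ZIP_FILE and the output stream come from the surrounding test):

    ByteArrayOutputStream os = new ByteArrayOutputStream();
    try (ZipFile zf = new ZipFile(ZIP_FILE);
         ZipOutputStream zos = new ZipOutputStream(os)) {
        // copy entries here; both resources are closed automatically,
        // even if an exception is thrown
    }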


Re: RFR: 8253952: Refine ZipOutputStream.putNextEntry() to recalculate ZipEntry's compressed size [v6]

2020-10-13 Thread Volker Simonis
> ### Summary
> 
> Work around wrong usage of `ZipOutputStream.putNextEntry()` in user code 
> which can lead to the `ZipException "invalid
> entry compressed size"`.
> ### Motivation
> 
> In general it is not safe to directly write a ZipEntry obtained from 
> `ZipInputStream.getNextEntry()`,
> `ZipFile.entries()`, `ZipFile.getEntry()` or `ZipFile.stream()` with 
> `ZipOutputStream.putNextEntry()` to a
> `ZipOutputStream` and then read the entry's data from the `ZipInputStream` 
> and write it to the `ZipOutputStream` as
> follows:
>  ZipEntry entry;
>  ZipInputStream zis = new ZipInputStream(...);
>  ZipOutputStream zos = new ZipOutputStream(...);
>  while((entry = zis.getNextEntry()) != null) {
>  zos.putNextEntry(entry);
>  zis.transferTo(zos);
>  }
> The problem with this code is that the zip file format does not record the 
> compression level used for deflation in its
> entries. In general, it doesn't even mandate a predefined compression ratio 
> per compression level. Therefore the
> compressed size recorded in a `ZipEntry` read from a zip file might differ 
> from the new compressed size produced by the
> receiving `ZipOutputStream`. Such a difference will result in a 
> `ZipException` with the following message:
>  java.util.zip.ZipException: invalid entry compressed size (expected 12 but 
> got 7 bytes)
>  
> The correct way of copying all entries from one zip file into another 
> requires the creation of a new `ZipEntry` or at
> least resetting the compressed size field. E.g.:
>  while((entry = zis.getNextEntry()) != null) {
>  ZipEntry newEntry = new ZipEntry(entry.getName());
>  zos.putNextEntry(newEntry);
>  zis.transferTo(zos);
>  }
> or:
>  while((entry = zis.getNextEntry()) != null) {
>  entry.setCompressedSize(-1);
>  zos.putNextEntry(entry);
>  zis.transferTo(zos);
>  }
> Unfortunately, there's a lot of user code out there which gets this wrong and 
> uses the bad coding pattern described
> before. Searching for `"java.util.zip.ZipException: invalid entry compressed 
> size (expected 12 but got 7 bytes)"` gives
> ~2500 hits (~100 on StackOverflow). It's also not hard to find plenty of 
> instances of this anti-pattern on GitHub when
> doing a code search for `ZipEntry` and `putNextEntry()`. E.g. [Gradle 4.x 
> wrapper task][1] is affected as well as the
> latest version of the [mockableAndroidJar task][2]. I've recently fixed two 
> occurrences of this pattern in OpenJDK (see
> [JDK-8240333][3] and [JDK-8240235][4]) but more of them still exist (e.g.
> [`test/jdk/java/util/zip/ZipFile/CopyJar.java`][5], which has been there
> since 1999 :).
> 
> ### Description
> 
> So while this has
> clearly been a problem before, it apparently wasn't painful enough to trigger 
> any action from the side of the JDK.
> However, quite a few zlib forks with [superior deflate/inflate
> performance have evolved][6] recently. Using them with
> OpenJDK is quite straightforward: one just has to configure the alternative 
> implementations by setting
> `LD_LIBRARY_PATH` or `LD_PRELOAD` accordingly. We've seen big savings by 
> using these new zlib implementations for
> selected services in production and the only reason why we haven't enabled 
> them by default until now is the problem
> I've just described. The reason why these new libraries uncover the described 
> anti-pattern much more often is that
> their compression ratio is slightly different from that of the default zlib 
> library. This can easily trigger a
> `ZipException` even if an application is not using different compression
> levels but just a zip file created with
> another zlib version.  I'd therefore like to propose the following workaround 
> for the wrong
> `ZipOutputStream.putNextEntry()` usage in user code:
> - Ignore the compressed size if it was implicitly determined from the zip
>   file and not explicitly set by calling `ZipEntry.setCompressedSize()`.
> 
> - Change the API-documentation of `ZipOutputStream.putNextEntry()` and 
> `JarOutputStream.putNextEntry()` to explain the
>   problem and why `putNextEntry()` will ignore the compressed size of a 
> `ZipEntry` if that was set implicitly when
>   reading that entry from a `ZipFile` or `ZipInputStream`.
> 
> 
> ### Technical Details
> 
> A zip file consists of a stream of File Entries followed by a Central 
> Directory (see [here for a more detailed
> specification][7]). Each File Entry is composed of a Local File Header (LFH) 
> followed by the compressed Data and an
> optional Data Descriptor. The LFH contains the File Name and among other 
> attributes the Compressed and Uncompressed
> size and CRC of the Data. In the case where the latter three attributes are 
> not available at the time when the LFH is
> created, this fact will be recorded in a flag of the LFH and will trigger the 
> creation of a Data Descriptor with the
> corresponding information right after the Data section. Finally, the Central 
> Directory contains one Central 
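
The flag-driven layout described above is visible through the API: when an
entry's sizes live in the Data Descriptor rather than the LFH, ZipInputStream
cannot know them up front. A small sketch (hypothetical input file name,
assumed to contain entries written in streaming mode):

    try (ZipInputStream zis =
            new ZipInputStream(new FileInputStream("streamed.zip"))) {
        ZipEntry e = zis.getNextEntry();
        // Sizes and CRC are stored in the Data Descriptor after the data,
        // so they are still unknown at this point.
        System.out.println(e.getCompressedSize());   // prints -1
        zis.transferTo(OutputStream.nullOutputStream());
        // Once the entry data has been consumed, the values from the
        // Data Descriptor have been filled in.
        System.out.println(e.getCompressedSize());   // actual compressed size
    }
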

Re: RFR: 8253952: Refine ZipOutputStream.putNextEntry() to recalculate ZipEntry's compressed size [v5]

2020-10-13 Thread Volker Simonis
> [...]

Re: RFR: 8253952: Refine ZipOutputStream.putNextEntry() to recalculate ZipEntry's compressed size [v4]

2020-10-13 Thread Alan Bateman
On Tue, 13 Oct 2020 11:40:36 GMT, Volker Simonis  wrote:

>> [...]

Re: RFR: 8253952: Refine ZipOutputStream.putNextEntry() to recalculate ZipEntry's compressed size [v4]

2020-10-13 Thread Lance Andersen
On Tue, 13 Oct 2020 18:46:59 GMT, Alan Bateman  wrote:

>> Volker Simonis has updated the pull request with a new target base due to a 
>> merge or a rebase. The pull request now
>> contains one commit:
>>   8253952: Refine ZipOutputStream.putNextEntry() to recalculate ZipEntry's 
>> compressed size
>
> src/java.base/share/classes/java/util/jar/JarOutputStream.java line 85:
> 
>> 83:  * ZipEntry#setCompressedSize(long)} method, then the compressed
>> 84:  * size will be set to the actual compressed size after deflation.
>> 85:  * 
> 
> I'm happy with the wording. What do you think about putting a p tag before "The 
> default compression method ..." so that it
> goes into its own paragraph? I'm asking because it's inconsistent to have the 
> first paragraph include the details on
> the compression method and the details on the current time pushed into a 
> second paragraph - if you generate the javadoc
> then you'll see what I mean.

I think it is a good idea to add the paragraph tag as you suggest

-

PR: https://git.openjdk.java.net/jdk/pull/520


RFR CSR: JDK-8254709 (Support for EdDSA signature scheme in JSSE)

2020-10-13 Thread Jamil Nimeh

Hi Folks,

I just put out the draft CSR for the RFE that adds EdDSA support in 
JSSE.  If anyone has some spare cycles to review this I'd appreciate it.


https://bugs.openjdk.java.net/browse/JDK-8254709

Thanks,

--Jamil



Re: RFR: 8253952: Refine ZipOutputStream.putNextEntry() to recalculate ZipEntry's compressed size [v4]

2020-10-13 Thread Lance Andersen
On Tue, 13 Oct 2020 11:40:36 GMT, Volker Simonis  wrote:

>> [...]

Re: RFR: 8253952: Refine ZipOutputStream.putNextEntry() to recalculate ZipEntry's compressed size [v4]

2020-10-13 Thread Lance Andersen
On Wed, 7 Oct 2020 17:04:02 GMT, Lance Andersen  wrote:

>> I totally agree. What about using just the last sentence (as you've
>> proposed) in the spec section and adding the other as
>> an @implNote? Or do you think the last sentence will be enough?
>
> I think we can just go with the last sentence/paragraph. Perhaps we can
> further simplify the paragraph/sentence with something like:
> The compressed entry size will be recalculated for compressed (DEFLATED)
> entries when ZipEntry::setCompressedSize has
> not been explicitly called on the ZipEntry.
> or
> 
> The compressed (DEFLATED) entry size will be recalculated when
> ZipEntry::setCompressedSize has not been explicitly
> called on the ZipEntry.

I think the wording looks much better now

-

PR: https://git.openjdk.java.net/jdk/pull/520


Integrated: 8229867: Re-examine synchronization usages in http and https protocol handlers

2020-10-13 Thread Daniel Fuchs
On Thu, 8 Oct 2020 11:22:09 GMT, Daniel Fuchs  wrote:

> Hi,
> 
> This is a fix that upgrades the old HTTP and HTTPS legacy stack to use 
> virtual-thread friendly locking instead of
> synchronized monitors.
> Most of the changes are mechanical - but there are still a number of subtle 
> non-mechanical differences that are
> outlined below:
> 1. src/java.base/share/classes/sun/net/www/MessageHeader.java:
>`MessageHeader::print(PrintStream)` => synchronization modified to not 
> synchronize on this while printing
> (a snapshot of the data is taken before printing instead)
> 
> 2. src/java.base/share/classes/sun/net/www/MeteredStream.java:
> `MeteredStream::close` was missing synchronization: it is now protected 
> by the lock
> 
> 3. src/java.base/share/classes/sun/net/www/http/ChunkedOutputStream.java:
> `ChunkedOutputStream` no longer extends `PrintStream` but extends 
> `OutputStream` directly.
> Extending `PrintStream` is problematic for virtual threads. After careful 
> analysis, it appeared that
> `ChunkedOutputStream` didn't really need to extend `PrintStream`. 
> `ChunkedOutputStream`
> is already wrapping a `PrintStream`.  `ChunkedOutputStream` is never 
> returned directly to user
> code but is wrapped in another stream. `ChunkedOutputStream` completely
> reimplements and reverses the flush logic implemented by its old super
> class `PrintStream`, which leads me to believe
> there was no real reason for it to extend `PrintStream` - except for 
> being able to call its `checkError`
> method - which can be done by using `instanceof ChunkedOutputStream` in 
> the caller instead of
> casting to PrintStream.
> 
> 4. src/java.base/share/classes/sun/net/www/http/HttpClient.java:
> Synchronization removed from `HttpClient::privilegedOpenServer` and 
> replaced by an `assert`.
> There is no need to synchronize here as the method is only called 
> from a method that already
> holds the lock.
> 
> 5. src/java.base/share/classes/sun/net/www/http/KeepAliveCache.java:
> Synchronization removed from `KeepAliveCache::removeVector` and replaced 
> by an `assert`.
> This method is only called from methods already protected by the lock.
> Also the method has been made private.
> 
> 6. src/java.base/share/classes/sun/net/www/http/KeepAliveStream.java
> `queuedForCleanup` is made volatile as it is read and written directly 
> from outside the class
> and without protection by the `KeepAliveCleanerEntry`.
> Lock protection is also added to `close()`, which was missing it.
> Some methods that have no lock protection and did not need it because they
> are only called from
> within code blocks protected by the lock have acquired an `assert
> isLockHeldByCurrentThread();`
> 
> 7. Concrete subclasses of `AuthenticationInfo` that provide an implementation 
> for
> `AuthenticationInfo::setHeaders(HttpURLConnection conn, HeaderParser p, 
> String raw)` have
> acquired an `assert conn.isLockHeldByCurrentThread();` as the method 
> should only be called
> from within a lock-protected block in `s.n.w.p.h.HttpURLConnection`
> 
> 8. 
> src/java.base/share/classes/sun/net/www/protocol/http/HttpURLConnection.java
> Several methods in this class have acquired an `assert 
> isLockHeldByCurrentThread();`
> to help track the fact that AuthenticationInfo::setHeaders is only called 
> while
> holding the lock.
> Synchronization was also removed from some methods that didn't need it 
> because they are only
> called from within code blocks protected by the lock:
> `getOutputStream0()`: synchronization removed and replaced by an `assert 
> isLockHeldByCurrentThread();`
> `setCookieHeader()`:  synchronization removed and replaced by an `assert 
> isLockHeldByCurrentThread();`
> `getInputStream0()`:  synchronization removed and replaced by an `assert 
> isLockHeldByCurrentThread();`
> `StreamingOutputStream`: small change to accommodate point 3. above 
> (changes in ChunkedOutputStream).
> 
> 9. 
> src/java.base/share/classes/sun/net/www/protocol/http/NegotiateAuthentication.java.:
> synchronization removed from `setHeaders` and replaced by an assert. See 
> point 7. above.
> 
> 10. 
> src/java.base/unix/classes/sun/net/www/protocol/http/ntlm/NTLMAuthentication.java:
> synchronization removed from `setHeaders` and replaced by an assert. See 
> point 7. above.
> 
> 11. 
> src/java.base/windows/classes/sun/net/www/protocol/http/ntlm/NTLMAuthentication.java:
> synchronization removed from `setHeaders` and replaced by an assert. See 
> point 7. above.
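
For illustration, the lock-plus-assert pattern described in points 4. to 8. above looks roughly like this (a minimal sketch with hypothetical class and method names, not the actual patch):

    import java.util.concurrent.locks.ReentrantLock;

    class Connection {                          // hypothetical class
        private final ReentrantLock lock = new ReentrantLock();

        boolean isLockHeldByCurrentThread() {
            return lock.isHeldByCurrentThread();
        }

        void connect() {                        // public entry point takes the lock
            lock.lock();
            try {
                openServer();                   // helpers are called under the lock
            } finally {
                lock.unlock();
            }
        }

        private void openServer() {
            // formerly synchronized; now merely asserts the caller holds the lock
            assert isLockHeldByCurrentThread();
            // ... open the connection ...
        }
    }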

This pull request has now been integrated.

Changeset: 65393a09
Author: Daniel Fuchs 
URL:   https://git.openjdk.java.net/jdk/commit/65393a09
Stats: 1126 lines in 18 files changed: 558 ins; 156 del; 412 mod

8229867: Re-examine synchronization usages in http and https protocol handlers

Reviewed-by: chegar, alanb, michaelm

-

PR: 

Re: RFR: 8242068: Signed JAR support for RSASSA-PSS and EdDSA [v6]

2020-10-13 Thread Weijun Wang
On Sun, 4 Oct 2020 14:09:26 GMT, Weijun Wang  wrote:

>> Changes requested by alanb (Reviewer).
>
> Note: I force pushed a new commit to correct a typo in the summary line.

Add support for [RFC 6211: Cryptographic Message Syntax (CMS) Algorithm 
Identifier Protection
Attribute](https://tools.ietf.org/html/rfc6211) to protect against algorithm 
substitution attacks. This attribute is
signed and it contains copies of digestAlgorithm and signatureAlgorithm, which 
are unprotected in SignerInfo. Before
this enhancement, the two algorithms could be inferred from the signature itself 
(i.e. if you change either of them the
signature size would not match or the key would not decrypt). However, with the 
introduction of RSASSA-PSS, the
signature algorithm can be modified and it still looks like the signature is 
valid. This particular case is [described
in the RFC](https://tools.ietf.org/html/rfc6211#page-5):

  signatureAlgorithm has been protected by implication in the past.
  The use of an RSA public key implied that the RSA v1.5 signature
  algorithm was being used.  The hash algorithm and this fact could
  be checked by the internal padding defined.  This is no longer
  true with the addition of the RSA-PSS signature algorithms.
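
For reference, RFC 6211 defines the attribute value as the following ASN.1 structure (copied from the RFC; exactly one of the two optional algorithm fields must be present):

    CMSAlgorithmProtection ::= SEQUENCE {
        digestAlgorithm         DigestAlgorithmIdentifier,
        signatureAlgorithm  [1] SignatureAlgorithmIdentifier OPTIONAL,
        macAlgorithm        [2] MessageAuthenticationCodeAlgorithm OPTIONAL
    }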

-

PR: https://git.openjdk.java.net/jdk/pull/322


Re: RFR: 8242068: Signed JAR support for RSASSA-PSS and EdDSA [v6]

2020-10-13 Thread Weijun Wang
> Major points in CSR at https://bugs.openjdk.java.net/browse/JDK-8245274:
> 
> - new sigalg "RSASSA-PSS", "EdDSA", "Ed25519" and "Ed448" can be used in 
> jarsigner
> 
> - The ".RSA" and ".EC" block extension types (PKCS #7 SignedData inside a 
> signed JAR) are reused for new signature
>   algorithms
> 
> - A new JarSigner property "directsign"
> 
> - Updating the jarsigner tool doc
> 
> Major code changes:
> 
> - Always use the signature algorithm directly as 
> SignerInfo::signatureAlgorithm. We used to use the encryption algorithm
>   there like RSA, DSA, and EC. Now it's always SHA1withRSA or RSASSA-PSS.
> 
> - Move signature-related utility methods from AlgorithmId.java to 
> SignatureUtil.java
> 
> - Add new SignatureUtil methods fromKey() and fromSignature() to simplify 
> creating Signature and getting its AlgorithmId
> 
> - Use the new methods in PKCS10, X509CertImpl, and X509CRLImpl signing
> 
> - Add a new (and intuitive, IMHO) PKCS7::generateNewSignedData capable of 
> handling all old and new signature algorithms
> 
> - Mark all -altsign related code deprecated; it can be removed once 
> ContentSigner is removed
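
As a usage illustration, the new signature algorithms and the `directsign` property could be exercised through the `jdk.security.jarsigner.JarSigner` API roughly as follows (a sketch only: loading of the key material is elided, the file names are made up, and the exact semantics of `directsign` are as proposed in the CSR above):

    import jdk.security.jarsigner.JarSigner;
    import java.io.OutputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.security.PrivateKey;
    import java.security.cert.CertPath;
    import java.util.zip.ZipFile;

    class SignWithPss {
        // privateKey and certPath are assumed to come from an existing keystore
        static void sign(PrivateKey privateKey, CertPath certPath) throws Exception {
            JarSigner signer = new JarSigner.Builder(privateKey, certPath)
                    .signatureAlgorithm("RSASSA-PSS")   // one of the new sigalgs
                    .setProperty("directsign", "true")  // new property from this PR
                    .build();
            try (ZipFile in = new ZipFile("unsigned.jar");
                 OutputStream out = Files.newOutputStream(Path.of("signed.jar"))) {
                signer.sign(in, out);
            }
        }
    }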

Weijun Wang has refreshed the contents of this pull request, and previous 
commits have been removed. The incremental
views will show differences compared to the previous content of the PR. The 
pull request contains one new commit since
the last revision:

  support rfc6211 CMSAlgorithmProtection

-

Changes:
  - all: https://git.openjdk.java.net/jdk/pull/322/files
  - new: https://git.openjdk.java.net/jdk/pull/322/files/81513907..ffaae532

Webrevs:
 - full: https://webrevs.openjdk.java.net/?repo=jdk&pr=322&range=05
 - incr: https://webrevs.openjdk.java.net/?repo=jdk&pr=322&range=04-05

  Stats: 0 lines in 0 files changed: 0 ins; 0 del; 0 mod
  Patch: https://git.openjdk.java.net/jdk/pull/322.diff
  Fetch: git fetch https://git.openjdk.java.net/jdk pull/322/head:pull/322

PR: https://git.openjdk.java.net/jdk/pull/322


Re: RFR: 8242068: Signed JAR support for RSASSA-PSS and EdDSA [v6]

2020-10-13 Thread Weijun Wang
On Tue, 13 Oct 2020 13:24:50 GMT, Weijun Wang  wrote:

>> Note: I force pushed a new commit to correct a typo in the summary line.
>
> Add support for [RFC 6211: Cryptographic Message Syntax (CMS) Algorithm 
> Identifier Protection
> Attribute](https://tools.ietf.org/html/rfc6211) to protect against algorithm 
> substitution attacks. This attribute is
> signed and it contains copies of digestAlgorithm and signatureAlgorithm, which 
> are unprotected in SignerInfo. Before
> this enhancement, the two algorithms could be inferred from the signature itself 
> (i.e. if you change either of them the
> signature size would not match or the key would not decrypt). However, with 
> the introduction of RSASSA-PSS, the
> signature algorithm can be modified and it still looks like the signature is 
> valid. This particular case is [described
> in the RFC](https://tools.ietf.org/html/rfc6211#page-5):
>   signatureAlgorithm has been protected by implication in the past.
>   The use of an RSA public key implied that the RSA v1.5 signature
>   algorithm was being used.  The hash algorithm and this fact could
>   be checked by the internal padding defined.  This is no longer
>   true with the addition of the RSA-PSS signature algorithms.

A force push to fix the RFC number typo in the latest commit. No content update.

-

PR: https://git.openjdk.java.net/jdk/pull/322


RFR: 8254231: Implementation of Foreign Linker API (Incubator)

2020-10-13 Thread Maurizio Cimadamore
This patch contains the changes associated with the first incubation round of 
the foreign linker API
(see JEP 389 [1]). This work is meant to sit on top of the foreign memory 
access support (see JEP 393 [2] and
associated pull request [3]).

The main goal of this API is to provide a way to call native functions from 
Java code without the need of intermediate
JNI glue code. In order to do this, native calls are modeled through the 
MethodHandle API. I suggest reading the
writeup [4] I put together few weeks ago, which illustrates what the foreign 
linker support is, and how it should be
used by clients.

Disclaimer: the pull request mechanism isn't great at managing *dependent* 
reviews. For this reason, I'm attaching a
webrev which contains only the differences between this PR and the memory 
access PR. I will be periodically uploading
new webrevs, as new iterations come out, to try and make the life of reviewers 
as simple as possible.

A big thanks to Jorn Vernee and Vladimir Ivanov - they are the main architects 
of all the hotspot changes you see here,
and without their help, the foreign linker support wouldn't be what it is 
today. As usual, a big thanks to Paul Sandoz,
who provided many insights (often by trying the bits first hand).

Thanks
Maurizio

Webrev:
http://cr.openjdk.java.net/~mcimadamore/8254231_v1/webrev

Javadoc:

http://cr.openjdk.java.net/~mcimadamore/8254231_v1/javadoc/jdk/incubator/foreign/package-summary.html

Specdiff (relative to [3]):

http://cr.openjdk.java.net/~mcimadamore/8254231_v1/specdiff_delta/overview-summary.html

CSR:

https://bugs.openjdk.java.net/browse/JDK-8254232



### API Changes

The API changes are actually rather slim:

* `LibraryLookup`
  * This class allows clients to look up symbols in native libraries; the 
interface is fairly simple; you can load a library
by name, or absolute path, and then look up symbols in that library.
* `FunctionDescriptor`
  * This is an abstraction that is very similar, in spirit, to `MethodType`; it 
is, at its core, an aggregate of memory
layouts for the function arguments/return type. A function descriptor is 
used to describe the signature of a native
function.
* `CLinker`
  * This is the real star of the show. A `CLinker` has two main methods: 
`downcallHandle` and `upcallStub`; the first takes
a native symbol (as obtained from `LibraryLookup`), a `MethodType` and a 
`FunctionDescriptor` and returns a
`MethodHandle` instance which can be used to call the target native symbol. 
The second takes an existing method handle,
and a `FunctionDescriptor` and returns a new `MemorySegment` corresponding 
to a code stub allocated by the VM which
acts as a trampoline from native code to the user-provided method handle. 
This is very useful for implementing upcalls (see the sketch after this list).
   * This class also contains the various layout constants that should be used 
by clients when describing native signatures
 (e.g. `C_LONG` and friends); these layouts contain additional ABI 
classification information (in the form of layout
 attributes) which is used by the runtime to *infer* how Java arguments 
should be shuffled for the native call to take
 place.
  * Finally, this class provides some helper functions e.g. so that clients can 
convert Java strings into C strings and
back.
* `NativeScope`
  * This is a helper class which allows clients to group together logically 
related allocations; that is, rather than
allocating separate memory segments using separate *try-with-resource* 
constructs, a `NativeScope` allows clients to
use a _single_ block, and allocate all the required segments there. This is 
not only a usability boost, but also a
performance boost, since not all allocation requests will be turned into 
`malloc` calls.
* `MemorySegment`
  * Only one method added here - namely `handoff(NativeScope)` which allows a 
segment to be transferred onto an existing
native scope.
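
To see how these pieces fit together, here is a rough sketch of a downcall to the C `strlen` function (based on the description above and the writeup [4]; the exact incubating signatures may differ, and the example assumes the VM is started with `-Dforeign.restricted=permit`):

    import jdk.incubator.foreign.*;
    import java.lang.invoke.MethodHandle;
    import java.lang.invoke.MethodType;

    class StrlenExample {
        public static void main(String[] args) throws Throwable {
            // look up the native symbol in the default libraries
            LibraryLookup.Symbol strlen =
                    LibraryLookup.ofDefault().lookup("strlen").get();

            // bind it to a method handle: long strlen(char*)
            MethodHandle handle = CLinker.getInstance().downcallHandle(
                    strlen,
                    MethodType.methodType(long.class, MemoryAddress.class),
                    FunctionDescriptor.of(CLinker.C_LONG, CLinker.C_POINTER));

            // allocate a C string in a NativeScope and invoke the downcall
            try (NativeScope scope = NativeScope.unboundedScope()) {
                MemorySegment cString = CLinker.toCString("hello", scope);
                long len = (long) handle.invokeExact(cString.address());
                System.out.println(len);   // prints 5
            }
        }
    }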

### Safety

The foreign linker API is intrinsically unsafe; many things can go wrong when 
requesting a native method handle. For
instance, the description of the native signature might be wrong (e.g. have too 
many arguments) - and the runtime has,
in the general case, no way to detect such mismatches. For these reasons, 
obtaining a `CLinker` instance is
a *restricted* operation, which can be enabled by specifying the usual JDK 
property `-Dforeign.restricted=permit` (as
is the case for other restricted methods in the foreign memory API).

### Implementation changes

The Java changes associated with `LibraryLookup` are relatively straightforward; 
the only interesting thing to note here
is that library loading does _not_ depend on class loaders, so `LibraryLookup` 
is not subject to the same restrictions
which apply to JNI library loading (e.g. the same library cannot be loaded by 
different classloaders).

As for `NativeScope`, the changes are again relatively straightforward; it is an 
API which sits neatly on 

Re: RFR: 8229867: Re-examine synchronization usages in http and https protocol handlers [v3]

2020-10-13 Thread Chris Hegarty
On Mon, 12 Oct 2020 13:50:30 GMT, Daniel Fuchs  wrote:

>> Hi,
>> 
>> This is a fix that upgrades the old HTTP and HTTPS legacy stack to use 
>> virtual-thread friendly locking instead of
>> synchronized monitors.
>> Most of the changes are mechanical - but there are still a number of subtle 
>> non-mechanical differences that are
>> outlined below:
>> 1. src/java.base/share/classes/sun/net/www/MessageHeader.java:
>>`MessageHeader::print(PrintStream)` => synchronization modified to not 
>> synchronize on this while printing
>>(a snapshot of the data is taken before printing instead)
>> 
>> 2. src/java.base/share/classes/sun/net/www/MeteredStream.java:
>> `MeteredStream::close` was missing synchronization: it is now protected 
>> by the lock
>> 
>> 3. src/java.base/share/classes/sun/net/www/http/ChunkedOutputStream.java:
>> `ChunkedOutputStream` no longer extends `PrintStream` but extends 
>> `OutputStream` directly.
>> Extending `PrintStream` is problematic for virtual threads. After careful 
>> analysis, it appeared that
>> `ChunkedOutputStream` didn't really need to extend `PrintStream`. 
>> `ChunkedOutputStream`
>> is already wrapping a `PrintStream`.  `ChunkedOutputStream` is never 
>> returned directly to user
>> code but is wrapped in another stream. `ChunkedOutputStream` completely 
>> reimplements and
>> reverses the flush logic implemented by its old superclass `PrintStream`, 
>> which leads me to believe
>> there was no real reason for it to extend `PrintStream` - except for 
>> being able to call its `checkError`
>> method - which can be done by using `instanceof ChunkedOutputStream` in 
>> the caller instead of
>> casting to `PrintStream`.
>> 
>> 4. src/java.base/share/classes/sun/net/www/http/HttpClient.java:
>> Synchronization removed from `HttpClient::privilegedOpenServer` and 
>> replaced by an `assert`.
>> There is no need for synchronization here as the method is only called 
>> from a method that already
>> holds the lock.
>> 
>> 5. src/java.base/share/classes/sun/net/www/http/KeepAliveCache.java:
>> Synchronization removed from `KeepAliveCache::removeVector` and replaced 
>> by an `assert`.
>> This method is only called from methods already protected by the lock.
>> Also the method has been made private.
>> 
>> 6. src/java.base/share/classes/sun/net/www/http/KeepAliveStream.java
>> `queuedForCleanup` is made volatile as it is read and written directly 
>> from outside the class
>> and without protection by the `KeepAliveCleanerEntry`.
>> Lock protection is also added to `close()`, which was missing it.
>> Some methods that have no lock protection and did not need it, because 
>> they are only called from
>> within code blocks protected by the lock, have acquired an `assert 
>> isLockHeldByCurrentThread();`
>> 
>> 7. Concrete subclasses of `AuthenticationInfo` that provide an 
>> implementation for
>> `AuthenticationInfo::setHeaders(HttpURLConnection conn, HeaderParser p, 
>> String raw)` have
>> acquired an `assert conn.isLockHeldByCurrentThread();` as the method 
>> should only be called
>> from within a lock-protected block in `s.n.w.p.h.HttpURLConnection`
>> 
>> 8. 
>> src/java.base/share/classes/sun/net/www/protocol/http/HttpURLConnection.java
>> Several methods in this class have acquired an `assert 
>> isLockHeldByCurrentThread();`
>> to help track the fact that AuthenticationInfo::setHeaders is only 
>> called while
>> holding the lock.
>> Synchronization was also removed from some methods that didn't need it 
>> because they are only
>> called from within code blocks protected by the lock:
>> `getOutputStream0()`: synchronization removed and replaced by an `assert 
>> isLockHeldByCurrentThread();`
>> `setCookieHeader()`:  synchronization removed and replaced by an `assert 
>> isLockHeldByCurrentThread();`
>> `getInputStream0()`:  synchronization removed and replaced by an `assert 
>> isLockHeldByCurrentThread();`
>> `StreamingOutputStream`: small change to accommodate point 3. above 
>> (changes in ChunkedOutputStream).
>> 
>> 9. 
>> src/java.base/share/classes/sun/net/www/protocol/http/NegotiateAuthentication.java.:
>> synchronization removed from `setHeaders` and replaced by an assert. See 
>> point 7. above.
>> 
>> 10. 
>> src/java.base/unix/classes/sun/net/www/protocol/http/ntlm/NTLMAuthentication.java:
>> synchronization removed from `setHeaders` and replaced by an assert. See 
>> point 7. above.
>> 
>> 11. 
>> src/java.base/windows/classes/sun/net/www/protocol/http/ntlm/NTLMAuthentication.java:
>> synchronization removed from `setHeaders` and replaced by an assert. See 
>> point 7. above.
>
> Daniel Fuchs has updated the pull request with a new target base due to a 
> merge or a rebase. The incremental webrev
> excludes the unrelated changes brought in by the merge/rebase. The pull 
> request contains four additional commits since
> the last revision:

Re: RFR: 8242068: Signed JAR support for RSASSA-PSS and EdDSA [v5]

2020-10-13 Thread Weijun Wang
> Major points in CSR at https://bugs.openjdk.java.net/browse/JDK-8245274:
> 
> - new sigalg "RSASSA-PSS", "EdDSA", "Ed25519" and "Ed448" can be used in 
> jarsigner
> 
> - The ".RSA" and ".EC" block extension types (PKCS #7 SignedData inside a 
> signed JAR) are reused for new signature
>   algorithms
> 
> - A new JarSigner property "directsign"
> 
> - Updating the jarsigner tool doc
> 
> Major code changes:
> 
> - Always use the signature algorithm directly as 
> SignerInfo::signatureAlgorithm. We used to use the encryption algorithm
>   there like RSA, DSA, and EC. Now it's always SHA1withRSA or RSASSA-PSS.
> 
> - Move signature-related utility methods from AlgorithmId.java to 
> SignatureUtil.java
> 
> - Add new SignatureUtil methods fromKey() and fromSignature() to simplify 
> creating Signature and getting its AlgorithmId
> 
> - Use the new methods in PKCS10, X509CertImpl, and X509CRLImpl signing
> 
> - Add a new (and intuitive, IMHO) PKCS7::generateNewSignedData capable of 
> handling all old and new signature algorithms
> 
> - Mark all -altsign related code deprecated; it can be removed once 
> ContentSigner is removed

Weijun Wang has updated the pull request incrementally with one additional 
commit since the last revision:

  support rfc5652 CMSAlgorithmProtection

-

Changes:
  - all: https://git.openjdk.java.net/jdk/pull/322/files
  - new: https://git.openjdk.java.net/jdk/pull/322/files/fecf30d7..81513907

Webrevs:
 - full: https://webrevs.openjdk.java.net/?repo=jdk&pr=322&range=04
 - incr: https://webrevs.openjdk.java.net/?repo=jdk&pr=322&range=03-04

  Stats: 72 lines in 5 files changed: 65 ins; 0 del; 7 mod
  Patch: https://git.openjdk.java.net/jdk/pull/322.diff
  Fetch: git fetch https://git.openjdk.java.net/jdk pull/322/head:pull/322

PR: https://git.openjdk.java.net/jdk/pull/322


Re: RFR: 8253952: Refine ZipOutputStream.putNextEntry() to recalculate ZipEntry's compressed size [v3]

2020-10-13 Thread Volker Simonis
On Mon, 12 Oct 2020 15:42:45 GMT, Lance Andersen  wrote:

>> Volker Simonis has refreshed the contents of this pull request, and previous 
>> commits have been removed. The incremental
>> views will show differences compared to the previous content of the PR. The 
>> pull request contains one new commit since
>> the last revision:
>>   8253952: Refine ZipOutputStream.putNextEntry() to recalculate ZipEntry's 
>> compressed size
>
> I think we are getting close. Once the javadoc is cleaned up along with 
> the minor changes recommended, we should be
> in good shape and can then review the CSR.

I hope I've addressed all your comments and suggestions with the latest PR. I've 
also updated the CSR to reflect the
latest version of the API changes from the PR. Please have a look.

Thank you and best regards,
Volker

-

PR: https://git.openjdk.java.net/jdk/pull/520


Re: RFR: 8253952: Refine ZipOutputStream.putNextEntry() to recalculate ZipEntry's compressed size [v4]

2020-10-13 Thread Volker Simonis
> ### Summary
> 
> Work around wrong usage of `ZipOutputStream.putNextEntry()` in user code 
> which can lead to the `ZipException "invalid
> entry compressed size"`.
> ### Motivation
> 
> In general it is not safe to directly write a ZipEntry obtained from 
> `ZipInputStream.getNextEntry()`,
> `ZipFile.entries()`, `ZipFile.getEntry()` or `ZipFile.stream()` with 
> `ZipOutputStream.putNextEntry()` to a
> `ZipOutputStream` and then read the entries data from the `ZipInputStream` 
> and write it to the `ZipOutputStream` as
> follows:
>  ZipEntry entry;
>  ZipInputStream zis = new ZipInputStream(...);
>  ZipOutputStream zos = new ZipOutputStream(...);
>  while((entry = zis.getNextEntry()) != null) {
>  zos.putNextEntry(entry);
>  zis.transferTo(zos);
>  }
> The problem with this code is that the zip file format does not record the 
> compression level used for deflation in its
> entries. In general, it doesn't even mandate a predefined compression ratio 
> per compression level. Therefore the
> compressed size recorded in a `ZipEntry` read from a zip file might differ 
> from the new compressed size produced by the
> receiving `ZipOutputStream`. Such a difference will result in a 
> `ZipException` with the following message:
>  java.util.zip.ZipException: invalid entry compressed size (expected 12 but 
> got 7 bytes)
>  
> The correct way of copying all entries from one zip file into another 
> requires the creation of a new `ZipEntry` or at
> least resetting of the compressed size field. E.g.:
>  while((entry = zis.getNextEntry()) != null) {
>  ZipEntry newEntry = new ZipEntry(entry.getName());
>  zos.putNextEntry(newEntry);
>  zis.transferTo(zos);
>  }
> or:
>  while((entry = zis.getNextEntry()) != null) {
>  entry.setCompressedSize(-1);
>  zos.putNextEntry(entry);
>  zis.transferTo(zos);
>  }
> Unfortunately, there's a lot of user code out there which gets this wrong and 
> uses the bad coding pattern described
> before. Searching for `"java.util.zip.ZipException: invalid entry compressed 
> size (expected 12 but got 7 bytes)"` gives
> ~2500 hits (~100 on StackOverflow). It's also not hard to find plenty of 
> instances of this anti-pattern on GitHub when
> doing a code search for `ZipEntry` and `putNextEntry()`. E.g. [Gradle 4.x 
> wrapper task][1] is affected as well as the
> latest version of the [mockableAndroidJar task][2]. I've recently fixed two 
> occurrences of this pattern in OpenJDK (see
> [JDK-8240333][3] and [JDK-8240235][4]) but there still exist more of them 
> (e.g.
> [`test/jdk/java/util/zip/ZipFile/CopyJar.java`][5] which is there since 1999 
> :).  ### Description  So while this has
> clearly been a problem before, it apparently wasn't painful enough to trigger 
> any action from the side of the JDK.
> However, recently quite some zlib forks with [superior deflate/inflate 
> performance have evolved][6]. Using them with
> OpenJDK is quite straightforward: one just has to configure the alternative 
> implementations by setting
> `LD_LIBRARY_PATH` or `LD_PRELOAD` correspondingly. We've seen big savings by 
> using these new zlib implementations for
> selected services in production and the only reason why we haven't enabled 
> them by default until now is the problem
> I've just described. The reason why these new libraries uncover the described 
> anti-pattern much more often is that
> their compression ratio is slightly different from that of the default zlib 
> library. This can easily trigger a
> `ZipException` even if an application is not using different compression 
> levels but just a zip file created with
> another zlib version.  I'd therefore like to propose the following workaround 
> for the wrong
> `ZipOutputStream.putNextEntry()` usage in user code:
> -  ignore the compressed size if it was implicitly determined from the zip 
> file and not explicitly set by calling
>`ZipEntry.setCompressedSize()`.
> 
> - Change the API-documentation of `ZipOutputStream.putNextEntry()` and 
> `JarOutputStream.putNextEntry()` to explain the
>   problem and why `putNextEntry()` will ignore the compressed size of a 
> `ZipEntry` if that was set implicitly when
>   reading that entry from a `ZipFile` or `ZipInputStream`.
> 
> 
> ### Technical Details
> 
> A zip file consists of a stream of File Entries followed by a Central 
> Directory (see [here for a more detailed
> specification][7]). Each File Entry is composed of a Local File Header (LFH) 
> followed by the compressed Data and an
> optional Data Descriptor. The LFH contains the File Name and among other 
> attributes the Compressed and Uncompressed
> size and CRC of the Data. In the case where the latter three attributes are 
> not available at the time when the LFH is
> created, this fact will be recorded in a flag of the LFH and will trigger the 
> creation of a Data Descriptor with the
> corresponding information right after the Data section. Finally, the Central 
> Directory contains one Central 

Re: RFR: 8254162: Implementation of Foreign-Memory Access API (Third Incubator) [v11]

2020-10-13 Thread Maurizio Cimadamore
> This patch contains the changes associated with the third incubation round of 
> the foreign memory access API incubation
> (see JEP 393 [1]). This iteration focuses on improving the usability of the API 
> in 3 main ways:
> * first, by providing a way to obtain truly *shared* segments, which can be 
> accessed and closed concurrently from
>   multiple threads
> * second, by providing a way to register a memory segment against a 
> `Cleaner`, so as to have some (optional) guarantee
>   that the memory will be deallocated, eventually
> * third, by not requiring users to dive deep into var handles when they first 
> pick up the API; a new `MemoryAccess` class
>   has been added, which defines several useful dereference routines; these 
> are really just thin wrappers around memory
>   access var handles, but they make the barrier to entry for using this API 
> somewhat lower.
> 
> A big conceptual shift that comes with this API refresh is that the role of 
> `MemorySegment` and `MemoryAddress` is not
> the same as it used to be; it used to be the case that a memory address could 
> (sometimes, not always) have a back link
> to the memory segment which originated it; additionally, memory access var 
> handles used `MemoryAddress` as a basic unit
> of dereference.  This has all changed as per this API refresh;  now a 
> `MemoryAddress` is just a dumb carrier which
> wraps a pair of object/long addressing coordinates; `MemorySegment` has 
> become the star of the show, as far as
> dereferencing memory is concerned. You cannot dereference memory if you don't 
> have a segment. This improves usability
> in a number of ways - first, it is a lot easier to wrap native addresses 
> (`long`, essentially) into a `MemoryAddress`;
> secondly, it is crystal clear what a client has to do in order to dereference 
> memory: if a client has a segment, it can
> use that; otherwise, if the client only has an address, it will have to 
> create a segment *unsafely* (this can be done
> by calling `MemoryAddress::asSegmentRestricted`).  A list of the API, 
> implementation and test changes is provided
> below. If  you have any questions, or need more detailed explanations, I (and 
> the  rest of the Panama team) will be
> happy to point at existing discussions, and/or to provide the feedback 
> required. A big thanks to Erik Osterlund,
> Vladimir Ivanov and David Holmes, without whom the work on shared memory 
> segment would not have been possible; also I'd
> like to thank Paul Sandoz, whose insights on API design have been very 
> helpful in this journey.  Thanks  Maurizio
> Javadoc:   
> http://cr.openjdk.java.net/~mcimadamore/8254162_v1/javadoc/jdk/incubator/foreign/package-summary.html
> Specdiff:
> 
> http://cr.openjdk.java.net/~mcimadamore/8254162_v1/specdiff/jdk/incubator/foreign/package-summary.html
> 
> CSR:
> 
> https://bugs.openjdk.java.net/browse/JDK-8254163
> 
> 
> 
> ### API Changes
> 
> * `MemorySegment`
>   * drop factory for restricted segment (this has been moved to 
> `MemoryAddress`, see below)
>   * added a no-arg factory for a native restricted segment representing 
> entire native heap
>   * rename `withOwnerThread` to `handoff`
>   * add new `share` method, to create shared segments
>   * add new `registerCleaner` method, to register a segment against a cleaner
>   * add more helpers to create arrays from a segment e.g. `toIntArray`
>   * add some `asSlice` overloads (to make up for the fact that now segments 
> are more frequently used as cursors)
>   * rename `baseAddress` to `address` (so that `MemorySegment` can implement 
> `Addressable`)
> * `MemoryAddress`
>   * drop `segment` accessor
>   * drop `rebase` method and replace it with `segmentOffset` which returns 
> the offset (a `long`) of this address relative
> to a given segment
> * `MemoryAccess`
>   * New class supporting several static dereference helpers; the helpers are 
> organized by carrier and access mode, where a
> carrier is one of the usual suspects (a Java primitive, minus `boolean`); 
> the access mode can be simple (e.g. access
> base address of given segment), or indexed, in which case the accessor 
> takes a segment and either a low-level byte
> offset, or a high-level logical index. The classification is reflected in 
> the naming scheme (e.g. `getByte` vs.
> `getByteAtOffset` vs `getByteAtIndex`).
> * `MemoryHandles`
>   * drop `withOffset` combinator
>   * drop `withStride` combinator
>   * the basic memory access handle factory now returns a var handle which 
> takes a `MemorySegment` and a `long` - from which
> it is easy to derive all the other handles using plain var handle 
> combinators.
> * `Addressable`
>   * This is a new interface which is attached to entities which can be 
> projected to a `MemoryAddress`. For now, both
> `MemoryAddress` and `MemorySegment` implement it; we have plans, with JEP 
> 389 [2] to add more implementations. Clients
> can largely ignore this interface, which 
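
To give a flavour of the new `MemoryAccess` dereference helpers described above, a small sketch (based on the naming scheme given in this mail; details may differ in the final incubating API):

    import jdk.incubator.foreign.MemoryAccess;
    import jdk.incubator.foreign.MemorySegment;

    class DereferenceExample {
        public static void main(String[] args) {
            try (MemorySegment segment = MemorySegment.allocateNative(4 * Integer.BYTES)) {
                // indexed accessors scale the logical index by the carrier size (int)
                for (int i = 0; i < 4; i++) {
                    MemoryAccess.setIntAtIndex(segment, i, i * i);
                }
                // offset accessors take a raw byte offset into the segment
                int third = MemoryAccess.getIntAtOffset(segment, 2 * Integer.BYTES);
                System.out.println(third);   // prints 4
            }
        }
    }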
