New candidate JEP: 358: Helpful NullPointerExceptions

2019-07-26 Thread mark . reinhold
https://openjdk.java.net/jeps/358

- Mark


Review Request: JDK-8228671: Fastdebug VM throws InternalError when publicLookup.in(T) is used to resolve a member

2019-07-26 Thread Mandy Chung
The debug VM checks whether a class is accessible to the lookup class, except 
when the lookup class is java.lang.Object (which was previously the lookup 
class of publicLookup). With JDK-8173978, Lookup::in has changed and can 
be used to create a new public Lookup on a different lookup class.


A quick fix for this bug is to pass Object.class for resolution when a 
Lookup object has UNCONDITIONAL mode, as before.  The lookup class 
and allowed modes are used to check whether the resolved member is accessible 
to this Lookup object.  We should re-examine this area, in particular 
publicLookup (see JDK-8173977).
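For readers following along, here is a minimal sketch of the kind of call that tripped the fastdebug assertion; the lookup class and resolved member below are illustrative choices, not taken from the bug report:

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class PublicLookupInDemo {
    public static void main(String[] args) throws Throwable {
        // Since JDK-8173978, Lookup::in can re-target a public lookup at a
        // different lookup class while keeping UNCONDITIONAL mode.
        MethodHandles.Lookup lookup =
                MethodHandles.publicLookup().in(StringBuilder.class);
        // Resolving a public member through such a re-targeted lookup is the
        // pattern that made the fastdebug VM throw InternalError.
        MethodHandle mh = lookup.findVirtual(
                String.class, "length", MethodType.methodType(int.class));
        System.out.println((int) mh.invokeExact("hello")); // prints 5
    }
}
```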


diff --git a/src/java.base/share/classes/java/lang/invoke/MethodHandles.java b/src/java.base/share/classes/java/lang/invoke/MethodHandles.java

--- a/src/java.base/share/classes/java/lang/invoke/MethodHandles.java
+++ b/src/java.base/share/classes/java/lang/invoke/MethodHandles.java
@@ -1329,7 +1329,14 @@

 // This is just for calling out to MethodHandleImpl.
 private Class<?> lookupClassOrNull() {
-    return (allowedModes == TRUSTED) ? null : lookupClass;
+    if (allowedModes == TRUSTED) {
+        return null;
+    }
+    if (allowedModes == UNCONDITIONAL) {
+        // use Object as the caller to pass to VM doing resolution
+        return Object.class;
+    }
+    return lookupClass;
 }

 /** Tells which access-protection classes of members this 
lookup object can produce.


Mandy


Re: 8078891: java.io.SequenceInputStream.close is not atomic and not idempotent

2019-07-26 Thread Pavel Rappo
> On 26 Jul 2019, at 19:36, Alan Bateman  wrote:
> 
> 
> 
> In any case, the change looks okay but it might be simpler to restructure to 
> use:
> 
> while (in != null) {
> try { in.close(); } catch (IOException e) { ... }
> peekNextStream();
> }
> 
> to simplify the error handling and avoid mixing nextStream and 
> peekNextStream. There is also some curious code in peekNextStream where it 
> handles nextElement returning null - I don't know the history on that but it 
> looks like it's trying to handle a broken Enumeration.

For the record: if we change this as you suggested, the code will behave
differently when a single `null` element is encountered in the midst of
iteration (a broken enumeration?). The stream won't be able to close after
running into it. But that is most likely a programming error anyway.

Not sure.
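A self-contained sketch of the broken-enumeration case under discussion (the stream contents and the anonymous Enumeration are illustrative): peekNextStream rejects a null element with a NullPointerException, so a read that crosses the null blows up.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.SequenceInputStream;
import java.util.Arrays;
import java.util.Enumeration;
import java.util.Iterator;

public class BrokenEnumerationDemo {
    public static void main(String[] args) throws IOException {
        // An enumeration that yields null in the midst of iteration.
        Iterator<InputStream> it = Arrays.asList(
                (InputStream) new ByteArrayInputStream(new byte[] {1}),
                null).iterator();
        Enumeration<InputStream> streams = new Enumeration<>() {
            @Override public boolean hasMoreElements() { return it.hasNext(); }
            @Override public InputStream nextElement() { return it.next(); }
        };
        SequenceInputStream in = new SequenceInputStream(streams);
        System.out.println(in.read());   // prints 1, from the first stream
        try {
            in.read();                   // advancing hits the null element
        } catch (NullPointerException npe) {
            System.out.println("NPE on read");
        }
    }
}
```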



Re: 8078891: java.io.SequenceInputStream.close is not atomic and not idempotent

2019-07-26 Thread Pavel Rappo
> On 26 Jul 2019, at 19:36, Alan Bateman  wrote:
> 
> The bug summary might be a bit misleading as SequenceInputStream isn't thread 
> safe and close isn't atomic. The changes are really to attempt to close all 
> the remaining streams rather than bailing out when closing one of them fails.

I'm not a native English speaker and I used this word as a synonym to
"indivisible". In this particular case "all or nothing". Am I using it wrong?
Or are you saying that the word "atomic" is owned exclusively by the concurrency
domain? What word (or phrase) would you use in this case?

The aim of the subject line is to convey the crux of the issue succinctly. I
hope no one goes and fixes the issue after reading just the subject line without
looking any further into "details" and "description".

> In any case, the change looks okay but it might be simpler to restructure to 
> use:
> 
> while (in != null) {
> try { in.close(); } catch (IOException e) { ... }
> peekNextStream();
> }
> 
> to simplify the error handling and avoid mixing nextStream and peekNextStream.

Possibly.

> There is also some curious code in peekNextStream where it handles 
> nextElement returning null - I don't know the history on that but it looks 
> like it's trying to handle a broken Enumeration.

I wouldn't touch that code. In the case of `peekNextStream` we could only remove
the needless cast (generics) without compromising compatibility.



Re: 8078891: java.io.SequenceInputStream.close is not atomic and not idempotent

2019-07-26 Thread Brian Burkhalter


> On Jul 26, 2019, at 11:36 AM, Alan Bateman  wrote:
> 
> On 26/07/2019 16:41, Brian Burkhalter wrote:
>> :
>> Please see the updated patch which switches the implementation to the 
>> amended version and incorporates the proposed change to the test:
>> 
>> http://cr.openjdk.java.net/~bpb/8078891/webrev.01/ 
>> 
> The bug summary might be a bit misleading as SequenceInputStream isn't thread 
> safe and close isn't atomic. The changes are really to attempt to close all 
> the remaining streams rather than bailing out when closing one of them fails.
> 
> In any case, the change looks okay but it might be simpler to restructure to 
> use:
> 
> while (in != null) {
> try { in.close(); } catch (IOException e) { ... }
> peekNextStream();
> }
> 
> to simplify the error handling and avoid mixing nextStream and peekNextStream.

So updated: http://cr.openjdk.java.net/~bpb/8078891/webrev.02/

> There is also some curious code in peekNextStream where it handles 
> nextElement returning null - I don't know the history on that but it looks 
> like it's trying to handle a broken Enumeration.

Strange but not changed.

Thanks,

Brian

Re: 8078891: java.io.SequenceInputStream.close is not atomic and not idempotent

2019-07-26 Thread Alan Bateman

On 26/07/2019 16:41, Brian Burkhalter wrote:

:
Please see the updated patch which switches the implementation to the amended 
version and incorporates the proposed change to the test:

http://cr.openjdk.java.net/~bpb/8078891/webrev.01/
The bug summary might be a bit misleading as SequenceInputStream isn't 
thread safe and close isn't atomic. The changes are really to attempt to 
close all the remaining streams rather than bailing out when closing one 
of them fails.


In any case, the change looks okay but it might be simpler to 
restructure to use:


while (in != null) {
    try { in.close(); } catch (IOException e) { ... }
    peekNextStream();
}

to simplify the error handling and avoid mixing nextStream and 
peekNextStream. There is also some curious code in peekNextStream where 
it handles nextElement returning null - I don't know the history on that 
but it looks like it's trying to handle a broken Enumeration.
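A self-contained sketch of that restructure (the class and field names are illustrative, mirroring SequenceInputStream rather than quoting it): close every remaining stream, keep the first IOException, and suppress the rest.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Iterator;
import java.util.List;

public class CloseAllDemo {
    static class CloseAll {
        private final Iterator<InputStream> it;
        private InputStream in;

        CloseAll(List<InputStream> streams) {
            this.it = streams.iterator();
            peekNextStream();
        }

        private void peekNextStream() {
            in = it.hasNext() ? it.next() : null;
        }

        // Close all remaining streams; throw the first IOException with the
        // later ones attached as suppressed exceptions.
        public void close() throws IOException {
            IOException ioe = null;
            while (in != null) {
                try {
                    in.close();
                } catch (IOException e) {
                    if (ioe == null) ioe = e; else ioe.addSuppressed(e);
                }
                peekNextStream();
            }
            if (ioe != null) throw ioe;
        }
    }

    static class FailingStream extends ByteArrayInputStream {
        boolean closed;
        FailingStream() { super(new byte[0]); }
        @Override public void close() throws IOException {
            closed = true;
            throw new IOException("boom");
        }
    }

    public static void main(String[] args) {
        FailingStream a = new FailingStream(), b = new FailingStream();
        List<InputStream> streams = List.of(a, b);
        CloseAll c = new CloseAll(streams);
        try {
            c.close();
        } catch (IOException e) {
            // Both streams were closed despite each close() throwing.
            System.out.println(a.closed + " " + b.closed + " "
                    + e.getSuppressed().length);
        }
    }
}
```

Running it prints `true true 1`: both component streams got closed, and the second failure was recorded as a suppressed exception of the first.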


-Alan







Re: RFR 8226808: PreparedStatement javadoc typo

2019-07-26 Thread Brian Burkhalter
+1

Brian

> On Jul 26, 2019, at 11:29 AM, Joe Wang  wrote:
> 
> +1, and +1 for the salary at the time of JDK 1.1 ;-)
> 
> -Joe
> 
> On 7/26/19 11:17 AM, Lance Andersen wrote:
>> Hi all,
>> 
>> Please review this trivial change for JDK-8226808 which corrects a javadoc 
>> error that dates back to JDK 1.1 where the example for BigDecimal is not 
>> correct:
>> 
>> —
>> $ hg diff
>> diff -r 550a1a6ca596 src/java.sql/share/classes/java/sql/PreparedStatement.java
>> --- a/src/java.sql/share/classes/java/sql/PreparedStatement.java Fri Jul 26 17:15:17 2019 +0000
>> +++ b/src/java.sql/share/classes/java/sql/PreparedStatement.java Fri Jul 26 14:13:09 2019 -0400
>> @@ -47,12 +47,13 @@
>>   * 
>>   * In the following example of setting a parameter, con 
>> represents
>>   * an active connection:
>> - * 
>> + * {@code
>> + *   BigDecimal sal = new BigDecimal("153833.00");
>>   *   PreparedStatement pstmt = con.prepareStatement("UPDATE EMPLOYEES
>>   * SET SALARY = ? WHERE ID = ?");
>> - *   pstmt.setBigDecimal(1, 153833.00)
>> - *   pstmt.setInt(2, 110592)
>> - * 
>> + *   pstmt.setBigDecimal(1, sal);
>> + *   pstmt.setInt(2, 110592);
>> + * }
>>   *
>>   * @see Connection#prepareStatement
>>   * @see ResultSet
>> ljanders-mac:open ljanders$
>> ———
>> 
>> Best,
>> Lance
>>  
>>   
>> 
>>  Lance Andersen| 
>> Principal Member of Technical Staff | +1.781.442.2037
>> Oracle Java Engineering
>> 1 Network Drive
>> Burlington, MA 01803
>> lance.ander...@oracle.com 
>> 
>> 
>> 
> 



Re: [14] RFR: 8228465: HOST locale provider holds wrong era name for GregorianCalendar in US locale

2019-07-26 Thread Lance Andersen
Hi Naoto,

This looks good

> On Jul 26, 2019, at 2:03 PM, naoto.s...@oracle.com wrote:
> 
> Hello,
> 
> Please review the fix to the following issue:
> 
> https://bugs.openjdk.java.net/browse/JDK-8228465
> 
> The proposed changeset is located at:
> 
> https://cr.openjdk.java.net/~naoto/8228465/webrev.00/
> 
> A similar issue was reported in JDK-8039301, in which the 
> Calendar.getDisplayName() method was modified. The same fix needs to be 
> applied to Calendar.getDisplayNames().
> 
> Naoto

 
  






RFR 8226808: PreparedStatement javadoc typo

2019-07-26 Thread Lance Andersen
Hi all,

Please review this trivial change for JDK-8226808 which corrects a javadoc 
error that dates back to JDK 1.1 where the example for BigDecimal is not 
correct:

—
$ hg diff
diff -r 550a1a6ca596 src/java.sql/share/classes/java/sql/PreparedStatement.java
--- a/src/java.sql/share/classes/java/sql/PreparedStatement.java Fri Jul 26 17:15:17 2019 +0000
+++ b/src/java.sql/share/classes/java/sql/PreparedStatement.java Fri Jul 26 14:13:09 2019 -0400
@@ -47,12 +47,13 @@
  * 
  * In the following example of setting a parameter, con represents
  * an active connection:
- * 
+ * {@code
+ *   BigDecimal sal = new BigDecimal("153833.00");
  *   PreparedStatement pstmt = con.prepareStatement("UPDATE EMPLOYEES
  * SET SALARY = ? WHERE ID = ?");
- *   pstmt.setBigDecimal(1, 153833.00)
- *   pstmt.setInt(2, 110592)
- * 
+ *   pstmt.setBigDecimal(1, sal);
+ *   pstmt.setInt(2, 110592);
+ * }
  *
  * @see Connection#prepareStatement
  * @see ResultSet
ljanders-mac:open ljanders$
———

Best,
Lance
 
  






[14] RFR: 8228465: HOST locale provider holds wrong era name for GregorianCalendar in US locale

2019-07-26 Thread naoto . sato

Hello,

Please review the fix to the following issue:

https://bugs.openjdk.java.net/browse/JDK-8228465

The proposed changeset is located at:

https://cr.openjdk.java.net/~naoto/8228465/webrev.00/

A similar issue was reported in JDK-8039301, in which the 
Calendar.getDisplayName() method was modified. The same fix needs to be 
applied to Calendar.getDisplayNames().
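A hedged sketch of the pair of methods involved; it asserts only the consistency the fix is about, since the actual era strings depend on the active locale provider:

```java
import java.util.Calendar;
import java.util.Locale;
import java.util.Map;

public class EraNamesDemo {
    public static void main(String[] args) {
        Calendar cal = Calendar.getInstance(Locale.US);
        // getDisplayName returns the localized era name for the current value.
        String era = cal.getDisplayName(Calendar.ERA, Calendar.LONG, Locale.US);
        // getDisplayNames returns the full name -> field-value map; this is
        // the method JDK-8228465 corrects for the HOST locale provider.
        Map<String, Integer> eras =
                cal.getDisplayNames(Calendar.ERA, Calendar.LONG, Locale.US);
        // The two should agree: the single name appears in the map.
        System.out.println(era != null && eras.containsKey(era));
    }
}
```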


Naoto


Re: Review Request: JDK-8209005: Lookup.unreflectSpecial fails for default methods when Lookup.findSpecial works

2019-07-26 Thread Mandy Chung
Daniel noticed that `unreflectSpecial` is missing from the "Lookup Factory 
Methods" section in the class spec.  In fact there is a duplicated 
`lookup.unreflect(aMethod)` row that might originally have been intended for 
`unreflectSpecial`.  I fixed the javadoc in this patch:


http://cr.openjdk.java.net/~mchung/jdk14/8209005/webrev.01/

Mandy

On 7/25/19 1:12 PM, Mandy Chung wrote:
This patch fixes Lookup.unreflectSpecial to pass the declaring class 
of Method being unreflected (rather than null) so that it can 
accurately check if the special caller class is either the lookup 
class or a superinterface of the declaring class.
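A minimal sketch of the unreflectSpecial pattern in question, invoking a default method with super-call semantics; the interface and class here are illustrative, not taken from the test:

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.reflect.Method;

public class UnreflectSpecialDemo {
    interface Greeter {
        default String greet() { return "hi"; }
    }

    static class MyGreeter implements Greeter {
        @Override public String greet() { return "overridden"; }

        String callSuper() throws Throwable {
            Method m = Greeter.class.getMethod("greet");
            // Lookup class and special caller are both MyGreeter; the fix
            // makes this accurately consider the declaring class (Greeter).
            MethodHandle mh = MethodHandles.lookup()
                    .unreflectSpecial(m, MyGreeter.class);
            // Invokes the default method, bypassing the override.
            return (String) mh.invoke(this);
        }
    }

    public static void main(String[] args) throws Throwable {
        System.out.println(new MyGreeter().callSuper()); // prints "hi"
    }
}
```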


Webrev:
http://cr.openjdk.java.net/~mchung/jdk14/8209005/webrev.00/index.html

The test runs in both unnamed module and named module to cover 
JDK-8209078 which has been resolved by JDK-8173978.


thanks
Mandy




Re: 8078891: java.io.SequenceInputStream.close is not atomic and not idempotent

2019-07-26 Thread Daniel Fuchs

On 26/07/2019 17:41, Brian Burkhalter wrote:

Please see the updated patch which switches the implementation to the amended 
version and incorporates the proposed change to the test:

http://cr.openjdk.java.net/~bpb/8078891/webrev.01/


That looks good to me Brian.

I had a look at the API documentation of AutoCloseable::close().
Though the behavior of the closed object after close() throws
an exception is a bit fuzzy, your amended implementation has greater
compatibility with the preexisting behavior of SequenceInputStream, and
is in line with what one would expect from reading AutoCloseable::close().

Thanks Pavel for suggesting this!

best regards,

-- daniel



Thanks,

Brian




Re: 8078891: java.io.SequenceInputStream.close is not atomic and not idempotent

2019-07-26 Thread Brian Burkhalter


> On Jul 26, 2019, at 4:14 AM, Pavel Rappo  wrote:
> 
> […] I prefer the amended version, as it
> both does the job and preserves the original behavior when consuming data from
> the current component stream.
> 
> […]
> 
> A nit on the accompanying test. Please consider this change:

Please see the updated patch which switches the implementation to the amended 
version and incorporates the proposed change to the test:

http://cr.openjdk.java.net/~bpb/8078891/webrev.01/

Thanks,

Brian

Re: RFR (M): JDK-6394757: rev1: AbstractSet.removeAll semantics are surprisingly dependent on relative sizes

2019-07-26 Thread Jens Lideström
Hello core-lib-dev,

I think it is a mistake to make `AbstractSet#removeAll` use the
membership semantics of the argument collection instead of those of the
receiver set, and to iterate over the receiver instead of the argument
collection.

I think that there are much stronger arguments for doing it the other
way around: to use the membership semantics of the receiver and to
iterate over the argument. This is what Stuart suggests in his first
prototype [1], unlike how it looks in the current webrev [2].

After having read through the original thread (most importantly this
message: [3]) and the review threads, it is my impression that the
disadvantages of this solution have been over-emphasised and the
advantages under-emphasised.

This is my impression of the advantages and disadvantages of making
`removeAll` use the membership semantics of the receiver set:

## Advantages

1. It leads to the expected behaviour when custom membership semantics
are used.
2. It preserves the behaviour in the common case when custom membership
semantics are used.
3. It has better performance for hashed sets in the common case.

## Disadvantages

4. It is different from `AbstractCollection`.
5. It is not possible to do the same in `retainAll`, which thus makes the
two methods inconsistent. (See [3].)

I don't think 1-2 have been given enough weight in the discussion, while
4-5 have been given too much.

## Details

1. I think that often `removeAll` is called without much thought about the
properties of the argument collection, except for its members. It will
be confusing, annoying and error prone if code like this doesn't work as
expected:

var s = new TreeSet<>(String::compareToIgnoreCase);
s.addAll(List.of("a", "b", "c"));
// Should remove "a" and "b"
s.removeAll(List.of("A", "B"));

Non-set collections on the other hand are probably much less likely to
have custom membership semantics. Because of this they can get away with
using the membership semantics of the argument, without exhibiting
unexpected behaviour for code that is similar to the above.
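The size-dependent surprise can be made concrete. This sketch assumes TreeSet inherits the current AbstractSet#removeAll, which iterates the argument (using the receiver's membership semantics) only when the receiver is the larger side:

```java
import java.util.List;
import java.util.TreeSet;

public class RemoveAllSemanticsDemo {
    public static void main(String[] args) {
        // Receiver bigger than argument: AbstractSet#removeAll iterates the
        // argument and calls the receiver's (case-insensitive) remove().
        var s1 = new TreeSet<String>(String::compareToIgnoreCase);
        s1.addAll(List.of("a", "b", "c"));
        s1.removeAll(List.of("A", "B"));
        System.out.println(s1);   // [c]  -- "a" and "b" were removed

        // Receiver not bigger: it iterates the receiver and consults the
        // argument's contains(), which is case-sensitive, so nothing matches.
        var s2 = new TreeSet<String>(String::compareToIgnoreCase);
        s2.addAll(List.of("a", "b"));
        s2.removeAll(List.of("A", "B", "X"));
        System.out.println(s2);   // [a, b] -- nothing was removed
    }
}
```

The same call removes either both elements or none, depending only on the relative sizes of the two collections; that is the surprising dependence named in the bug title.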

2. The consequences are even more severe if code like the above stops
working as expected. This will probably cause some code that has been
working for years (decades?) to suddenly fail.

* I think that it is much more common to use `removeAll` to remove a
small number of elements from a bigger set. If the membership semantics
of the argument are to be used in that case, then the behaviour of
existing code will change. If on the other hand the membership semantics
of the receiver are to be used, then existing code will continue to
behave in the same way.
* I think that existing code that works in the opposite way, that
depends on the custom membership semantics of the argument of
`removeAll`, is much more uncommon, and thus that it is more acceptable
that such code stops working.

3. Alan Snyder wrote about performance at length in his mails. This is
an important point, even if the arguments above are more important. For
example, some applications with large HashSet-based caches might stop
working in a satisfactory way.

4. I think that consistency with `AbstractCollection#removeAll` has been
given too much importance in the discussion. Sets are different from
lists, and it seems justified that the implementation of `removeAll` in
AbstractSet works in a different way from `AbstractCollection`.

* Collections that allow duplicates are forced to iterate `this` out
of necessity.
* But sets have no duplicates. It is natural to take advantage of this
fact and iterate the argument. It seems unlikely that this would be
unexpected for anyone or lead to problems.
* Non-set collections are probably much less likely to have custom
equality semantics, which makes iterating over the argument less
problematic.

5. The difference in behaviour between `retainAll` and `removeAll` that
is the result of using the membership semantics of the receiver is, I
think, the most serious disadvantage of this solution. This disadvantage
is not as strong as the advantages in 1-2 above, however.

Also, I think that there is some justification to it. I think
`retainAll` is perceived much more as a set theoretic operation than
`removeAll`. This makes it more intuitive for a user that you should
ensure that the argument to that method is a collection with the right
membership semantics.

## Conclusion

I think the following original suggestion [1] is a much better choice
for an implementation of `AbstractSet#removeAll` than the existing
implementation in `AbstractCollection`:

    public boolean removeAll(Collection<?> c) {
        Objects.requireNonNull(c);
        boolean modified = false;
        for (Object e : c)
            modified |= remove(e);
        return modified;
    }

[1]:
https://mail.openjdk.java.net/pipermail/core-libs-dev/2019-January/058172.html
[2]: http://cr.openjdk.java.net/~smarks/reviews/6394757/webrev.1/
[3]:

Re: RFR: JDK-8227684 : jpackage must handle win32 mangled names in jli.dll

2019-07-26 Thread Andy Herrick



On 7/25/2019 5:29 PM, Alexander Matveev wrote:

Do you know if all 32-bit builds will have "_JLI_Launch@56"?
Also, I think @56 is an ordinal which can be used to load the function by 
this number instead of the name, so I do not think you need to specify 
it.


My suggestion is to try "JLI_Launch" first and then "_JLI_Launch" and 
then "_JLI_Launch@56", if we are doing a 32-bit build.


Thanks,
Alexander

Oracle does not support, build, or test the JDK (or jpackage in 
particular) on 32-bit platforms, nor does it use jpackage to package 32-bit apps.


We will take submitted fixes (such as this one) from others who do use 
32-bit builds, provided they do not break our 64-bit builds in any way.


That being said, I think we should take this fix "as is" and allow 
Robert (or any other contributor who is using 32-bit builds) to propose any 
further 32-bit related changes.


/Andy


On 7/25/2019 8:22 AM, Kevin Rushforth wrote:

It looks fine.

-- Kevin


On 7/25/2019 6:20 AM, Andy Herrick wrote:

Please review the jpackage fix for bug [1] at [2].

This is a fix for the JDK-8200758-branch branch of the open sandbox 
repository (jpackage).



[1] https://bugs.openjdk.java.net/browse/JDK-8227684

[2] http://cr.openjdk.java.net/~herrick/8227684/webrev.01/

Submitted by: Robert Lichtenberger

/Andy







Re: 8078891: java.io.SequenceInputStream.close is not atomic and not idempotent

2019-07-26 Thread Pavel Rappo
> On 25 Jul 2019, at 21:01, Brian Burkhalter  
> wrote:



> This concern would be fixed by changing the proposed patch as
> 
> --- a/src/java.base/share/classes/java/io/SequenceInputStream.java
> +++ b/src/java.base/share/classes/java/io/SequenceInputStream.java
> @@ -91,13 +91,10 @@
>   * Continues reading in the next stream if an EOF is reached.
>   */
>  final void nextStream() throws IOException {
> -        try {
> -            if (in != null) {
> -                in.close();
> -            }
> -        } finally {
> -            peekNextStream();
> +        if (in != null) {
> +            in.close();
>          }
> +        peekNextStream();
>      }
>  
>  private void peekNextStream() {
> @@ -233,6 +230,7 @@
>                  } else {
>                      ioe.addSuppressed(e);
>                  }
> +                peekNextStream();
>              }
>          } while (in != null);
>          if (ioe != null) {
> 
> That is to say reverting the nextStream() change and adding peekNextStream() 
> in the catch branch of close(). I actually had it this way in a version I did 
> not post.
> 
> The scenario you suggest however would seem to be somewhat of a coding error 
> by the caller. I would think that once a sequence stream were created the 
> component streams should not be used.

When I filed this bug I was mainly concerned with the fact that `close` may end
up closing only some of the component streams. Both your original and amended
patches seem to fix this problem. However, I prefer the amended version, as it
both does the job and preserves the original behavior when consuming data from
the current component stream.

The scenario I mentioned in my previous email was about continuous attempts to
read from the SequenceInputStream, not about reading component streams
"out-of-band" (bypassing the aggregating SequenceInputStream). Sorry if I
didn't make myself clear.

This scenario doesn't seem to be a programming error, though it is a very strange
use case. I must admit I haven't seen any code in the wild doing so. Yet the spec
does not seem to prohibit this.

A nit on the accompanying test. Please consider this change:

diff -r 5a65ed97e38b test/jdk/java/io/SequenceInputStream/Close.java
--- a/test/jdk/java/io/SequenceInputStream/Close.java   Fri Jul 26 12:10:08 2019 +0100
+++ b/test/jdk/java/io/SequenceInputStream/Close.java   Fri Jul 26 12:12:29 2019 +0100
@@ -27,17 +27,15 @@
  */
 
 import java.io.ByteArrayInputStream;
-import java.io.InputStream;
 import java.io.IOException;
 import java.io.SequenceInputStream;
-import java.util.ArrayList;
 import java.util.Collections;
 import java.util.Enumeration;
 import java.util.List;
 
 public class Close {
 
-    public static void main(String[] argv) throws IOException {
+    public static void main(String[] argv) {
         byte[] buf = new byte[42];
 
         List list = List.of(new CloseableBAOS(buf),
@@ -74,12 +72,12 @@
 
     static class CloseableBAOS extends ByteArrayInputStream{
         private boolean closed;
-        private int numCloses = 0;
 
         CloseableBAOS(byte[] buf) {
             super(buf);
         }
 
+        @Override
         public void close() throws IOException {
             closed = true;
             throw new IOException();




Re[2]: Benchmarking of new optimized Dual-Pivot Quicksort

2019-07-26 Thread Vladimir Yaroslavskiy
Hi Laurent,

Many thanks for feedback!

I ran tests on two computers, under Windows (8 processors) and Linux (88 
processors).

The details of the first computer are: Windows 10 (64-bit), Intel Core i7-8650U 
1.90 GHz,
and the output of the "uname -a" command for the second computer is:
Linux rho 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 
x86_64 x86_64 GNU/Linux

The JDK on Windows is OpenJDK 64-Bit Server VM, build 14-ea+6-171;
on Linux, the latest version of jdk13.

The JVM was run with a single option, -Xbatch:
java -Xbatch -classpath %CLASSES% Main

I also uploaded the results from the Linux computer for sequential sorting of int / 
long arrays;
the details are here:
https://github.com/iaroslavski/sorting/tree/master/benchmarking/results
The files -s- and -p- are related to the first 
(Windows) computer,
the files -s88- and -p88- are related to the second 
(Linux) computer.

The summary is here:

[int] sequential sorting, gain:
Windows: size 1M: 0.31; size 30K: 0.28; size 150: 0.15
Linux:   size 1M: 0.31; size 30K: 0.27; size 150: 0.07

[long] sequential sorting, gain:
Windows: size 1M: 0.32; size 30K: 0.23; size 150: 0.12
Linux:   size 1M: 0.30; size 30K: 0.23; size 150: 0.05

You can see that sorting shows the same results on large and medium sizes for both 
types, and it works slower on small sizes on Linux, but still shows positive gains.
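As a toy illustration of the gain metric quoted later in the thread (gain = (jdk_time - dpqs_time) / jdk_time, taking the minimum of repeated runs), with made-up timings:

```java
import java.util.Arrays;
import java.util.Locale;

public class GainDemo {
    // gain = (jdk_time - dpqs_time) / jdk_time
    static double gain(long jdkTime, long dpqsTime) {
        return (double) (jdkTime - dpqsTime) / jdkTime;
    }

    public static void main(String[] args) {
        long[] jdkRuns  = {105, 100, 103};   // hypothetical jdk14 timings
        long[] dpqsRuns = {70, 69, 72};      // hypothetical new DPQS timings
        // Min of many invocations gives a stable, reproducible figure.
        long jdkMin  = Arrays.stream(jdkRuns).min().getAsLong();
        long dpqsMin = Arrays.stream(dpqsRuns).min().getAsLong();
        // (100 - 69) / 100 = 0.31, i.e. the new sort is 31% faster here
        System.out.printf(Locale.ROOT, "gain = %.2f%n", gain(jdkMin, dpqsMin));
    }
}
```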

Thank you,
Vladimir

>On Friday, 26 July 2019, 10:43 +03:00, Laurent Bourgès 
>wrote:
>
>Hi Vladimir,
>
>First, thank you for these impressive results: 15% to 70% overall gains is 
>amazing and will make a big difference. Congratulations!
>
>Could you publish the details of the different test environments?
>- HW: cpu (lscpu output)
>- OS version
>- JDK version + JVM settings 
>
>Ideally you or I could write a shell script to collect setup and write a 
>simple log file.
>
>PS: I suppose you tested DPQS only on Intel CPUs; could somebody test on ppc 
>or arm CPUs as well?
>
>Cheers,
>Laurent
>On Thu, 25 Jul 2019 at 16:04, Vladimir Yaroslavskiy < vlv.spb...@mail.ru > 
>wrote:
>>Hi all,
>>With the help of Laurent Bourges I run benchmarking of new optimized 
>>Dual-Pivot Quicksort
>>and summarized results. The tests were run for all types (int / long / ... / 
>>double), for both types:
>>sequential and parallel sorting, for small, medium and large sizes. The 
>>comparison was done
>>on several data types, such as: random, period, sorted, equal, stagger, 
>>shuffle.
>>Here is the summary of total gains. The gain is defined as (jdk_time - 
>>dpqs_time) / jdk_time,
>>where jdk_time is sorting time by the current jdk14 and dpqs_time is sorting 
>>time by new
>>optimized Dual-Pivot Quicksort. To get stable and reproducible results, min 
>>time of many
>>invocations is taken.
>>You can find all detailed results there
>>https://github.com/iaroslavski/sorting/tree/master/benchmarking/results
>>
>>Sources of benchmarking tests are there
>>https://github.com/iaroslavski/sorting/tree/master/benchmarking/sources
>>
>>You can see suboptimal performance for the float / double types (parallel sorting).
>>This behavior can be explained by an existing bug in the JDK, where NaNs and -0.0s
>>are not processed properly. The new sorting shows total losses only for small float /
>>double arrays, by a few percent. In all other cases the new optimized sorting is the
>>winner.
>>
>>
>>[int]
>>sequential, Length = 150, Gain: 0.15
>>sequential, Length = 3, Gain: 0.28
>>sequential, Length = 100, Gain: 0.31
>>parallel 8, Length = 150, Gain: 0.14
>>parallel 8, Length = 3, Gain: 0.15
>>parallel 8, Length = 100, Gain: 0.14
>>[long]
>>sequential, Length = 150, Gain: 0.12
>>sequential, Length = 3, Gain: 0.23
>>sequential, Length = 100, Gain: 0.32
>>parallel 8, Length = 150, Gain: 0.11
>>parallel 8, Length = 3, Gain: 0.16
>>parallel 8, Length = 100, Gain: 0.24
>>parallel 88 processors, Length = 100, Gain: 0.05
>>[byte]
>>sequential, Length = 150, Gain: 0.06
>>sequential, Length = 3, Gain: 0.36
>>sequential, Length = 100, Gain: 0.37
>>parallel 8, Length = 150, Gain: 0.13
>>parallel 8, Length = 3, Gain: 0.73
>>parallel 8, Length = 100, Gain: 0.65
>>[char]
>>sequential, Length = 150, Gain: 0.10
>>sequential, Length = 3, Gain: 0.00
>>sequential, Length = 100, Gain: 0.17
>>parallel 8, Length = 150, Gain: 0.10
>>parallel 8, Length = 3, Gain: 0.33
>>parallel 8, Length = 100, Gain: 0.62
>>[short]
>>sequential, Length = 150, Gain: 0.14
>>sequential, Length = 3, Gain: 0.10
>>sequential, Length = 100, Gain: 0.18
>>parallel 8, Length = 150, Gain: 0.13
>>parallel 8, Length = 3, Gain: 0.41
>>parallel 8, Length = 100, Gain: 0.65
>>[float]
>>sequential, Length = 150, Gain: -0.02
>>sequential, Length = 3, Gain: 0.21
>>sequential, Length = 100, Gain: 0.24
>>parallel 8, Length = 150, Gain: -0.05
>>parallel 8, Length = 3, Gain: -0.04
>>parallel 8, 

Re: 8193072: File.delete() should remove its path from DeleteOnExitHook.files

2019-07-26 Thread Peter Levart

Hi Brian,

On 7/26/19 12:42 AM, Brian Burkhalter wrote:


On Jul 11, 2019, at 12:52 AM, Peter Levart wrote:


On 7/11/19 9:47 AM, Peter Levart wrote:


http://cr.openjdk.java.net/~plevart/jdk-dev/8193072_File.undoDeleteOnExit/webrev.02/



Another thing to consider (done in webrev.02 above) is what to do 
with an unbalanced cancelDeleteOnExit(). I think it is better to throw an 
exception than to silently ignore it. This way unintentional bugs can be 
uncovered which would otherwise just cause erratic behavior.


The above patch looks pretty good, but unless I am misreading 
the code I think there may still be behavioral incompatibilities which 
might not be acceptable. For example, for the existing code base with 
the following sequence of operations


File file1, file2, file3;

file1.deleteOnExit();
file2.deleteOnExit();
file3.deleteOnExit();

the deletion order is file3, file2, file1.


...and the same order remains with the proposed patch for the above sequence 
of operations.




For the proposed patch with the following sequence of operations

file1.deleteOnExit();
file2.deleteOnExit();
file3.deleteOnExit();
file1.cancelDeleteOnExit(); // file1 is removed from the map
file2.cancelDeleteOnExit(); // file2 is removed from the map
file2.createNewFile();
file2.deleteOnExit(); // file2 is added back into the map
file1.createNewFile();
file1.deleteOnExit(); // file1 is added back into the map

the deletion order is (I think) file1, file2, file3 which is the 
reverse of the order of initial registration.


Right, and above sequence of operations is something that is not present 
in current codebases as there is no cancelDeleteOnExit() method yet. So 
more to the point would be a comparison of behaviors with some code 
sequence that doesn't include the not yet present method. Suppose the 
following:


file1.deleteOnExit();
file1.createNewFile();
file2.deleteOnExit();
file2.createNewFile();
file3.deleteOnExit();
file3.createNewFile();

file1.delete();
file2.delete();

file2.deleteOnExit();
file2.createNewFile();
file1.deleteOnExit();
file1.createNewFile();

Yes, with the above sequence the order of deletion using the current codebase 
will be file3, file2, file1, while with the patch it would be file1, 
file2, file3.
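The order flip can be modelled with a toy sketch (illustrative names, not the real java.io.DeleteOnExitHook API, though that class likewise keeps a LinkedHashSet and deletes in reverse insertion order):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedHashSet;
import java.util.List;

public class DeleteOnExitModel {
    // Insertion-ordered registrations, as in DeleteOnExitHook.
    private final LinkedHashSet<String> files = new LinkedHashSet<>();

    void deleteOnExit(String path) { files.add(path); }

    // Hypothetical cancellation being discussed in JDK-8193072.
    void cancelDeleteOnExit(String path) { files.remove(path); }

    List<String> deletionOrder() {
        List<String> order = new ArrayList<>(files);
        Collections.reverse(order);   // last registered is deleted first
        return order;
    }

    public static void main(String[] args) {
        DeleteOnExitModel m = new DeleteOnExitModel();
        m.deleteOnExit("f1"); m.deleteOnExit("f2"); m.deleteOnExit("f3");
        System.out.println(m.deletionOrder());    // [f3, f2, f1]

        // Cancel and re-register f1 and f2: they move to the end of the
        // insertion order, so the deletion order flips.
        m.cancelDeleteOnExit("f1"); m.cancelDeleteOnExit("f2");
        m.deleteOnExit("f2"); m.deleteOnExit("f1");
        System.out.println(m.deletionOrder());    // [f1, f2, f3]
    }
}
```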



Of course it is conceivable to change the specification but that seems 
dangerous somehow.


Of course there could be a situation where the change of order mattered 
for the correct logic of the program - imagine an empty "signal" file 
which guarantees with its presence that another "payload" file is 
present and fully written/consistent. You would want to maintain such an 
invariant so you would 1st register the payloadFile.deleteOnExit() 
following with signalFile.deleteOnExit(). But most such schemes are 
one-way only. If you wanted to somehow delete the signal file and then 
re-create it later after updating the payload file, you might already be 
doing the re-registration in the correct order (if you are just 
repeating the same code sequence as in initial registration), but you 
are making a concurrency mistake anyway as you are faced with a race 
inherent to an ABA problem that has nothing to do with registration to 
delete files on exit.


So I argue that any working scheme that features multiple registrations 
of the same file must be because of dependencies among them stemming 
from the filesystem hierarchy. For example, you would 1st want to 
register a new File("/path/to/dir").deleteOnExit() followed by new 
File("/path/to/dir/file.ext").deleteOnExit() so that on exit, the file 
is deleted 1st and then the dir which must be empty for deletion to succeed.


In this hypothetical example, you might, at some point delete the file 
and unregister it (keeping the dir present and registered) and later 
re-register and re-create the file. The deletion will still work 
correctly in this case. Or you might want to delete the file and dir and 
unregister them for them to be re-registered and re-created later. If 
such code is present now (modulo de-registering) it probably performs 
the re-registration in the correct order already.


In other words, any code that changes the deletion order with the patch 
but doesn't with current codebase is re-registering the same file (in 
wrong order) relying on the fact that the file is already registered (in 
the correct order). Only such code could break, but the chances are 
great that it won't.




Also, for the patch above, with this sequence of operations

file1.deleteOnExit();
file2.deleteOnExit();
file3.deleteOnExit();
file1.cancelDeleteOnExit(); // file1 is removed from the map
file2.cancelDeleteOnExit(); // file2 is removed from the map
file2.createNewFile();
file1.createNewFile();

then file3 is deleted on exit but file1 and file2 are not, which 
differs from current behavior.



Right, because there is "no current behavior" with the above sequence of 
operations, because there is currently no cancelDeleteOnExit() method. 
So 

Re: The final optimized version of Dual-Pivot Quicksort (ver.19.1)

2019-07-26 Thread Laurent Bourgès
Hi all,

I updated the DPQS patch thanks to the latest release from Vladimir
(2019.07.25):
http://cr.openjdk.java.net/~lbourges/dpqs/dpqs-8226297/dpqs-8226297.1/

"
Updated version of Arrays and DualPivotQuicksort files, the summary of
changes:

1. Corrected minimal threshold for parallelism argument (0 -> 1)
2. Added constant for common parallelism
3. Updated byte / char / short counting sort.
"

Incremental patch change:
http://cr.openjdk.java.net/~lbourges/dpqs/dpqs-8226297/diff_dpqs_1_vs_0.log

Cheers,
Laurent


Le mer. 17 juil. 2019 à 17:12, Laurent Bourgès 
a écrit :

> Hi,
>
> I integrated locally the complete DPQS 19.2 patch from Vladimir
> Yaroslavskiy with up-to-date jdk14 (client) repository:
> build OK and jtreg passed OK (java/util/Arrays).
>
> Here is the corresponding webrev for review:
> http://cr.openjdk.java.net/~lbourges/dpqs/dpqs-8226297/webrev/
>
> Cheers,
> Laurent
>


Re: Benchmarking of new optimized Dual-Pivot Quicksort

2019-07-26 Thread Laurent Bourgès
Hi Vladimir,

First, thank you for these impressive results: 15% to 70% overall gains are
amazing and will make a big difference. Congratulations!

Could you publish the different test environments?
- HW: cpu (lscpu output)
- OS version
- JDK version + JVM settings

Ideally you or I could write a shell script to collect the setup and write a
simple log file.

PS: I suppose you tested DPQS only on Intel CPUs; could somebody test on
PPC or ARM CPUs as well?

Cheers,
Laurent

Le jeu. 25 juil. 2019 à 16:04, Vladimir Yaroslavskiy  a
écrit :

> Hi all,
>
> With the help of Laurent Bourges I run benchmarking of new optimized
> Dual-Pivot Quicksort
> and summarized results. The tests were run for all types (int / long / ...
> / double), for both sequential and parallel sorting, and for small,
> medium and large sizes. The
> comparison was done
> on several data types, such as: random, period, sorted, equal, stagger,
> shuffle.
>
> Here is the summary of total gains. The gain is defined as (jdk_time -
> dpqs_time) / jdk_time,
> where jdk_time is sorting time by the current jdk14 and dpqs_time is
> sorting time by new
> optimized Dual-Pivot Quicksort. To get stable and reproducible results,
> min time of many
> invocations is taken.
>
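
The gain metric and the min-of-many-invocations timing described above can be sketched as follows (illustrative only, not the actual benchmark harness; the sample numbers are hypothetical):

```java
import java.util.Arrays;

public class GainDemo {
    // Gain as defined in the post: (jdk_time - dpqs_time) / jdk_time.
    static double gain(double jdkTime, double dpqsTime) {
        return (jdkTime - dpqsTime) / jdkTime;
    }

    // "Min time of many invocations" as used to stabilize measurements:
    // run the task repeatedly and keep the fastest observed time.
    static long minTimeNanos(Runnable task, int invocations) {
        long min = Long.MAX_VALUE;
        for (int i = 0; i < invocations; i++) {
            long start = System.nanoTime();
            task.run();
            min = Math.min(min, System.nanoTime() - start);
        }
        return min;
    }

    public static void main(String[] args) {
        // Hypothetical numbers: baseline sorts in 100 ms, new DPQS in 72 ms.
        System.out.println(gain(100.0, 72.0)); // prints 0.28

        int[] data = new int[1000];
        long ns = minTimeNanos(() -> Arrays.sort(data.clone()), 10);
        System.out.println(ns >= 0);           // prints true
    }
}
```

Taking the minimum rather than the mean filters out JIT warm-up and scheduling noise, which is why the results above are reproducible across runs.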
> You can find all detailed results there
> https://github.com/iaroslavski/sorting/tree/master/benchmarking/results
>
> Sources of benchmarking tests are there
> https://github.com/iaroslavski/sorting/tree/master/benchmarking/sources
>
> You can see poor performance for float / double types (parallel sorting).
> This behavior can be explained by an existing bug in the JDK, where NaNs
> and -0.0s are not processed properly. The new sorting shows total losses
> of a few percent only for small float / double arrays. For all other
> cases the new optimized sorting is the winner.
>
>
> 
> [int]
>
> sequential, Length = 150, Gain: 0.15
> sequential, Length = 3, Gain: 0.28
> sequential, Length = 100, Gain: 0.31
> parallel 8, Length = 150, Gain: 0.14
> parallel 8, Length = 3, Gain: 0.15
> parallel 8, Length = 100, Gain: 0.14
>
> [long]
> sequential, Length = 150, Gain: 0.12
> sequential, Length = 3, Gain: 0.23
> sequential, Length = 100, Gain: 0.32
> parallel 8, Length = 150, Gain: 0.11
> parallel 8, Length = 3, Gain: 0.16
> parallel 8, Length = 100, Gain: 0.24
> parallel 88 processors, Length = 100, Gain: 0.05
>
> [byte]
> sequential, Length = 150, Gain: 0.06
> sequential, Length = 3, Gain: 0.36
> sequential, Length = 100, Gain: 0.37
> parallel 8, Length = 150, Gain: 0.13
> parallel 8, Length = 3, Gain: 0.73
> parallel 8, Length = 100, Gain: 0.65
>
> [char]
> sequential, Length = 150, Gain: 0.10
> sequential, Length = 3, Gain: 0.00
> sequential, Length = 100, Gain: 0.17
> parallel 8, Length = 150, Gain: 0.10
> parallel 8, Length = 3, Gain: 0.33
> parallel 8, Length = 100, Gain: 0.62
>
> [short]
> sequential, Length = 150, Gain: 0.14
> sequential, Length = 3, Gain: 0.10
> sequential, Length = 100, Gain: 0.18
> parallel 8, Length = 150, Gain: 0.13
> parallel 8, Length = 3, Gain: 0.41
> parallel 8, Length = 100, Gain: 0.65
>
> [float]
> sequential, Length = 150, Gain: -0.02
> sequential, Length = 3, Gain: 0.21
> sequential, Length = 100, Gain: 0.24
> parallel 8, Length = 150, Gain: -0.05
> parallel 8, Length = 3, Gain: -0.04
> parallel 8, Length = 100, Gain: -0.12
>
> [double]
> sequential, Length = 150, Gain: -0.03
> sequential, Length = 3, Gain: 0.19
> sequential, Length = 100, Gain: 0.23
> parallel 8, Length = 150, Gain: -0.05
> parallel 8, Length = 3, Gain: 0.05
> parallel 8, Length = 100, Gain: 0.15
>
> Thank you,
> Vladimir
>